In which I waste an entire weekend modeling one line of Regional Rail in AM peak

A draft of the MBTA’s “Focus 40” long-term plan was released on July 30, and I spent a little while reading it (it’s rather thin) and sending comments (both officially and on Twitter). If you’ve been reading this blog lately, you’ll know that I’m a strong supporter of Regional Rail, an initiative from the local advocacy group TransitMatters, and of course part of my comments concerned necessary investments in rail infrastructure:

One of the concerns about Regional Rail is how you replicate the existing peak capacity with electric multiple-unit (EMU) trainsets, given that the current diesel-hauled commuter rail runs quite long trains of bilevel commuter coaches. The Framingham/Worcester Line currently has peak loads of 1600 passengers, which is far more than a single-level, easy-boarding EMU can hold, but of course you can couple trainsets together, if the trains and platforms are long enough. The number of seats you can deliver on a Regional Rail line is ultimately determined by two factors: platform length and service pattern. The service pattern is profoundly influenced by storage capacity, since you’re not going to be able (or want) to run peak service all throughout the day. So I asked some people who would know what the shortest high platform currently on the MBTA Commuter Rail system is — figuring that, if they built high platforms in the first place, there was probably a constructibility constraint that kept some stations’ platforms from being as long as the rest. That would then bound how long a consist of EMU trainsets you could operate in peak service, which would in turn allow me to search for service patterns able to meet that demand.

I learned a few things: first, that the MBTA’s standard high-platform length is 800 feet (as compared to Amtrak’s 1050 feet). Second, that the shortest full-high platforms in the system are the 400-foot platforms at Malden Center and Oak Grove — these were originally built for Orange Line service, and when the Orange Line extension to Reading was cancelled, they remained in use as commuter rail platforms. Finally, and more relevant to my particular inquiry, Yawkey Station has high platforms that are less than 650 feet long. Since the MBTA decided to have regular trains stop at Yawkey (it previously served only baseball games), it has become a popular stop, with bus transfers to Longwood and Cambridge, and the Green Line nearby in Kenmore Square; more than 400 people get off there during a typical weekday morning (and not all trains stop at Yawkey). That suggests that a Framingham-Worcester service requiring more than about 650 feet of platform is going to run into operational issues: either the trailing trainset(s) won’t be able to platform, or you’ll have to add short turns or local/express patterns to meet the demand with shorter trains. On a two-track main line like the Framingham-Worcester Line, expresses cause scheduling problems, because there are no passing tracks, but short turns are OK so long as there’s a yard or siding where the train can reverse without fouling the main line — and this becomes more and more important as frequencies get higher.

In order to think about service patterns, I first had to come up with some reliable passenger counts for the existing commuter rail service. Unfortunately, that is quite tricky: the most recent data available was collected by CTPS in the winter of 2012, which predates, among other things, the opening of the popular new Boston Landing station and the new super-express trains serving Worcester (and making quite a mess of the schedule for everyone else). There are a lot of reasons to think that Regional Rail would not need any express services, at least on the Framingham-Worcester Line, because the Regional Rail local is already so much faster. Based on modeling done by Alon Levy in a study of the North-South Rail Link, with proper high-level platforms at all stations, electrification, and full electric multiple-unit trains — the Regional Rail program in a nutshell — a Boston-Worcester train would take only 61 minutes while making all local stops, including several new or restored infill stops that currently have no service at all (also part of the Regional Rail program). By contrast, the current inbound local trains, like P512, are scheduled to take 94 minutes from Worcester to Boston, the regular express takes 80 minutes, and the superexpress takes 66 minutes — so even a local Regional Rail train would beat the fastest current express (while serving many more passengers).

For the rest of this post, I’m going to be using the 2012 CTPS data — but keep in mind that the peak 2012 loading on any Framingham-Worcester Line train was 1200 passengers, and now that number has increased to 1600; you may want to add 30% to all of the numbers below. (That said, one of the other numbers I don’t have is total boardings: it’s possible that the schedule clearing required to make the superexpress happen has simply concentrated existing demand in a smaller time window, because people still have to get to work at 9:00.)

The question then becomes how to model travel demand, given the passenger numbers. I’m no transportation planner; I haven’t studied this, and I don’t know how the professionals actually model these things, but I came up with a really simple way to do it. First, we assume that all travel demand is for people from outlying towns to get into the city. To a first approximation, this is true: in the current service, very few people get off at any of the intermediate stops in Newton, Wellesley, or Natick. One of the goals of Regional Rail is to change this, and encourage trips like Wellesley Square to West Newton to be made by rail instead of by car, but for service planning purposes we’ll start by looking at what people are actually doing today. We’ll model Boston as a point destination at South Station, the end of the line, although in reality about half the passengers get off at Back Bay and Yawkey; since all the schedules we model stop at all Boston stations (including the as-yet unbuilt West Station), we’ll treat these destinations as a unit. This allows us to consider only net boardings (which are nearly equal to boardings at all out-of-Boston stations) and makes the model especially simple.

So this is what I did: consider every boarding on the 2012 service as an indication of travel demand to arrive before the next train’s scheduled arrival at South Station (because otherwise they would take a later train). To be specific, for every boarding, we sample a uniform distribution of desired arrival times over [this train’s scheduled arrival, next train’s scheduled arrival). This model is clearly too simple: the distribution is probably not uniform (it should probably be something like a truncated exponential), and some passengers would undoubtedly choose to arrive earlier than the train they’re currently taking, but can’t because the next earlier train leaves too early. But at least this model, while simplistic, is not crazy stupid. So I wrote a bunch of really bad R code to take as input the CTPS data (manually rekeyed from the PDF into a spreadsheet which was then exported as a CSV file for R to ingest) and then, for each station, generate a vector of simulated desired arrival times. Then, for any given schedule (or at least, in the simple case I solved, for any schedule of 100% local trains with no crossing freights or other track fouling) we can compute the number of boardings to expect at each station, and thereby the cumulative loading at the Boston city line.
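
For the curious, the core of the model is small enough to sketch here. This is not the code in the repository; the data frame and column names below are made up for illustration. It shows the two steps: sample one desired arrival time per 2012 boarding, then assign each simulated passenger to the latest train in a candidate schedule that still arrives by their desired time.

```r
# Illustrative sketch only; the repository code differs, and the column
# names here (arr, next_arr, riders) are assumptions. Times are minutes
# past midnight; `boardings` has one row per station per 2012 train.
simulate_desired_arrivals <- function(boardings) {
  unlist(mapply(function(arr, next_arr, riders) {
    # One desired arrival per passenger, uniform over [arr, next_arr).
    runif(riders, min = arr, max = next_arr)
  }, boardings$arr, boardings$next_arr, boardings$riders, SIMPLIFY = FALSE))
}

# Assign each simulated passenger to the latest candidate train arriving at
# or before their desired time; demand earlier than the first arrival gets
# lumped onto the first train (a modeling choice, not something in the data).
loads_for_schedule <- function(desired, arrivals) {
  idx <- findInterval(desired, sort(arrivals))
  idx[idx == 0] <- 1
  table(factor(idx, levels = seq_along(arrivals)))
}
```

To get station-level boardings rather than just the total load at the Boston city line, you would carry the station label along with each simulated passenger.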

Once you know the cumulative load at each station, you can calculate the number of EMU trainsets required to serve that number of passengers, based on your preferred model of EMU. I used the JKOY class Sm5 EMU, used in Helsinki’s commuter network; it’s a broad-gauge version of the Stadler FLIRT, and in the middle of the size options Stadler offers for this product line. (The Sm5 is 75 meters or about 246 feet long, and carries 232 fixed seats and 28 folding seats including wheelchair bays; I’m rounding to 250 feet and 250 passengers since any Boston EMU will not be this exact configuration. The diesel FLIRT being constructed for TEXRail is 266 feet long but with a much higher seating density, rated capacity 488 passengers; with the same seating layout you could probably get the same capacity in a 250-foot EMU, but with slower boarding and less convenience for passengers with wheeled bags or mobility aids.) Given these numbers, you could carry 700 passengers in comfort on a three-trainset Sm5-equivalent, or over 900 passengers in less comfort on a two-trainset TEXRail-type train. I started doing this modeling before I found the capacity numbers for TEXRail’s FLIRT3 DMUs, so all of the rest of this analysis is based on the Sm5’s capacity. (That said, if you use my analysis but buy Fort Worth’s seating layout, then you’ve effectively accounted for a more than 50% increase in ridership over the 2012 numbers. My bias, however, would be towards more boarding doors and bike/wheelchair/stroller space.)
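
To make the capacity arithmetic explicit, here is the trivial helper I have in mind; it is a sketch using the rounded numbers above, not anything from the repository.

```r
# Seats per trainset: ~250 for an Sm5-equivalent, ~488 at TEXRail density.
trainsets_required <- function(peak_load, seats_per_trainset = 250) {
  ceiling(peak_load / seats_per_trainset)
}

trainsets_required(700)        # 3 Sm5-equivalents: the "in comfort" case
trainsets_required(900, 488)   # 2 trainsets at TEXRail seating density
trainsets_required(1600)       # today's single peak diesel load would need 7,
                               # hence spreading it over more frequent departures
```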

Now finally to get to the point. I modeled three different scenarios, all with 100% local trains serving all stops between Worcester and Boston, with four, five, and six trains per hour. The 4-tph service pattern requires a minimum of five trains of 3 trainsets each (peak loadings 541–643 passengers); the 5-tph service pattern requires a minimum of two 3-trainset trains (peak loadings 515 and 541), but for operational reasons you’d probably run five triples on that service as well. That causes a problem, because overnight storage at Worcester is limited to four 800-foot consists, so you end up having to make up three-trainset consists in Boston and send them out empty before the morning rush, to be turned around in Worcester and used to make inbound peak service. The extra peak-load trainsets being used in non-peak (or worse, non-revenue) service significantly reduce the equipment utilization compared to a service pattern that uses no more than a two-trainset consist. For what it’s worth, the Helsinki service that the Sm5 was specified for uses no more than that — but they have sufficient passing tracks to run multiple express patterns in addition to their regular 2-tph local service. For my 6-tph scenario, with 30 trains arriving between 6 and 11 AM, the peak load is 464 passengers, for the 9:00 scheduled arrival, which makes sense — there’s another peak (450) that probably reflects people starting work at 8:30.

Having concluded that 6 tph was the best all-local, all-stops schedule, I went on to actually figure out the equipment and storage required to make this service operate. I assume a minimum 15-minute turnaround time (which works out to 19 minutes at South Station), which requires at least two platform tracks. (Possibly you can make it work with as few as that, but the equipment plan does entail making and breaking consists on the platform, in addition to reversing direction, which means doing additional checks before boarding outbound passengers.) Overall, the service requires 26 trainsets, of which 12 can be stored overnight in Worcester (more would be better) and 14 in Boston (at Southampton or Readville). At midday, 10 trainsets would be stored in Boston and 1 on the platform at Worcester, with the remaining 15 either in service or being turned at one of the terminals. (Note that the capacity at Worcester is for four 800-foot consists, and likewise 8 consists at Southampton and 12 consists at Readville. Three 250-foot trainsets of EMUs — or indeed three 266-foot trainsets of Fort Worth DMUs — will fit in 800 feet; whether the slack can be used for anything depends on the layout of the yard tracks being used for storage.)
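
As a sanity check on that fleet size, the standard back-of-the-envelope formula (trains in simultaneous operation equals cycle time divided by headway) gives a consistent answer using the 61-minute running time and the turnaround times above:

```r
# Rough cross-check only; the actual equipment schedule in the spreadsheet
# accounts for consist lengths changing over the day.
run_min     <- 61        # Worcester-Boston, all local stops
turn_boston <- 19        # effective turn at South Station
turn_worc   <- 15        # minimum turn at Worcester
headway_min <- 60 / 6    # 6 tph

cycle_min <- run_min + turn_boston + run_min + turn_worc
ceiling(cycle_min / headway_min)   # 16 trains cycling at once; 26 trainsets
                                   # because some run as multi-unit consists
```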

If you look at the simulated boardings, there are no trains that leave Framingham with more than 270 passengers (one trainset’s worth). This suggests a different service pattern, which I haven’t written the code to model but which I suspect makes a lot of sense: instead of running 6 tph all the way from Worcester, run 3 tph from Worcester interspersed with 3 tph from Framingham. Then, you can store trainsets at one of the three Framingham yards (which would have to be wired to make this work), or even implement my idea to upgrade the Agricultural Branch as far as Crossing Blvd. and build a station there with a tail track to store Framingham-terminating trains. I think if you do this, you actually save a substantial number of trainsets (maybe as many as six), but of course the outer towns don’t get as much service in that scenario, which will cause higher loading on the interior segments of those runs. You’ll also notice that there are three early-morning runs in my schedule that have no boardings shown; there’s every reason to believe that people would use these early-morning trains, but 6:30 was the earliest arrival in the 2012 schedule that CTPS audited, so my model doesn’t predict any demand before then.

The crux of my argument to MassDOT and the MBTA is that, with looming procurement projects to replace the ancient and outdated commuter rail rolling stock, it makes sense to commit to building high platforms everywhere now, so that we can then purchase exclusively high-floor multiple-unit rolling stock like the FLIRT3 (which is assembled in Utah and thus eligible for “Buy America”-restricted funding) — specifically with the goal of acquiring a modular system that uses the same components for both electric and diesel-powered trainsets. We wouldn’t get the full benefits of electrification right away — for one thing, the prime mover in the diesel FLIRT only outputs half the power an EMU can draw from the overhead wire, so it accelerates much more slowly — but we would be ready to switch over as quickly as lines can be electrified, starting with the Providence Line which already is electrified aside from a few terminal and yard tracks. We should say “yes, we are going to do this, and we are not going to blow billions of dollars on obsolete unmotorized passenger cars and diesel locomotives.”

Code and data

All of the code and data used for this model is available in my commuter_rail_simulation repository on GitHub. A printable PDF version of a spreadsheet with the results is available in the repo as well (direct download).

The contents of the spreadsheet are as follows:

  1. The predicted schedule of a local train leaving Worcester, based on Alon Levy’s modeling but with most of Alon’s infill stations zeroed out. Note that he adds a 7% pad to the scheduled arrival time at each station, which adds up to just over four minutes of pad in the arrival time at South Station.
  2. Simulated passenger loading, 4 tph
  3. Simulated passenger loading, 5 tph
  4. Simulated passenger loading, 6 tph; I’ve added columns with terminal departure and arrival times as well as the number of trainsets required for peak loading on each run
  5. An actual equipment schedule for the 6 tph service, based on the simulation
  6. Predicted schedule of an express train leaving Worcester, again based on Alon’s numbers but with the stations the current P504 skips zeroed out; note that it gains 9 minutes over the local, which would cause problems for 6-tph operations.
  7. Predicted schedule of a superexpress train leaving Worcester, adding stops at Framingham and Boston Landing over the current P552 schedule; it gains 15 minutes over the local, which would cause problems for any of the scenarios modeled, and is not worth emulating or even modeling, given that the local under Regional Rail is faster at any time of day than this twice-daily express is today.
  8. CTPS passenger loads, from the 2012 study, for Framingham/Worcester trains departing in the morning
  9. Net passenger boarding, from the same study
Posted in Transportation | Tagged | 1 Comment

Long time no bake

Earlier this year, I committed to myself that I would not repeat a recipe that I had previously done, as a way of getting out of what felt like a rut. But since then, I have baked almost nothing, and indeed I haven’t done much complicated cooking either. Part of that is because my dishwasher has been malfunctioning since the winter, and I’ve been lazy about getting it fixed or replaced, but a bigger part of it has been that I’ve been quite frankly getting rather fat. Nothing discourages the creation of tasty baked goods quite like stepping on the scale and being unable to comprehend why the number is so much higher than it was three months (or worse, three years) ago. That said, I do have one new baking experience to share, although I have less to say about it than I usually do.

Expecting bad weather all this week, I decided to make a whole-wheat sandwich bread. I had printed out a recipe by Andrew Janjigian from Cook’s Illustrated‘s March/April 2011 issue (on the web, $); the same recipe was also featured on America’s Test Kitchen TV S12E13, “Soup and Bread from Scratch”. (I’m not sure why I printed this recipe out, since I own every issue of the magazine going back to 2006, but it was handy to have as a reminder — otherwise I would probably have just gotten my usual supermarket whole-wheat bread.)

It was an interesting experience, to say the least. The recipe requires three separate wheat ingredients: bread flour, whole-wheat flour, and wheat germ, and dirties a rather large number of bowls: the bread flour is used to make a traditional pre-ferment, but the whole-wheat flour and wheat germ are combined with milk, kneaded, and allowed to soak overnight — then the whole mass is combined together the following day with large quantities of butter, yeast, and salt to form a very wet, sticky dough, which must be kneaded by machine, proofed, kneaded some more, proofed some more, divided in half, shaped (while still extremely sticky), bench proofed, and finally baked off in a steam-filled oven on top of a baking stone. The online version of the recipe text did not even hint at how wet and sticky the dough would be, although I suspect the television version (which I would have seen in 2012) would have done so. I ended up adding about an extra half-cup of flour just to make the dough manageable — fresh out of the first mixing stage it was practically a batter. With all that fat (there’s also some veg oil), flour, and sugar (honey) it makes a very soft and very high-calorie bread, probably better suited to French toast than the sort of sandwiches I can afford to eat when the weather doesn’t allow for bike commuting. I also found it difficult to divide the dough evenly — should have used my scale! — so I ended up with one loaf pan overflowing and one rather undersized after the bench proof.

Overall? It’s good enough, but just comparing the amount of work required to proof, handle, shape, and clean up after this recipe, it’s not enough better than my own whole-wheat bread recipe to justify the extra effort. (On the other hand, the wheat germ is perishable so I may end up making it again a few times, cutting it in half to reduce the bowl-cleaning effort.) And because this bread is so soft, portion control (without a commercial slicing machine) is a complete nightmare. For this reason I’m not providing nutrition information.

Posted in Food | Tagged , ,

Recipe quick takes: sandwich bread and slow-roasted pork chops

In accordance with my pledge from earlier this year, I made two new recipes recently, a whole-wheat sandwich bread I printed out ages ago from King Arthur Flour, and the “deviled” pork chops from next month’s issue of Cook’s Illustrated.

First the bread. The recipe, entitled “Organic Whole Wheat Sandwich Bread”, is one of a number I printed out several years ago (copyright 2007!) when I was developing my own whole-wheat sandwich bread. It’s no longer available on their Web site, so far as I can tell, and my printout doesn’t have a URL I can look up in the Wayback Machine, but the formula is very similar to one titled “A Smaller 100% Whole Wheat Pain de Mie”, though not baked in a lidded loaf pan, and with more fat. So far as I know, I had never done this recipe before, and it has some good and bad points. On the good side, it’s very soft; the added fat, milk powder, and potato starch all combine to ensure that. On the bad side, it’s very soft, and tears easily when slicing or attempting to spread peanut butter or jam. It’s also quite high-calorie: two thin (½ in or 12 mm) slices add up to 275 kcal (minus a little bit for whatever carbs the yeast ate), compared with similar-sized commercial whole-wheat breads which tip the scale at 220 kcal. On the positive side, with all that carbohydrate it toasts very well, and would probably make a good whole-wheat pain perdu or Texas toast. I probably wouldn’t make it again.

The second is the pork chops. This comes from the “May & June 2018” issue of Cook’s Illustrated (pp. 10–11), and I think it’s the first thing I’ve made from the magazine since Christopher Kimball’s partners fired him as editor-in-chief. I actually didn’t make the magazine version, but rather “Deviled Pork Chops for Two”, an online-only extra based on the four-serving magazine recipe. This was quite simple to do, as it merely involves toasting some panko in melted butter, making a flavorful seasoning paste, and using the latter to glue the former to some pork chops. I found while doing this that I had mistakenly defrosted a pair of strip steaks rather than the pork loin chops I thought I had, but luckily, my quarterly meat delivery had brought me some pork sirloin chops that I could speed-defrost in the microwave, and this recipe calls for the sort of low-and-slow cooking that pork sirloin requires. (Unlike the loin, pork “sirloin” is composed of a few different muscles, and does not respond well to fast, high-heat cooking methods like sautéing.) The recipe is simple enough that I did not bother to enter it as a “recipe” in my nutrition app; I just recorded the pork, mustard, panko, and butter (the four highest-calorie ingredients) individually. Recommended.

Posted in Food | Tagged , , , ,

Post-vacation status update

Last year around this time I went to the World Figure Skating Championships in Helsinki and generated a whole bunch of posts about it. I went to Worlds again this year, in Milan, and bookended that trip with train travel in Switzerland. This time, I was accompanied by my parents (who, now both retired and with the big house sold, have more freedom to travel) — which worked out reasonably well, but meant that I wasn’t burning the candle at both ends and couldn’t slam through the photo editing to get some blog posts out during the actual competition. (In all honesty I would rather have been accompanied by one person, for the whole length of the trip, but since that person has yet to be identified, the ‘rents will do, and having someone else worry about the arrangements in Italy made the whole trip a bit less stressful.) The arena in Milan was somewhat inconveniently located relative to our lodging, a short-term private rental apartment, and my impression overall is that Helsinki 2017 was far better organized in addition to being more conveniently located and having better transit access. I finally got back home late on Tuesday evening, and I’ve been spending the last few days digging out from the accumulated backlog resulting from a 13-day vacation. (The folks being retired, they got together with my mother’s older sister and her husband, and are spending an extra week in Italy.)

The practical upshot of this is that I still have about 6,000 photos to edit down somehow into something more like 250, and this process will take a while — starting with working through my shooting logs and hopefully correctly identifying all the skaters this time — but over the month of April you should see photos appearing both here and at Wikimedia Commons where appropriate. Anyone interested in accompanying me to other international sporting events is welcome to apply. ;-)

PS: I’m already planning on not going to the 2019 Worlds in Saitama, but 2020 will be in Montreal, which is well within driving distance for me. Last year I also went to Worldcon, again in Helsinki, but this year’s Worldcon is in San Jose and I’m inclined to skip it as well. (Worldcon 2019 will be in Dublin and Diane Duane is GoH and I’ve already bought a full membership; 2020 isn’t decided yet but will likely be in New Zealand.) Other sporting events I’m considering, besides the usual baseball and hockey arenas, are the IBSF World Cup stop in Lake Placid, maybe some ISU Grand Prix skating events, and perhaps the 2020 Winter Youth Olympic Games in Lausanne — all of this is very speculative right now and might not come to anything.

Posted in Administrivia, sports, travel | Tagged ,

My comments on passenger rail infrastructure to MassDOT

The comment period for the State Rail Plan ends on Friday, the advocacy group TransitMatters just released their report on Regional Rail, and MassDOT is currently conducting two separate planning exercises related to the MBTA and the commuter rail system in particular, in preparation for the next tender for the commuter rail operations contract (currently held by the French firm Keolis). Yesterday evening I sent in my own comments, inspired by the State Rail Plan deadline, but most of what I had to say was outside the State Rail Plan’s scope, so I also sent it to the people responsible for the MBTA planning process. Here’s what I said, edited slightly for formatting.


I was originally going to send this in regard to the State Rail Plan, since the public comment window was recently extended, but on closer review it seems that most of my comments are more usefully directed at the MBTA-specific planning process, since I live in the MBTA district. However, my points 1 and 2 below are intended to reference all passenger rail corridors in the state, not just the MBTA service area, and in particular the Commonwealth should explore opportunities for cooperation with neighboring states and with Amtrak to investigate the application of these principles to the Connecticut River Line and to future Boston-Springfield intercity passenger service.

Unexpectedly, much of what I have to say has been preempted by the release of a report by the advocacy group TransitMatters, which you will have seen already (for the record, “Regional Rail for Metropolitan Boston” is the report to which I refer). However, I will make some additional comments on subjects that are not addressed in the TransitMatters report.

I have lived in Framingham for 17 years, and for that entire time, I have commuted in a single-occupancy vehicle on the Massachusetts Turnpike to my job in Cambridge. I would prefer to have an alternative that does not involve driving, but the current MBTA commuter rail service is infrequent, slow, unreliable, and more expensive per marginal trip than my commute. During the summer months I will bicycle to work (on approximately 40 good-weather weekdays between May and September); a better commuter rail service with real provision for bicycle users (not limited to off-peak hours) would substantially increase the number of days a bike commute is practical by enabling bike+train round trips.

To put more precise numbers on it, I pay (employer-subsidized) $10 a day to park in Cambridge, and my shoulder-hours SOV commute (10:25 AM and 7:15 PM) takes approximately 35 minutes parking space to parking space. The current MBTA Framingham/Worcester Line service has a long gap in service after 9:30 AM that makes it impractical for my schedule, but even if I shifted my schedule earlier to take train #512 inbound, the actual time cost of the MBTA service (with the necessity of driving to the Framingham station, finding and paying for parking, the train ride to Boston, the transfer penalty, the subway or bus trip to Cambridge, and then walking to my office) would be well more than double my current car commute. (My bicycle commute, 20.8 miles via two different routes, takes approximately 85 minutes, or about as long as the current commuter rail service, at an average speed of 15 mph — but with much greater health benefits.)

I would be willing to consider commuter rail — indeed, I would strongly prefer it — but for the excessive travel time (which is of course compounded by the system’s current widely reported unreliability). A reliable travel time of not more than 70±10 minutes would easily be within consideration, and with properly optimized schedules and full construction of West Station, such a service would be highly attractive for many commuters from the Metro-West area who currently drive to jobs in Cambridge or Boston. I have heard anecdotally that the Commonwealth currently considers demand for access from Metro-West to jobs in Cambridge negligible to the point of not being worth studying, and I would strongly encourage the planning staff to consider this commuting pattern more seriously, as rising housing costs have made living closer to work impractical for many people who would prefer a transit option.

In the spring of 2017, I took a vacation in Helsinki, Finland, where I had occasion to use the rail system there extensively. The rail network around Helsinki, like Boston, is based on a stub-end terminal station (they only have one, unlike Boston’s two, and it’s correspondingly larger in terms of footprint than is possible in congested downtown Boston). However, Helsinki’s regional transport administration, HSL, has implemented an urban and inter-suburban rail network in the “regional rail” style described by the TransitMatters report, with full fare integration and high frequencies, connecting Helsinki Central Station with both historic suburban and exurban downtowns and new neighborhoods of transit-oriented development. HSL also maintains fare integration with intercity passenger rail services that serve nearby metropolitan areas outside the HSL district, so riders within the region can take a suburban train or an unreserved regional train, whichever is more convenient — this should be a model for intercity passenger service in Massachusetts along corridors such as Boston-Worcester-Springfield, which might be operated by a different agency or contractor than the MBTA.

Metropolitan Helsinki has about a third the population of the Boston MSA and is also slightly less dense; it has only one heavy rail subway line, and for surface transit has only street-running tramways, ferries, and private-tender bus services. The population of the whole of Finland is about that of the Boston MSA and is smaller than the Boston-Providence CSA, and Finland has quite high levels of suburban development and car ownership relative to other European countries. Yet Helsinki sustains substantial investment and substantial ridership in its fast, frequent, reliable, and affordable commuter rail system. I wrote a series of blog posts about it when I returned from my trip, which you can refer to here (fares and network structure) and here (suburban rail). Note that the services described in both of those articles have been realigned and in a few cases significantly expanded since I wrote those posts last April.

My specific recommendations, which are generally in accord with those in the TransitMatters report:

  1. The Commonwealth should adopt as a matter of policy a preference for electrification and high-level platforms on all rail routes currently served or contemplated to be served by passenger trains. In some cases this may require additional state investment to maintain compatibility with freight services.
  2. All projects and studies inconsistent with point (1) should be terminated.
  3. In the Boston region, a priority should be placed on electrification of the South Side commuter rail, improving operating costs, schedule reliability, and environmental justice. As funding becomes available, investment should shift to the North Side lines, which will require more infrastructure to be constructed.
  4. Where possible, labor agreements should be sought that limit excess staffing in exchange for acceleration and simultaneous construction of projects along multiple lines, maximizing useful employment of skilled trades.
  5. As TransitMatters notes, the electrification of the Providence Line is nearly complete and should proceed forthwith, as should electrification of the Stoughton and Fairmount Lines, with the existing diesel locomotives and rolling stock shifted to reduce maintenance pressures on other lines.
  6. Although the North-South Rail Link would significantly improve regional connectivity and the overall utility of the rail network, construction of NSRL is by no means a prerequisite to implementing electrification, high-level platforms, and frequent all-day schedules, and these should proceed at the highest priority, given the current capital expenditures which would otherwise be required even to preserve the existing diesel infrastructure, whether or not a funding mechanism for NSRL can be identified.
  7. The Commonwealth should in particular be prepared to self-fund the entire acquisition cost of electric-multiple-unit trainsets in order to buy global best-of-class equipment at competitive market prices, unless the federal government commits to waiving Buy American provisions. Federal capital funding, if available, could still be pursued for track, platform, station accessibility, overhead wire, and substation construction.
  8. Full build of West Station and development of connecting routes to Cambridge (whether bus, light rail, or a shuttle via the Grand Junction branch) and Longwood Medical Area should be accelerated relative to current plans.
  9. With respect to Framingham in particular, in order to support high-frequency service between Framingham and Boston it will probably be necessary to have some trains turn or lay over at Framingham. The Commonwealth should study, in conjunction with the City of Framingham and MWRTA, the potential benefits of extending trains along the Agricultural Branch to Framingham State University and possibly to the office-industrial park area at Route 9 and Crossing Blvd., where there is already a park-and-ride lot and numerous employers that could be served by a reverse-commute service.

You can see more related content in this blog’s category “Transportation” (links below or to the right depending on your screen layout).

Posted in Transportation | Tagged , | 1 Comment

What’s Wrong with Metcalfe’s Law?

In a recent Medium post derived from a talk he gave at a private, invitation-only event for the IT industry, Dan Hon presents one view of Metcalfe’s Law, the theory espoused by Ethernet inventor Bob Metcalfe that “the value of a telecommunications system increases as the square of the number of participants”. Hon looks at the (no pun intended) value judgment embedded in talking about the “value” of a network, and considers purely market-oriented measures lacking.

I’d like to step back a bit and look at it from a different angle. Instead of “value”, let’s consider “utility”: what benefit arises to the users from their use of the network? Metcalfe’s claim can be restated simply: the global utility (sum over all users) of a network is quadratic in the number of users. You don’t even need graph theory to prove that this is trivially true, if you accept what I take to be Metcalfe’s presuppositions: first, that utility sums linearly over all users (a view which would be understandable to Jeremy Bentham), and second, that each user’s utility is linear in the number of other users on the network.

The real problem with Metcalfe’s Law, as I see it, is precisely in this second presupposition. While it is true to a first approximation for small networks, once the network reaches a sufficient penetration of that community with which any individual user has an interest in communicating, the marginal utility of additional communications partners diminishes quite rapidly, and ultimately goes negative. We see this even with old technologies like the telephone network: nearly all of the value I get from the telephone derives from being able to communicate with family, friends, and current and potential employers, vendors, and service providers in my immediate vicinity. While connecting a billion people in India or China to the rest of the world is laudable, there cannot be more than a thousand of them that make the telephone network more valuable to me. (One thing that this analysis does not consider, and a more sophisticated analysis would, is economies of scale: do those billions of users actually make it easier or cheaper to provide me with the service that I value? To be left for another day.)
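
One crude way to make that saturation point precise (my assumption, not anything Hon or Metcalfe wrote): suppose each user's utility from the network is capped at some personal ceiling u_max once everyone they actually care to reach is connected. Then the total is at most linear, not quadratic:

```latex
\[
  V(n) \;=\; \sum_{i=1}^{n} u_i(n) \;\le\; n\,u_{\max} \;=\; O(n).
\]
```

And if the marginal utility of additional users eventually goes negative, V(n) can grow more slowly still, or even decline.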

In the social network case, it’s clear how additional users can have negative marginal utility: the additional noise generated can drown out the intended communication (whether that noise is trolls, pile-ons, or just way too many well-meaning people making the same comment in a reply). Twitter is a great demonstration of this; users bearing the vaunted “blue checkmark” — a distinction given out entirely at Twitter’s discretion to a small subset of users, mostly celebrities, journalists, government officials, and corporate marketing departments — are given a variety of tools to screen out communications from the masses. One of the tools which is frequently employed by these “verified” users screens out all notifications from the remaining users, allowing them to give the appearance of using the platform to communicate with others while in actuality paying attention only to a small number of similarly privileged people. This screening was not part of the original Twitter service: it was only deployed after Twitter gained a sufficiently large and noisy user community that it was driving away users Twitter actually had a business reason to want to retain. Of course, even “old tech” had to come up with similar mechanisms: when telephone calls became cheap enough that scammers were willing to spam a thousand people at dinnertime in the hope of finding a single mark, caller ID became a necessity and more and more people began to screen their calls. (Compare also the Eternal September.)

In conclusion: Metcalfe’s Law is wrong because the marginal utility to the existing users of a communications network is not constant: while it is large and positive for small networks, as networks grow beyond the scale of normal human social circles, the utility drops off quite rapidly, and eventually goes negative. When you sum this function over all users, unlike the linear per-user utility posited by Metcalfe, the overall value does not scale as the square of the number of users. (It might not even be asymptotically linear — I leave that analysis to someone with better mathematical chops.)

Posted in Computing, Law & Society | Tagged

Clarifying one particular gender conversation

This post has been percolating in my head since Worldcon 75 in Helsinki last August. My initial idea was quite a bit more ambitious — I have a note here which reads “Gender: cause or effect?” — but what was going to be the introductory section is probably the only part of it that I have something reasonable to write about.

Gender was, not surprisingly, an important thread in the conversation at Worldcon 75. There was even a panel (which I didn’t manage to attend) talking about how you deal with it in languages that don’t “have” gender, like Finnish, Turkish, and Chinese. But that made me want to write a little bit to try to clarify this discussion, because I think the words we use to talk about this particular aspect often make things more confusing rather than less. So this post is going to explore two questions: What do we mean when we say “gender” in the context of language, and what does it mean to say that a language “has” or “doesn’t have” it?

I should point out that I am coming at this from the perspective of an interested amateur, not a professional linguist by any means — but an amateur who has at least had the experience of trying to learn both French and Finnish. So don’t take any of what follows as gospel, but rather, a jumping-off point for further research if you’re interested in that sort of thing.

So what is “gender”, anyway? In linguistics, “gender” is a specialized form of what is more generally called “noun classification” — it’s just a historic fact that some languages (most but not all of them) divide their nouns up into categories. We generally reserve the term “gender” to refer specifically to those noun-classification systems that align more or less with the binary (masculine-feminine) or ternary (masculine-feminine-neuter) systems seen in Indo-European and Semitic languages (like English, Greek, and Hebrew), as opposed to those with a larger number of categories like the Bantu languages of Africa. It’s important to distinguish gender as a grammatical category from gender as a semantic category: because the “gender binary” is a near-universal part of human experience, all languages have words with semantic gender, words like “man”, “woman”, “father”, “daughter”, and so on (although not all languages have the same set — French distinguishes between male and female cousins, for example, whereas English does not). But even in languages with very strong grammatical gender, it’s by no means given that this will align with the semantic gender — as witness German, where many words that are semantically female (or at least feminine) are grammatically neuter or sometimes even masculine. (Historical linguists tell us that this is because the Indo-European three-gender system had collapsed to two genders before Germanic languages re-developed the modern neuter.)

So what then does it mean to say that a language “has” or “doesn’t have” grammatical gender (or indeed noun classification)? Grammar, roughly speaking, is how words fit together to form phrases and other multi-word structures, and also about how words refer to other words in context. For gender to be part of a language’s grammar, it must have some observable consequence on which words, or word forms, are allowed together in a sentence, or can be used to refer to the same thing. The most relevant property to look at is what English teachers usually call “agreement”, and linguists often call “concord”: the property that words that refer to the same thing must all come from the same class or be otherwise marked in the same way. English makes these considerations much less clear, because English has only the fractured remnants of its historic three-gender system, observable only in pronoun agreement, and not universally even then. But the Romance languages — those descended from Latin, like French, Spanish, and Romanian — all have a robust two-gender system (masculine and feminine) with mandatory concord for pronouns, determiners, adjectives, and participles. Semitic languages go one better: verbs agree in gender with their subjects. Unlike in English, Romance languages have gendered third-person plural pronouns: a group of portes (doors) in French are elles, but a group of stylos (pens), or indeed a group of mixed-gender objects, are ils.

Because of how English historically developed, acquiring pronouns from Old Norse and losing most of its inflectional system as England was invaded alternately from the north and from the south, we have no gender agreement for adjectives or articles any more (except, for a very few writers and the editors of The New Yorker, a very small set of adjectives borrowed from French: naïf/naïve, blond/blonde, brunet/brunette being the principal ones). English does continue to have two forms of gender concord for pronouns: the third-person singular he/she/it, which do not precisely correspond with the historic genders used in Old English or West Germanic, and a simple sentient/non-sentient system seen in the interrogative pronouns who/what and the relative pronouns who/which. (I don’t include “singular they” here because it acts grammatically identical to the third-person plural in all other respects — compare the much earlier “singular you”, which also takes a plural verb form.)

So what about those putative “genderless” languages? The only one that I have any direct knowledge of is Finnish, but I understand that all of the Uralic languages are the same in the most important way: there is no gender concord for adjectives or participles. These languages have no articles, so there is nothing to agree with there. But with pronouns it gets a bit more interesting. Finnish arguably has a two-gender system for personal pronouns: the sentient hän (singular)/he (plural), and the non-sentient se/ne. But (and it’s a big “but”), in regular spoken conversational Finnish (as opposed to newscaster or teach-to-confused-foreign-teenagers Finnish) these two categories are collapsed — to the “non-sentient” se/ne. I never learned the language well enough to express complex structures, but I suspect that there may be similar behavior in some of the relative pronouns. Away from Finnish, I know that there are languages that don’t have pronouns at all, but I don’t know how that set intersects with other means of marking gender or noun classification.

Another interesting part of this conversation here, albeit one that I’m not all that well prepared to discuss, is the question of languages with mandatory gender marking for names. As English users, we are accustomed to the idea that a personal name is just an arbitrary user-chosen token, and might at least in theory refer to any gender. Indeed, numerous names are gender-neutral or have, in living memory, actually changed their default gender. (“Robin” is perhaps the poster child here: previously a diminutive form of “Robert”, today most Robins are female and not a diminutive for anything.) That said, we are still familiar with gender-marked names, whether it’s “Alexander”/”Alexandra” (sharing the gender-neutral hypocoristics “Alex” and “Sandy”!) or “Robert”/”Roberta”. Numerous other pairs of names exist in the repertoire used by English speakers to name their children and their fictional characters. Some other cultures take this to an extreme, however: most or all names in Slavic languages, for example, are gender-marked — both given and family names, not to mention patronymics. Similarly, the patro/matronymics used in Icelandic names have mandatory gender marking, because (as with the Slavic patronymics) they contain an element that means either “son” or “daughter”, regardless of whether they use the father’s or mother’s name as the base.

One of the reasons this specifically came up at Worldcon 75, aside from the panel that I mentioned, is that because Finnish doesn’t have masculine or feminine nouns or pronouns, Finns sometimes have difficulty remembering the correct forms to use when speaking in English or other languages that do make such a distinction. This doesn’t mean that they are confused about the semantic gender of people (they can certainly distinguish miehet and naiset, after all), but rather, that the association of semantic and grammatical gender is weaker when speaking in a second (or third) language when one’s ambient tongue doesn’t make the same distinction. The Worldcon program included a note explaining this and asking attendees to be understanding if their hosts chose the wrong pronoun. (Which is perhaps the best case of all for those badge flags given out at cons indicating the holder’s desired pronoun.) It’s especially an issue for invented or nonce pronouns: it’s probably unreasonable to expect anyone other than the in-group of native speakers who adopted them to actually use or even make sense of them. Those who use such pronouns should take care to check their privilege (as speakers of a global hegemonic language) when dealing with non-native speakers, especially those whose native pronoun system doesn’t correspond to the English one.

Posted in Language | Tagged , ,

Overdue recipe report: Luisa Weiss’s Christbrot

This gallery contains 11 photos.

This is the third of three recipes I did for the holiday season from Luisa Weiss’s Classic German Baking (Ten Speed Press, 2016). Weiss recounts how she felt that she had to include a recipe for Dresdner Christstollen, the classic … Continue reading

Gallery | Tagged , , , ,

Overdue recipe report: Luisa Weiss’s Mohntorte

This gallery contains 12 photos.

Technically, mohntorte — a wheat-free sponge-type poppy seed cake — is not a Christmas specialty, but I included it along with the two actual Christmas recipes from Luisa Weiss’s Classic German Baking (Ten Speed Press, 2016; pp. 126–7) that I prepared … Continue reading

Gallery | Tagged , , , , , ,

Overdue recipe report: Luisa Weiss’s Pfeffernüsse

I decided that my holiday baking this year would be three recipes from Luisa Weiss’s Classic German Baking (Ten Speed Press, 2016) — after all, many of our holiday traditions, and a good portion of American baking more generally, come from German immigrants, which accounts in part for the wide differences between US and UK Christmas food traditions. (And, of course, my family name also comes from German immigrants!) Among the most classic German-American traditions at Christmas are pfeffernüsse, small spice cookies (often confused in appearance, but not flavor, with Russian/Italian/Mexican tea cakes/wedding cookies) flavored with ground pepper and gingerbread spice (the latter is called lebkuchengewürz in German and is a common prepared mixture of ground spices used in many recipes). I remember my father making cookies like these when I was little, probably using the recipe from the 1976 edition of Joy of Cooking. Rombauer’s Joy recipe is quite different from the recipe in Weiss’s book (pp. 228–30), and I suspect it reflects a more “Americanized” version. Among the most notable differences, Rombauer uses molasses, black pepper, baking powder, brandy, and dry confectioner’s sugar where Weiss uses honey, white pepper, ammonium carbonate (“baker’s ammonia”), rum, and a lemon-sugar glaze, respectively. The most significant difference is that the Joy recipe contains butter whereas the only fat in Weiss’s recipe comes from the egg; Joy also uses candied citron and chopped almonds, flavoring ingredients absent from Weiss’s pfeffernüsse. (By the way, in this post I will use English capitalization since I’m writing in English, but I will follow the German morphology for number, so it’s one pfeffernuss and two pfeffernüsse.) Weiss says that her recipe comes from Offenbach — like many traditional European recipes, there is a lot of regional variation — and it’s entirely possible that Rombauer’s recipe is equally authentic but from a different part of Germany.

Before starting to make the pfeffernüsse, it was necessary to gather the requisite ingredients for the lebkuchengewürz — since, unlike in Germany, it’s not something sold in grocery stores. The recipe is quite simple, and makes far more than is required for one or even two batches of pfeffernüsse: 30 g of ground cinnamon (I used true cinnamon and not cassia), 1½ tbl of ground cloves, 1 tsp of ground allspice, 1 tsp ground cardamom, 1 tsp ground ginger, 1 tsp ground mace (I’ve never seen a recipe use that much mace), and ¾ tsp ground aniseed. That last ingredient was a bit of a challenge to find — it’s one of those spices that isn’t stocked in my local Whole Foods, but luckily the conventional supermarket nearby does have it (it still took a bit of searching the shelves) so I didn’t have to mail-order. All these spices are simply mixed well and sealed up in a container.

Mise en place
Now for the pfeffernüsse proper, starting with the usual mise en place. Starting from bottom left, you can see the lebkuchengewürz, of which this recipe uses only two teaspoons. Next to that is ⅛ tsp of freshly ground white pepper, and then a teaspoon of ammonium carbonate (next to the bottle). The dry ingredients — 310 g flour, 1 tsp cocoa powder, and ¼ tsp salt, have already been mixed together. In the saucepan at upper right is 160 g of honey and 75 g of sugar, and hiding behind the pan is a single egg. There should have been 1 tablespoon of slightly warmed rum in that bowl in the center, but I mistakenly used my half-tablespoon measure instead. Oops.

Honey-sugar syrup
The recipe begins with heating the honey-sugar mixture on the stovetop until it’s hot enough to fully dissolve the sugar. While the mixture is still hot, the spices are stirred into it (this helps to “bloom” the spices and bring out their volatile flavors). Meanwhile, the ammonium carbonate is mixed with the rum, causing it to fizz a bit, and hopefully dissolve. (I found that even when I used the correct amount of rum, it wasn’t quite enough to fully dissolve the ammonium carbonate.) It will fizz quite a bit, losing some of the leavening power in the process (since there’s nothing to capture the gas at this point). Ammonium carbonate — (NH4)2CO3 — was once a popular leavening agent, since it decomposes entirely into gas (ammonia, carbon dioxide, and water) leaving nothing behind, unlike baking powder, which adds sodium or other metal ions like aluminum depending on the specific chemistry. The downside of ammonium carbonate is that half of the leavening power comes from gaseous ammonia, which is not a scent you want to have in your food, so it’s only usable in applications where you can be sure that all the ammonia will be driven off in the baking process.

After adding ammonium carbonate to hot syrup
When the rum-leavening mixture is added to the still-warm honey-sugar-spice mixture, a great deal more gas is evolved, giving the baker a strong whiff of ammonia. After stirring to fully incorporate, the resulting mixture is stirred or kneaded into the dry ingredients along with the egg to form a sticky dough.

Dough mostly combined
At this point, the dough is formed into small balls and deposited on parchment-lined baking sheets. Weiss calls for one-inch (25 mm) dough balls, for a yield of 48 cookies; when I used a #100 (⅝ oz) disher, I only got 31 — I was so surprised by this that I actually did the entire recipe a second time, and had the exact same yield. Because there’s no fat in this recipe, the cookies do not spread in baking, and they can be spaced quite close — I had no trouble putting 20 dough balls on a standard cookie sheet, and if you manage to get the 48 that Weiss calls for, there’s no reason you couldn’t fit them all on two baking sheets.

Fully baked dough balls
The pfeffernüsse are baked in a 375°F (190°C) oven for ten minutes — since there’s very little moisture in them to start with, it doesn’t take long, and you definitely don’t want them to completely dry out. While the first batch is baking, a simple lemon-sugar glaze is made from 100 g confectioner’s sugar, 1 tbl fresh lemon juice, and 1 tbl water.

Cookies after brushing with lemon glaze
Using a pastry brush, the glaze is applied to the cookies while they are still hot. Just to keep the kitchen clean, it helps to put the cooling rack over a sheet pan to catch the excess glaze.

Glazed cookies cooling and draining
You should probably do a better job than I did of evenly covering the surface of the cookies. Once cool, the pfeffernüsse should be sealed in an airtight container for a minimum of several days to ripen — as the moisture from the glaze is absorbed by the cookies proper, the dry, crisp crumb becomes soft and chewy.

As I mentioned above, I ended up making two batches of these cookies, which left me with enough to half-fill a Christmas cookie tin that I already had. I had enough (about 60) to eat quite a few of them myself, give a lot away at the family Christmas party, give more away at Christmas dinner, and still bring a dozen into the office after Christmas where almost nobody was around to keep me from eating the rest.

Nutrition

The nutrition details below reflect my yield for this recipe and not Weiss’s. If you can actually get four dozen pfeffernüsse, you’ll have to cut these numbers by about a third.

Nutrition Facts
Serving size: 3 cookies
Servings per recipe: 10
Amount per serving
Calories 240 Calories from fat 4
% Daily Value
Total Fat <1g 1%
 Saturated Fat <1g 1%
 Trans Fat 0g
Cholesterol 19mg 6%
Sodium 69mg 3%
Total Carbohydrate 55g 18%
 Dietary fiber 1g 5%
 Sugars 30g
Protein 3g 6%
Vitamin A 1%
Vitamin C 1%
Calcium 0%
Iron 3%
Posted in Food | Tagged , , ,