World Cup Women’s Bobsled, November 9 at Lake Placid

This gallery contains 112 photos.

The BMW IBSF World Cup tour may have moved on from Lake Placid to Park City — and indeed from Park City to Whistler, B.C., by the time you see this — but I still have photos from Lake Placid …


World Cup Women’s Skeleton: medals ceremony

This gallery contains 18 photos.

Bet you didn’t think I could get this many blog posts about a single race, did you? I wasn’t expecting this one either, but you can thank some schoolchildren. On the official, published schedule, there is supposed to be a …


World Cup Women’s Skeleton run 2

This gallery contains 77 photos.

As seen in the previous post, I started out run 2 of the BMW IBSF World Cup women’s skeleton at the start of the track. After the second competitor left the start area, I walked down the track to get …


Slideshow: skeleton starts

This gallery contains 60 photos.

The regular gallery format isn’t great for showing sports action photos, but I wanted to show what a skeleton start looks like. These photos were taken during the second heat of the BMW IBSF World Cup women’s skeleton race at …


World Cup Women’s Skeleton run 1

This gallery contains 46 photos.

These photos were taken during the first heat of the women’s skeleton race at the BMW IBSF World Cup at Lake Placid on November 9, 2017. All except the first were taken from the spectator side of the finish area; …


Trip report: BMW IBSF World Cup at Lake Placid

Here’s a different kind of trip report. On Thursday, November 9, I attended the BMW IBSF Bob + Skeleton World Cup races at the Olympic Sports Complex in Lake Placid, New York, site of the 1932 and 1980 Winter Olympics. The conditions were quite chilly on Thursday, but very good for racing on an artificial ice track — according to the TV commentary I watched after the event, it was an amazing turnaround from the previous weekend, when late-season rain washed out the track. I bought tickets for both Thursday and Friday (an excellent deal at $28 total), but when Friday’s weather turned out windy with highs in the low teens Fahrenheit (around −11°C), I bailed on the second day of competition and drove back home. I did see the events I was most interested in seeing, the women’s skeleton and bobsled, as well as the first race of men’s two-man bob. (Because of the track conditions, day 2’s four-man bobsled race was swapped with next week’s scheduled two-man race in Park City, Utah, so there were two two-man races in Lake Placid and two four-man races in Park City. The first race on Friday morning, which I would have gone to see had the weather been a bit nicer, was men’s skeleton.)

There are only four sanctioned bobsled tracks in North America, at each of the last four North American Winter Olympic venues: Lake Placid and Park City in the U.S. and Calgary and Whistler in Canada. Lake Placid is the site of the U.S. Olympic Training Center, and also the only track within reasonable driving distance of my home outside Boston — but the Lake Placid Combined Track, which is the facility currently in use, was built after the 1980 Winter Olympics. (The old track, known as the “1932–1980 track”, is no longer used for competition.) It’s called the “Combined Track” because, unlike the previous track, it’s approved for all three of the Olympic sliding sports: bobsled, skeleton, and luge. (Skeleton was not on the Olympic program in 1980 when the old track was in use.) The Combined Track is kept open for as long as weather conditions allow, which gives many teams and the general public an opportunity to use it after the competitive season is over, meaning that many non-U.S. athletes have experience training at Lake Placid. (And I’m sorely tempted to sign up for the “Skeleton Experience”, which for a modest fee gives ordinary people the opportunity to slide a skeleton sled down the track from Start 4, which is about halfway up the track, when the track is not otherwise in use for training or competitions.)

The sliding sports are biggest in the German-speaking countries, Latvia, and Russia, and nearly all of the on-track advertising erected for the international broadcasts is from German and Russian advertisers, notably the title sponsor BMW, whose logo is seen on all bobsleds, helmets, and skeletoners’ race bibs. Teams also have a variety of national sponsors, and some individual athletes have their own sponsors, especially those from small countries. (The track announcer said that one British bobsleigh driver was told by her federation that she had to raise £30,000 to compete this year — which she raised from individual donors on GoFundMe!) The differences in national support also mean that some teams travel with few or no support personnel; when I post the photos of the bobsled events, you’ll see how some of the poorer teams have nobody at the race finish area to help the athletes pull their sleds off the track.

As I mentioned, there are three sliding sports: bobsled (“bobsleigh” in the Commonwealth countries), skeleton, and luge. Bobsled and skeleton are governed by the International Bobsleigh and Skeleton Federation (IBSF); luge has a different governing body, the Fédération Internationale de Luge (FIL), and thus a separate competition schedule; I won’t have much more to say about it, and don’t plan to go back to Lake Placid for the luge competition in December. The IBSF is one of a number of sports federations that in recent years have changed their names from French to English; while still based in Lausanne, Switzerland, it used to be called the Fédération Internationale de Bobsleigh et de Tobogganing (FIBT), and the change is recent enough that there is still ephemera in circulation with the old name. The IBSF manages four international skeleton circuits, of which the World Cup is the top level, followed by the Inter-Continental Cup, and at the lowest level, the Europa Cup and the North American Cup. Bobsled has three circuits; as a team sport, it requires more competitors from each national program, so there is no intermediate level corresponding to the ICC.

Because 2018 is an Olympic year, the IBSF schedule is compressed, with all of the World Cup events completed before the PyeongChang Olympics begin in mid-February. Like most other top-level international winter sports circuits, the IBSF World Cup awards points to each competitor (skeleton athlete or bobsled driver) on the basis of their standing in each race, with an overall season points winner awarded a “crystal globe”. In a pre-Olympic season, the World Cup points are also used to determine which national teams get to compete at the Olympics, and how many competitors they can enter in each race; many national teams will also use the points total to determine which of their athletes will compete in the Olympics. (In regular years, the World Cup standings also determine the number of entries each national team gets in the following year’s World Cup season, and the position in the starting-order drawing for each individual athlete or driver.) This means that a number of national teams that normally can’t afford to bring their athletes to the North American stops on the World Cup are doing so this year in the hope of earning a place in the Olympics, so there were more competitors than usual in the first heat of each race. (Both bob and skeleton are normally two-heat races, with the start order in heat 2 the reverse of the heat 1 finishing order and the winner determined by total time; only the top 20 finishers in heat 1 make the cut for heat 2 and have the chance to earn World Cup points.)

Lake Placid is a really small town — technically, a “village” in New York law — and 1980 was the last time the Winter Olympics were ever (or will ever be) held in so small a community. While Lake Placid still has all of the sports facilities required to host an Olympics, it is far behind modern requirements for communications and transportation infrastructure, not to mention accommodations for athletes and media, and Adirondack Park is subject to strict limits on development. Lake Placid is a fine place for a competition at this level, however, even if it means that the athletes are staying in the same budget hotels as the spectators. (The Austrian team was in the Quality Inn where I stayed, and not only did I run into them at breakfast, but I saw their technicians working on their sleds in a room in the basement next to the hotel’s laundry. I was told by the hotel manager that the Austrian team comes every year for World Cup, but they also had two smaller teams staying this year.)

I’ll have more to say about the individual races when I post my photos of them. In the mean time, I’ll conclude this part of my report with some pictures of the Lake Placid Combined Track itself. According to the TV announcers, the track saw significant work over the summer of 2017, including improvements to the refrigeration system and construction of a new shelter at the finish area; it also appears that the facility roadways have been improved by adding a new paved road connecting the management offices, finish, and start areas; spectators desiring to climb to the start area do so via the old road, which they share with track maintenance workers and a few athletes warming up. (In general, I was quite surprised by how close athletes and spectators get at this event; you’ll see more of this in later posts.)

Facility offices
Track substructure
Refrigerant pipes
Refrigerant tank

In the photo below, looking down from the end of the run-out towards the finish area, you can see the new shelter (white metal structure). Beyond that is where the track makes a big loop (also painted white). Off in the distance is the visitor parking lot.
New shelter over finish area

Bobsleds and skeleton sleds come into the finish area with an enormous amount of momentum and require a very long run-out after the finish line to come to a stop. In the next two photos, you can see how far uphill the sleds run before stopping, after crossing the finish line at more than 100 km/h. In the second photo, I’m looking very far down towards the bobsled finish line — the skeleton finish is even lower down the hill, if I’m interpreting the TV pictures correctly. Sleds that run into trouble generally don’t make it all the way up the run-out to the finish area; the building on the left beyond the parked cars is just above the bobsled finish.
Looking up the run-out
Looking back down toward the finish line

Now up by the start house, which is divided into “VIP” and “plebe” sides — the VIPs mix with the athletes and race officials on the right-hand side of the track, facing down the track, and the rest of us are on the left side. (However, with the reduced crowds on frigid Friday, I saw from the TV broadcast that the Russian women were standing on the “plebe” side to cheer on their male counterparts who were racing that day, so I’m making it sound more classist than it really is.)
Start area

There are temporary video monitors set up at the start and finish areas so that competitors, coaches, race officials, and spectators can watch (but not hear) the TV broadcast. There’s a separate on-track commentary from a local announcer which is broadcast from speakers mounted all around the track; the local announcer also provides schedule information for competitors, spectators, and maintenance crews.
Athletes' side of start area

The ice at the far end isn’t actually darker; it just looks that way because of the different light conditions heading into the first curve of the track.
Looking down the track toward curve 1
Old road past the start houses

A few pictures of the infrastructure brought in to support the broadcasts. IBSF handles all of the broadcast production for the World Cup events, with English-language commentary and “international” feeds made available to broadcasters; the English feed is streamed live on the IBSF’s YouTube account, where recorded broadcasts are also found, but final runs are geoblocked for US viewers to ensure that we see NBC’s advertising (with some of the competitors edited out for time).
Water for track touch-up
Very chilly tower camera position
Tower camera operator
Camera position at the end of the run-out
Remote camera encoder
Camera mounted over track
Microphone for track sound pickup

I’ll follow up as soon as I’m able with photos of the competitors from each of the events I attended.

Posted in sports, travel

Trip report: Usenix Large Installation System Administration Conference 2017

Hi folks, it’s time for some work- and computing-related stuff. I just recently got back from the 2017 edition of the Usenix Association’s annual system administration conference, LISA’17, which was held in San Francisco. I’ve gone to most LISA conferences since 1998 (when it was in Boston), but this is the first time that it’s actually been held in The City that I can recall, even among the ones I didn’t attend. It was very expensive, with membership discounts for registration having been withdrawn and the conference hotel charging a whopping $239 a night (at the discounted conference rate), not to mention the cost of airfare; my total cost (paid by my employer, thankfully) will be around $3,000 — something I could not possibly afford if I had to pay my own way — and that’s without taking any tutorials. That total only includes a couple of meals, because one major problem with this conference is the excess of “free” food, most of which is quite unhealthy, piled on buffets at every break, meal-time, and vendor BoF (Birds of a Feather) event. I gained five pounds in one week, and because of the time shift I found it nearly impossible to use the hotel gym the entire week. (I did spend nearly all of Tuesday walking around San Francisco, which helped a bit, except that I also went to some of the city’s famous bakeries and chocolatiers — so the conference isn’t entirely to blame for my weight gain.) Most of this excess food is paid for by the vendors, but I’d really have preferred if they could have found something else for the vendors to sponsor, and limited “refreshment” breaks to nothing more than coffee and soda — especially in a city like San Francisco, where getting out of the hotel at mealtimes would have been much more rewarding than in many of the other venue cities.

I’ve seen a number of people, including invited speaker Tanya Reilly and tutorial instructor Tom Limoncelli, post their comments about the program, so I’m going to do the same. In general, my impression is a bit more negative than theirs, and I was left wondering if we actually attended the same conference. However, I’d also note that this conference has become more and more dominated by corporate IT and especially Web startups, whose organizations, problems, and space of feasible solutions (generally starting with “throw money and/or developers at it”) are nothing at all like mine. I’m seriously considering not attending the next LISA in Nashville, given the lack of take-home value this time around, whereas at the last few LISAs I’ve had difficulty deciding between two or three great sessions in nearly every time slot. I said as much in the official after-the-conference survey, but I’m honestly not sure how much the program committee cares at this point, or whether they even see R&E shops like ours as within the target audience of the conference. (Hey, I’m not saying that scalability isn’t cool — but nothing that I do will ever scale higher than n=4.) The weakness of the program was a surprise to me, given that I volunteered for the Content Recruitment Team and actually had a chance to double-blind review many of the submissions; after that process I was quite excited about the program, and I didn’t bother to review what actually made it into the final program before registering.

So anyway, let’s go through the program session by session. The opening plenary was, unusually, divided into two 45-minute slots, and I unfortunately had to race to the bathroom shortly into the first slot so I didn’t really get to see either speaker. Wednesday’s second session was the only one of the entire conference where I seriously had trouble deciding which track to attend; I ended up going to the Mini-Tutorial “Automating System Data Analysis Using R”, taught by Robert Ballance, who covered the same ground in more detail in a half-day tutorial on Monday that I didn’t attend. I felt that the compressed 90-minute format was not a good match for this material; I’ve done some elementary data analysis in R already, and it took well into the second half before he really got into the things that would actually be useful for me at work. This is the sort of thing where I suspect an interactive “lab” or “workshop” format would be much better, with a “bring your own data” element that could actually be more helpful than the synthetic datasets used to present this tutorial. I’ll have to remember to check out the materials for the full-length tutorial to see if there are techniques or packages in there that he didn’t have time for in the mini-tutorial. (Apparently I missed a great talk by attending this tutorial: the commentary on Matt Provost’s “Never Events” talk makes me think I should have gone — waiting for the video to be posted.)

The first conference lunch was held inside the vendor expo, and was effectively paid for by the vendors as an inducement to get the conference attendees to stop by their booths. I noted the absence of a number of vendors this time around: publishers like No Starch and O’Reilly, service providers like PagerDuty and DataDog, and major hardware and software vendors like Dell and Splunk were not to be found. Of the 40 total exhibitors, a quarter were there solely for recruiting purposes, and nearly as many were non-profit organizations there either to raise awareness or (in the case of Princeton University) to recruit sysadmins to participate in a study.

After lunch on Wednesday I stuck with the “Talks II” track for the rest of the day. I thought Silvia Botros’s talk “Working with DBAs in a DevOps World” was interesting enough despite having no take-home value for me (I’m actually the closest thing we have to a DBA, at least insofar as I wrote a number of internal applications that use a database and have strong opinions about how it should be done). The next talk, “Queueing Theory in Practice: Performance Modeling for the Working Engineer”, was also interesting without being especially useful, although the most counterintuitive theoretical result that Eben Freeman introduced was one that I already knew (tail latency goes to hell when utilization goes over 80% in a memoryless single-server model with random arrivals). There was some other good stuff in the talk, about balancing coordination overheads against parallelism. The third talk in the session, Stella Cotton on “Distributed Tracing: From Theory to Practice”, had no plausible applicability to anything I do and I tuned out fairly quickly.
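That latency blow-up falls straight out of the basic M/M/1 formula, T = 1/(μ(1 − ρ)). Here’s a minimal sketch (the function name is mine, and I’m assuming a unit service rate for illustration):

```python
def mm1_mean_residence(service_rate, utilization):
    """Mean time in system for an M/M/1 queue: T = 1 / (mu * (1 - rho))."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return 1.0 / (service_rate * (1.0 - utilization))

# With a unit service rate (mean service time = 1), mean residence time
# is 2x the service time at 50% utilization, 5x at 80%, and 10x at 90%;
# tail latency degrades even faster than the mean.
for rho in (0.5, 0.8, 0.9, 0.95):
    print(f"rho={rho:.2f}  T={mm1_mean_residence(1.0, rho):.1f}")
```

The mean doubles between 50% and 75% utilization and doubles again by 87.5%, which is why the conventional advice is to keep headroom well below full utilization.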

The second PM session on Wednesday was divided into two talks. (This confused me a bit: why were some sessions three half-hour talks and some two 45-minute talks? I found in general that the half-hour talks were too compressed: the speakers spent too much time on the motivation and not nearly enough time on the actual results or engineering they were supposed to be describing, and didn’t leave any time for the Q&A that might have brought out more interesting applications.) I sat through Daniel Barker’s “Becoming a Plumber: Building Deployment Pipelines”, but found it uninteresting and remember little of it. Then Tanya Reilly came up and gave one of the three best talks of the whole conference, “Have You Tried Turning It Off and Turning It On Again?” — which was about engineering services to survive a disaster like a power outage that takes down a whole data center. She pointed out that in a “microservices” world, our “technology stack” can easily degenerate into a “technology pile” unless careful attention is paid to avoiding circular dependencies — especially non-obvious multi-node cycles in the dependency graph. This talk really spoke to me because a big part of my responsibilities at work is specifically maintaining those services that have to be up and working before anyone else’s stuff can run — network, time, authentication, directory, database, and other services that the rest of the infrastructure needs to have in order to start up or in order to be managed by other members of my team. As one of the few people who has been around for every facility power outage going back to 1997, it falls on me in particular to worry about this dependency graph, and what happens when we (for example) virtualize services that might be required to boot the virtualization environment. Her talk also reminded me of a Graydon Saunders blog post from last year that, it turns out, was more detailed in my memory than in its actual text.
(tl;dr: Given modern global supply chains, how many people does it actually take for the global economy to function? Saunders guesses at least a billion.)
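Those multi-node cycles are easy to miss by eye but easy to find mechanically once you write the startup dependencies down. A minimal depth-first-search sketch (the service names and graph here are hypothetical, not anyone’s real infrastructure):

```python
def find_cycle(deps):
    """Return one dependency cycle in deps (service -> list of services it
    needs at startup), or None if the graph is acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {s: WHITE for s in deps}
    stack = []

    def visit(s):
        color[s] = GRAY
        stack.append(s)
        for d in deps.get(s, ()):
            if color.get(d, WHITE) == GRAY:
                # Found a back edge: the cycle is the stack from d onward.
                return stack[stack.index(d):] + [d]
            if color.get(d, WHITE) == WHITE and d in deps:
                cyc = visit(d)
                if cyc:
                    return cyc
        stack.pop()
        color[s] = BLACK
        return None

    for s in deps:
        if color[s] == WHITE:
            cyc = visit(s)
            if cyc:
                return cyc
    return None

# Hypothetical cold-start trap: DNS runs on the VM platform, which needs
# storage, which needs DNS -- exactly the kind of multi-node cycle that
# only shows up when everything is off at once.
deps = {
    "dns": ["hypervisor"],
    "hypervisor": ["storage"],
    "storage": ["dns"],
    "auth": ["dns"],
}
print(find_cycle(deps))  # -> ['dns', 'hypervisor', 'storage', 'dns']
```

Each service is fine in isolation; the cycle only bites on a cold start, which is Reilly’s point about testing the turn-it-back-on path.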

After the last session of the day it was time for dinner — in the expo hall, because, well, “free”, and also there really isn’t sufficient time in the schedule to get together with some people and find a restaurant before the beginning of the BoF sessions, especially in a city like San Francisco where the good restaurants are not all in one place and tend to be busy enough that reservations are advisable. The BoF track was pretty uniformly disappointing this year, with nearly all of the potentially interesting BoFs scheduled for the same two time slots, Wednesday at 7 PM and Thursday at 8. In addition to the OpenZFS BoF, which I attended, there were two different monitoring BoFs — we got kicked out of the room just as the first one was getting past the usual introductions, so I went to the second one on Thursday evening as well. I honestly think having fewer and smaller rooms would have been an improvement, forcing the BoF organizers to spread their slots out more. Of course, there is also the usual problem of the “vendor BoFs” — with more free food and alcohol — sucking the life out of the actual shared-interest BoFs.

Thursday’s plenary was moved to the evening slot, rather than the usual first-thing-in-the-morning schedule. So I started out the morning in a talk by Nina Schiff of Facebook about “Disaggregating the Network”, which illustrated why things that work at Facebook are not really practical for the rest of us (see Corey Quinn’s talk below). It’s a nice idea, to commoditize top-of-rack switching in the data center and replace proprietary switch operating systems with the same Linux stack and configuration management used on the servers in those racks, but it’s not a practical exercise for those of us who don’t build a whole new data center to roll out a new service. After that talk I moved into the other session to learn about “Charliecloud: Unprivileged Containers for User-Defined Software Stacks in HPC”, because our environment, though not an HPC cluster at the scale of Los Alamos, shares a lot of the same use model and many of the same software release management problems as HPC centers have — except that in our case, nearly all of that work is done by graduate students who should be doing something else. Containers in general are supposed to abstract away a lot of those problems by allowing multiple independent (and immutable) software stacks to execute on the same machine. There’s still a pretty long way to go before our environment — with lots of GPUs requiring matching kernel drivers, not to mention storage on AFS — can really take advantage of this.

In the second morning session, I went to Trevor Vaughan’s talk “Operational Compliance: From Requirements to Reality”, and I have it marked in my program as one that I thought was good, but even after reviewing the slides posted online I’m not sure why. This was followed by two people from NEC Labs presenting a rehash of a 2016 paper (from some other venue) about a tool they developed (and which is not available outside NEC) to automatically cluster log messages and use the results to generate log-parsing patterns that will maximize the amount of useful data extracted under conditions of limited processing power per log message. The tool actually generates multiple sets of parsing patterns which vary in their coverage of the input messages; the user must then choose their desired CPU vs. completeness trade-off. The third half-hour talk in this session was Dan O’Boyle of Stack Overflow explaining why you should give all your cryptographic keys to Google or Amazon rather than allowing your operations staff to touch them. (I suppose that way you can claim you didn’t know that the government was reading all your communications because the National Security Letter wasn’t addressed to you.)

Thursday’s lunch was again “free” in the vendor expo. After lunch, I went to Chris McEniry’s mini-tutorial on “The Ins-and-Outs of Networking in the Big Three Clouds”, which was an overview of how client networks work in AWS, Google Cloud, and Azure, with an emphasis on translations between the names used and capabilities implemented by each provider. (Most important lesson: don’t expect broadcast or multicast protocols for things like service discovery or mastership elections to work in cloud providers!) At the 3:30 PM break, I bailed completely, and did not attend the afternoon plenary (a panel on “Attracting and Retaining a Diverse Workforce”), although those who did thought it was both good and too short. Instead, I took the F-Market streetcar all the way to the “wharf” end, in a tacky, touristy area near the terminus of the Powell-Hyde cable-car line, Ghirardelli Square, and Fisherman’s Wharf. After seeing the long lines waiting to ride the cable car, I chose to walk back to the hotel instead — but following the steep hills of the Powell-Mason and California St. cable cars rather than the flat Embarcadero route of the streetcars. I returned in time for the conference reception (more “free” food that I shouldn’t have eaten), which was in the main atrium lobby area of the hotel this year. (In the distant past, they would rent a museum or some other interesting venue, but in recent years it’s been confined to a hotel ballroom, so I can’t really claim to be disappointed.) The reception was followed by more BoF sessions, of which I attended the second monitoring BoF, which went over its scheduled time by a bit, and the “DevOps Poetry Slam”.

I should say a bit more about the monitoring/metrics BoFs: there has been one at every single LISA I’ve ever attended, and it’s quite clear that there is still a great deal of unhappiness with the solutions different organizations have adopted. Whether the complaint is resource demands, the cost of third-party software and outsourced monitoring services, or the difficulty of building dashboards that actually collect all the business-relevant metrics, there’s still a lot of work to be done. We’re not especially happy with our setup either, but we have exactly zero budget in either money or personnel for the sort of solutions that might have a chance of making us happy. Some day, one of us will take it on, and then when that person leaves it will fall into disrepair.

Friday began with a talk by Maarten Van Horenbeeck, “An Internet of Governments: How Policymakers Became Interested in ‘Cyber’”, which was exactly what it says on the tin. Following that was a talk by Evan Gilman and Doug Barth about work they had done when both were at PagerDuty. The title of their talk was “Clarifying Zero Trust: The Model, the Philosophy, the Ethos”, but I felt like that oversold the content somewhat. It was interesting to see the specific problem that they were trying to solve, how it related to their business requirement to operate across multiple public cloud providers, and their choice to use IPsec and packet filters to enforce security policies rather than the VPN offerings of each cloud provider (which would harm availability by creating single points of failure in each availability zone). I have it noted on my program that I didn’t much care for the presentation, but I’m not sure why. (The Zero Trust model is a very attractive one for us — indeed, many of the fundamental ideas were developed at MIT in the 1980s — but fails to meet many of our users’ needs or threat models.)

For Friday’s second morning session, I started out with Peter Lega’s talk “DevOps in Regulatory Spaces: It’s only 25% What You Thought It Was”, which was basically about how you convince compliance people in a regulated industry that modern software development methodologies really can address the risks that regulators are most concerned about — in Merck’s case, by integrating the compliance documentation and procedures with the development process, reducing the adversarial relationship between developers (“move fast and break things”) and regulators (“first do no harm”). I then switched sessions to see Corey Quinn’s talk, “‘Don’t You Know Who I Am?!’ The Danger of Celebrity in Tech”, which was excellent and exactly on-point to many of the concerns I’ve had with LISA programs in the past. Quinn made the point that most of the world is not Google, Facebook, Netflix, or Twitter, and that organizations that aren’t anything like those companies should think carefully before adopting the technology or the methodologies that those companies use. Quinn gave the example of a bank IT director watching a talk about Netflix’s “Simian Army” and wanting to take that approach back home — the methodology that’s appropriate for a company that doesn’t actually do anything important (sorry, Netflixers) may not be something you want people doing when people’s lives (or money) are at stake. Quinn also lit into the other side of the problem, people who work for those companies and use their resumes to shut down discussion of techniques that actually would be appropriate for someone who isn’t working at one of the top five Web origins. I strongly urge anyone to watch this talk when the video is released by Usenix in the coming weeks. (Yes, I got up and asked a question.)

After yet another “free” lunch, the final regular session of talks started with Ben Hartshorne talking about various techniques for sampling application trace data, with a particular emphasis on using business requirements to determine appropriate sampling rates for different events (e.g., sampling important clients at a higher rate, but sampling successful transactions at a lower rate than errors). I didn’t find this talk especially interesting, but I wanted to get a good seat for David Blank-Edelman’s talk, “Where’s the Kaboom? There Was Supposed to Be an Earth-Shattering Kaboom!” I didn’t read the abstract of his talk ahead of time, so I honestly had no idea what it was about — it turned out to be about lessons we can take from the demolition industry. David delivered the talk with his typical flair, but I did think it was less interesting (and entertaining) than several other talks on similar themes he has done at past LISAs, some of which rated a plenary slot.
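The business-driven sampling idea is simple enough to sketch in a few lines: pick a rate per event from its attributes, and record the rate on each kept event so downstream counts can be re-weighted to remain unbiased. All the tiers and thresholds here are hypothetical, not anything from the talk:

```python
import random

def sample_rate(event):
    """Pick a sampling rate from business attributes (all hypothetical):
    keep every error, 1-in-10 events from key customers, and 1-in-100 of
    the routine successes."""
    if event["status"] >= 500:
        return 1
    if event.get("customer_tier") == "enterprise":
        return 10
    return 100

def maybe_keep(event, rng=random.random):
    """Keep the event with probability 1/rate; tag kept events with the
    rate so each one can stand in for `rate` events in aggregates."""
    rate = sample_rate(event)
    if rng() < 1.0 / rate:
        return dict(event, sample_rate=rate)
    return None

# Errors are always kept; most routine successes are dropped.
print(maybe_keep({"status": 503}))
print(maybe_keep({"status": 200, "customer_tier": "enterprise"}))
```

A count of requests then becomes the sum of `sample_rate` over the kept events, which is what keeps the cheap sampling from skewing the business metrics.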

After another refreshment break (finally, the “free” food was bags of mass-produced snack foods, which I had no trouble resisting), the conference closed with a plenary address by Jon Kuroda of UC Berkeley. Jon avers that this exact talk was rejected in 2013, but he decided to resubmit it this time around, and the PC was so excited by it that they asked him to speak for 90 minutes rather than just 45. He went into the history of several modern engineering fields, including space flight, nuclear energy, and (the specific focus of this talk) commercial aviation: along with computing, all date to the early post-WW2 period, but unlike computing, all have developed strong protocols for reasoning about and ultimately ensuring safety. (Medical technology and pharmaceuticals, too, date from this period or just slightly later, and have very strong safety cultures now.) Jon went through a number of well-known commercial aircraft accidents, and identified how the operator’s corporate safety culture made these incidents either better or worse than they might have been, exploring what lessons we should bring into our industry as computing systems are more and more involved in making decisions that can cause serious individual harm.

I returned home on Saturday morning, but it’s taken me until now to actually write this. Later today, I’ll be off to Lake Placid to watch some athletic young people in skinsuits hurtle down a mountain face-first while balanced atop a tea tray — which is how I had enough time this evening/morning to finally write this summary.

Posted in Computing, travel | Tagged ,

Recipe quick takes: Claire Ptak’s Egg-Yolk Chocolate-Chip Cookies

Well, that backlog I mentioned in the last post is still there, but I made something this weekend, and since I didn’t actually take many photos, I figured I might as well write about it while it’s still reasonably within reach of memory.

I’m a sucker for a new recipe for chocolate-chip cookies (or brownies — and I did the rye brownie recipe from the same source a few months ago). This past week I received a big order (more than 6 kg) from Chocosphere, all various Valrhona products, mostly repackaged from their commercial bakery line (you can buy most of these products in 2 kg or 3 kg packages, which was way more commitment than I was interested in, but Chocosphere repackages them into home-baker-friendly 1 kg bags), including their 60% cacao chocolate chips (“chips noires”). By US standards you’d probably call them “mini chips” — they aren’t as big as the standard Nestlé/Hershey/Ghirardelli/Guittard cookie chips — but I figured I might as well find a cookie recipe to try them in, and ran across the “Egg Yolk Chocolate Chip Cookies” in Claire Ptak’s The Violet Bakery Cookbook (Ten Speed Press, 2015; p. 140). She attributes the basic idea — of changing the texture of the cookie by using only egg yolks instead of whole eggs — to French pastry chef Pierre Hermé.

I don’t have a photo of the mise en place here. The ingredients are the ones you’d expect, but this recipe is formulated for “even” proportions in metric measures: 250 g softened unsalted butter, 200 g light brown sugar, 100 g granulated sugar, ½ tsp vanilla extract, 3 egg yolks, 325 g all-purpose flour (I’d bet you could use pastry flour here, since the recipe was probably developed for soft wheat), 1¼ tsp kosher salt, ¾ tsp baking soda, and 250 g dark chocolate chips (“or broken-up bar of your favorite chocolate” — not many chocolates come in 250-gram bars!). The recipe proceeds by a modified creaming method: ingredients are combined in the normal order, in a stand mixer, but only until just mixed; Ptak specifically cautions “you are not aiming for light and fluffy here”. The recipe yields about 1175 g of dough, so for the stated yield of 16 cookies, that’s 70–75 g of dough per cookie.
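If you want to sanity-check that yield figure, the tally is quick to do. (The ~18 g per yolk is my assumption for a typical large-egg yolk; the teaspoon-measured ingredients add negligible mass.)

```python
# Mass tally for the recipe above; yolk weight is an assumed ~18 g each.
masses = {
    "butter": 250,
    "light brown sugar": 200,
    "granulated sugar": 100,
    "egg yolks (3 x ~18 g)": 54,
    "all-purpose flour": 325,
    "chocolate chips": 250,
}
total = sum(masses.values())
print(f"{total} g of dough, {total / 16:.0f} g per cookie")
# → 1179 g of dough, 74 g per cookie
```

That lands comfortably inside the 70–75 g per-cookie range quoted above.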

The second departure from the usual method is that the dough is portioned and frozen completely before baking. I found experimentally that using a #20 disher — the most common size, which if you only have one disher is probably the one you think of as “the ice-cream scoop with the sweeper thing” — and leveling off the open end against the side of the bowl comes to nearly exactly 70 g (a few were as low as 65 and some others were as high as 80 but I’m picky). These are placed in a parchment-lined baking pan — spacing doesn’t matter — covered in plastic wrap, and frozen for at least an hour (I left mine in overnight). To bake, the solid balls are placed on a parchment-lined cookie sheet, spaced widely, and allowed to thaw slightly (5–10 minutes) before baking in a preheated 355°F (180°C) oven for 18–20 minutes. (I don’t know who did that unthinking Fahrenheit conversion: most people don’t have oven controls graduated that finely, and even for those of us who do, oven temperatures cycle within a range of 20 F° or more anyway. I usually convert 350°F as 175°C but 180°C isn’t materially different. Still, my oven has a digital display so I set it to 355° rather than 350°.)

Cookies, baked and unbaked, on cookie sheets
The photo above shows one sheet of fully baked cookies on the left, and another sheet of still-frozen dough balls on the right. (I ate one of the cookies the previous night, before it was frozen, which is why there’s one missing — it was the last of the batch and a bit runty at only 55 g; probably could have gotten ten more grams for a nearly-full serving with less sampling.)

Single cookie on a plate
The cookies were allowed to cool completely on a wire rack. Above is a close-up view of a single cookie on a plate.

Cookie broken in two to show texture
I broke the sample cookie in two to demonstrate the depth and texture. You can see that this recipe is a good match for “mini” chips like the Valrhona ones I used: larger chips would have been less evenly distributed through the dough, but in this style every bite of cookie has some chocolate in it. (Click on the image to see the full resolution: you can see how, even 10 hours after baking, the interior of the cookie is still visibly moist. These cookies are very buttery!)

Overall, these cookies have an extremely tender mouth feel, very different from the crisp texture of the Default Recipe, but still quite enjoyable — they’re almost like a shortbread with chocolate chips, but moister. The flavor has none of the toffee notes of the Default Recipe, either, belying its much higher ratio of brown to white sugar, which must be a result of the much lower baking temperature (both oven and dough temperature). Starting from frozen dough definitely keeps the cookies from spreading quite so much, giving them a good thickness — I make the Default Recipe with 50-gram portions, and these are about the same diameter despite having half again as much dough. Highly recommended. I’m not sure I’ll share any of these with my work colleagues.


In the absence of nutrition data for the Valrhona chips, I substituted the readily available Ghirardelli 60% dark chocolate baking chips for this computation, although they are of the larger “standard American chocolate chip” size.

Nutrition Facts
Serving size: 1 cookie (70 g dough, uncooked weight)
Servings per recipe: 16
Amount per serving
Calories 351 (Calories from fat 171)
% Daily Value
Total Fat 19 g 30%
 Saturated Fat 12 g 59%
 Trans Fat 0 g
Cholesterol 68 mg 19%
Sodium 145 mg 6%
Total Carbohydrate 42 g 14%
 Dietary fiber 2 g 7%
 Sugars 26 g
Protein 4 g 9%
Vitamin A 10%
Vitamin C 0%
Calcium 0%
Iron 12%
Posted in Food | Tagged , , ,

Still here

I have worked up quite a backlog of posts that need to get written. I was hoping to make some progress on that this weekend, but that didn’t pan out. Neither did the 55-mile bike ride I was planning on doing. (It’s hard to do stuff in the morning if you don’t wake up in the morning.) I got distracted by Jean Yang’s blog post “The Genius Fallacy“, and felt like I ought to respond in some way to it (or to the things it reminded me of), but in the end I couldn’t figure out what actual point I wanted to make, so that didn’t happen either. (That’s how most of my projects founder, to be honest: they start with an idea, or more often a snippet of imagined dialogue, but don’t manage to develop enough to actually be worth sitting down at the keyboard and turning into writing.)

Here’s the other stuff on my agenda:

  • A short write-up on a brownie recipe I didn’t much care for
  • An essay on some questions related to gender, inspired in part by the Worldcon 75 program
  • A few photo collections from a day and a half of post-Worldcon walking around Helsinki
  • Another recipe write-up on some chocolate-chip cookies I haven’t actually made yet
Aside | Posted on by | Tagged

More musings on commuter rail

Earlier this week, I tweeted this:

There are two, largely independent backstories to this tweet. The first is that I’m going back to Helsinki next week to attend the 75th World Science Fiction Convention, and when I was in Helsinki last March and April, I was inspired to write a whole lot on that city’s excellent transit system (see post 1 and post 2). The second reason is the current Commonwealth Ave. overpass reconstruction project in Boston, which was projected to have some deleterious effects on my commute, and which made me take a more serious look at the possibility of taking commuter rail into work — at least for the duration of the construction. I ultimately decided that paying $22.50 a day plus an extra hour and a half of my time was not worth it, given that my car commute is out of peak hours and costs about the same when you factor in parking, tolls, and fuel. But that made me think about the state’s current level of (dis)investment in public transportation infrastructure and what it would take to get me out of the car, on those days when schedule or weather don’t allow for a bike commute (which is more than half the year). I concluded that commuter rail would have to offer sufficiently frequent service, even at the hours I work, and get me from Framingham to Kendall Square in less than 45 minutes — which is not as good as my car commute, 35 minutes parking space to parking space, but is at least in the same ballpark, and if implemented properly would be significantly less variable.

How could you do that, given that the current Framingham-to-South Station run is scheduled to take 49 minutes, and then there’s the Red Line beyond that? The answer, as it turns out, is pretty simple: Electric Multiple Units, or EMUs — a standard passenger rail technology throughout the world, which (when combined with the appropriate investments in track, overhead electrification, and high-level platforms at stations) can significantly reduce travel times by accelerating much faster than conventional locomotive-hauled trains, especially the diesel locomotives currently used throughout the MBTA commuter rail network. Helsinki has such a system (actually the only commuter-rail network in Finland — the rest of the country isn’t dense enough to support it), which clearly demonstrates that a cold climate in a maritime city is no obstacle to successful implementation. Helsinki’s system provides service on multiple lines from the central business district to the airport — a distance similar to my commute — every fifteen minutes. Helsinki uses a customized cold-weather version of the Stadler FLIRT for most of their services, and I know that a number of US transit agencies have ordered FLIRT equipment for their own commuter rail services, so I looked up the performance details and sat down with a simplified line diagram and a calculator to figure out what that service would look like.

The FLIRT is typically configured for a maximum speed of 160 km/h (99 mi/h). At a typical acceleration of 1.02 m/s/s (depending on configuration, this can vary from 0.8 to 1.2 m/s/s), reaching that speed takes 43.5 seconds and about six tenths of a mile. (Actually, I chose that acceleration value to make it work out to exactly 0.6 mile or 965 m!) I’m assuming that the entire Framingham–Worcester line is rated for 99 mi/h. (It’s not, but remember, we’re what-ifing an investment in better service, and that would involve electrification, trackbed improvements, new platforms, and possibly some grade-crossing improvements or eliminations.) I also assume that there’s a “terminal zone” between South Station and the future West Station where speeds are limited by interlockings (junctions with other lines and switching into South Station). I assume that the train can accelerate and decelerate at the same rate, and that this would be done in practice (probably not) just because it makes the math come out easier. Finally, I assume an average dwell time at each station of 30 seconds — since I don’t take the commuter rail right now, I don’t know whether that’s optimistic or pessimistic.
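For anyone who wants to check those kinematics, the arithmetic is a few lines of constant-acceleration physics; the top speed and acceleration are the figures quoted above, and the results land within rounding of the 43.5 s and 965 m in the text:

```python
# Back-of-the-envelope check of the FLIRT acceleration figures quoted above.
V_MAX_KMH = 160.0  # configured top speed
ACCEL = 1.02       # m/s/s, the acceleration value chosen in the text

v = V_MAX_KMH / 3.6           # top speed in m/s (~44.4 m/s)
t_accel = v / ACCEL           # seconds to reach top speed
d_accel = v**2 / (2 * ACCEL)  # meters covered while accelerating

print(f"{t_accel:.1f} s to top speed, {d_accel:.0f} m ({d_accel / 1609.344:.2f} mi)")
```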

So what does this schedule look like? Well, consider, for comparison’s sake, the current MBTA train 552, which leaves Worcester Union Station at 8:00 AM and arrives at South Station at 9:06, for a scheduled travel time of one hour and six minutes. This train runs express from Worcester to Yawkey, so it has only two station stops aside from the termini — and it creates a huge gap in the schedule for everyone else, because the Framingham–Worcester Line is only two tracks and there’s no way for an express to pass a local train making an intermediate station stop. Now compare that with the following schedule, making all station stops:

Worcester 8:00
Grafton 8:04
Westborough 8:06
Southborough 8:11
Ashland 8:13
Framingham 8:16
West Natick 8:18
Natick Center 8:20
Wellesley Square 8:22
Wellesley Hills 8:24
Wellesley Farms 8:26
Auburndale 8:28
West Newton 8:29
Newtonville 8:30
Boston Landing 8:33
(West Station) 8:34
Yawkey 8:36
Back Bay 8:38
South Station 8:41

Change ends at South Station and the same trainset leaves for Worcester at 9:00. What’s more, you can start a second trainset at Framingham, also at 8:00, and it gets to South Station at 8:27, so it can become the 8:45 outbound. (In the future, of course, you’ve also converted the Grand Junction branch and it gets Framingham residents a one-seat ride to Kendall in 25 minutes!) Repeat the same pattern every half hour from 6 AM to 11 PM, and you’ve made an enormous improvement in regional mobility and given thousands of people a practical reason to get out of their cars and onto the train. It takes, I think, four trainsets to run this service, not counting spares shared with other lines.
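The per-segment running-time model behind that timetable can be sketched in a few lines. The speed profile (accelerate, cruise if the segment is long enough, brake) and the 30-second dwell are the assumptions stated earlier; the 3 km example spacing is purely illustrative, not real track data:

```python
# Sketch of the stop-to-stop running-time model used for the schedule above.
def segment_time(dist_m, v_max=160 / 3.6, accel=1.02, dwell=30.0):
    """Seconds for a stop-to-stop run of dist_m meters, accelerating and
    braking at `accel` m/s/s, capped at `v_max` m/s, plus station dwell."""
    d_ramp = v_max**2 / (2 * accel)  # distance needed to reach v_max
    if dist_m >= 2 * d_ramp:
        # Trapezoidal profile: accelerate, cruise, brake.
        run = 2 * (v_max / accel) + (dist_m - 2 * d_ramp) / v_max
    else:
        # Triangular profile: the train never reaches v_max.
        run = 2 * (dist_m / accel) ** 0.5
    return run + dwell

# e.g. a hypothetical 3 km station spacing:
print(f"{segment_time(3000) / 60:.1f} min")  # roughly 2.4 minutes
```

Summing a value like that over each pair of adjacent stations (with real distances substituted in) is how you get station-to-station spacings on the order of the two-to-four-minute gaps in the timetable above.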

Well, it was a good dream, anyway. We all know that something this useful has absolutely no chance of ever making it through the MBTA bureaucracy or Beacon Hill. Numbers available on request if you want to check my math.

Posted in Transportation | Tagged , | 9 Comments