Seattle Transit Service Hasn't Gotten Worse, It's Just Different (And Probably Better)

UPDATE: I found out just before publishing this that Sightline had already written a similar post this morning, but there's a little divergence in our conclusions and our focus, so hopefully this adds something to the discussion.

ORIGINAL:

A new report from Transit Score is out (from the people behind Walk Score), and the news isn't good for Seattle. Since 2012, Seattle's Transit Score has dropped from 59 to 57, the largest drop of any city in the top 25 (though Seattle's #7 rank was unchanged). One of the biggest changes to service over the past two years has been the rollout of the sorta-BRT RapidRide service, so naturally, the Seattle Times is asking: Has RapidRide helped or hurt Seattle bus service?

It depends on what you value. If you've read anything from Jarrett Walker's blog Human Transit (and if you're reading this, you probably have), you're aware that transit agencies have many choices about how they serve the public, and that they have to balance coverage against ridership. (Walker has a good summary of the issue here.) At one extreme, you can provide infrequent service across the entire service area (maximum coverage); at the other, you can pour 100% of your resources into the highest-ridership routes available (maximum ridership/efficiency). In practice, every transit agency falls somewhere between these extremes, and Seattle is no different.

In the case of RapidRide, the goal was to emulate some of the characteristics of bus rapid transit, i.e., greater stop spacing, off-board fare payment, dedicated bus lanes, and increased frequency. RapidRide was implemented in high-(bus)-traffic corridors to get the most bang for the buck, both in terms of ridership and congestion mitigation. Investing in the buses and physical infrastructure of these routes costs money, so some low-usage routes were eliminated to pay for it (hailed by some, scorned by others). In other words, King County Metro made a decision to increase efficiency at the expense of some coverage.

Was this the right decision? I would argue yes; I'm sure quite a few people would argue no. In a city where a growing share of residents live in a relatively small geographical area (and those residents are far less likely to own cars), the case for consolidated, high-frequency transit service grows stronger and the benefits of coverage decline. That doesn't mean we should move to the extreme of maximum-efficiency, minimum-coverage service, or that coverage has no value. It just means that in a world of scarce resources we have to make value judgments, and we've chosen to emphasize service where more people will benefit from it.

Transit Score's methodology isn't fully explicit, but two of its measures pretty clearly put a premium on coverage: first, the score itself is a sum over all nearby routes, and each route's contribution is weighted by vehicle type (rail, ferry, bus, etc.), frequency, and how far away the nearest stop is. Notice that ridership doesn't enter into this at all, and although they don't say exactly what the distance penalties or frequency bonuses are, it seems likely that having a bunch of low-frequency bus routes nearby gets you a better score than one or two high-frequency routes.
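
To make that concrete, here's a toy version of a coverage-weighted score in Python. The mode weights, frequency normalization, and distance decay are all my own guesses for illustration, not Transit Score's published formula, but they show how this kind of scoring can reward a pile of mediocre routes over a couple of great ones:

```python
# Illustrative sketch only: Transit Score doesn't publish its formula at this
# level of detail, so every number below is an assumption, not the real thing.

MODE_WEIGHTS = {"rail": 2.0, "ferry": 1.5, "bus": 1.0}  # hypothetical weights

def route_score(mode, trips_per_day, distance_to_stop_m):
    """Score one nearby route: mode weight x frequency, decayed by walk distance."""
    frequency_term = trips_per_day / 100                      # hypothetical normalization
    distance_decay = max(0.0, 1 - distance_to_stop_m / 800)   # fades out by ~800 m
    return MODE_WEIGHTS[mode] * frequency_term * distance_decay

def transit_score(nearby_routes):
    """Sum over all nearby routes; note that ridership never appears."""
    return sum(route_score(*r) for r in nearby_routes)

# Eight infrequent bus routes close by...
many_infrequent = [("bus", 30, 200) for _ in range(8)]
# ...versus two frequent routes a bit farther away.
two_frequent = [("bus", 150, 400), ("bus", 150, 400)]

print(transit_score(many_infrequent))  # 8 x (1.0 * 0.30 * 0.75) = 1.8
print(transit_score(two_frequent))     # 2 x (1.0 * 1.50 * 0.50) = 1.5
```

Under this toy scoring, the neighborhood blanketed in infrequent routes beats the one with two frequent ones, even though most riders would probably prefer the latter.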

Second, and more simply, Transit Score penalizes bus stops that are farther away from people's homes, and stop consolidation inevitably increases the average distance to a bus stop. If I remove half the stops along a route, some people will have to walk farther to reach their nearest stop, and no one's walk gets shorter.
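
A quick back-of-the-envelope sketch shows what that looks like; the 400-meter spacing and straight-line corridor are assumptions for illustration, not actual RapidRide geometry:

```python
# Toy example: stops every 400 m along a straight corridor, then every other
# stop removed. Riders walk to whichever stop is nearest.

def nearest_stop_distance(home_position_m, stops_m):
    """Distance from a rider's home to the closest stop on the route."""
    return min(abs(home_position_m - s) for s in stops_m)

before = list(range(0, 4001, 400))   # stops at 0, 400, ..., 4000 m
after = before[::2]                  # keep every other stop: 0, 800, ..., 4000 m

for home in (400, 500, 600):
    walk_before = nearest_stop_distance(home, before)
    walk_after = nearest_stop_distance(home, after)
    print(f"home at {home} m: walk {walk_before} m -> {walk_after} m")

# home at 400 m: walk 0 m -> 400 m    (lived right at a removed stop)
# home at 500 m: walk 100 m -> 300 m
# home at 600 m: walk 200 m -> 200 m  (equidistant between stops; no change)
```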

So yeah, maybe it kind of sucks for the rider who lived right at a removed stop. But while Transit Score treats this as a purely negative change, stop consolidation serves a purpose: it reduces the amount of time the bus spends sitting at stops loading and unloading passengers. It's especially beneficial to people riding long distances, and it can significantly speed up commute times, bringing them closer to parity with driving. This isn't to say that stop consolidation is always good, or that there aren't benefits to having frequent stops. Stop spacing is really just another flavor of the coverage question, so once again, it's about values.
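
To get a feel for the magnitude, here's a rough estimate; the per-stop time penalty and the stop counts are assumptions for illustration, not King County Metro figures:

```python
# Rough estimate of trip-time savings from stop consolidation. All numbers
# are assumed for illustration, not measured King County Metro data.

SECONDS_PER_STOP = 25   # assumed: decelerate + dwell + accelerate
STOPS_BEFORE = 40       # assumed stop count on a long local route
STOPS_AFTER = 20        # after consolidating to roughly every other stop

saved_seconds = (STOPS_BEFORE - STOPS_AFTER) * SECONDS_PER_STOP
print(f"~{saved_seconds / 60:.0f} minutes saved per end-to-end trip")  # ~8 minutes
```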

Transit Score's algorithm seems to value coverage above speed or efficiency, but that's just one way to balance these two competing concerns, and there are as many alternatives as there are people. If you're one of the relatively few who lost out in the recent restructurings, the drop in Seattle's score probably rings true, and I can't really fault you for that. But while some people will win and others will lose any time changes like these are made, I'm pretty confident that the winners significantly outnumber the losers here, and I think that's a change for the better.

UPDATE 2: To head off any accusations of racial insensitivity, I do want to acknowledge that many of the eliminated routes served lower-income areas with disproportionately large racial and ethnic minority populations. That's worth looking into, but I also want to point out that much of the route reduction happened because the opening of Central Link light rail made many of those routes redundant. It may be more accurate to view the pre-2014 neighborhood scores in Central and Southeast Seattle as artificially inflated by those duplicative services, and the new, lower scores as a truer reflection of the quality of service that has prevailed since Central Link opened.