Nate Silver thinks he knows transit, and doesn’t

Over at Five Thirty Eight, Nate Silver takes a swing at the Brookings study that I highlighted the other day. Silver is just so far off the mark that I wrote a rather lengthy response in his comments (late at night, when I was tired, so I sound like an idiot), and I’ll respond here as well:

Given how little Silver knows about transit and transportation, the declaration that the Brookings study “asks the wrong question” is way off the mark. There are a lot of mode choice studies in the world that have already asked the questions that Silver seems to think are the right and important ones. Just because he hasn’t read these studies doesn’t mean these questions haven’t already been asked and answered–a lot. Nor does it mean that Brookings was wrong for not replicating the hundreds of mode choice studies already out there. So the question “do people have a choice?” is VERY well mined in the research. The subsidiary question “what are the characteristics of those choices?” is also thoroughly mined.

And all of these studies find pretty much what Silver is banging on about–that NYC and the other big regional systems come out well in terms of individual mode choice. It’s just that Brookings is also right–the world doesn’t need another study that finds that lots of people in New York take public transit. We already know that.

Instead, what Brookings is trying to get at is the geographic supply of jobs and transit. They are looking for supply CONSTRAINTS, NOT demand. And that’s worth looking at. The question they ask is: assuming you have residential access to transit (a subset of the population), and you’d like to take it, could you feasibly get to a job? That’s a good question because it hasn’t been answered as much, and the answers give us some clues as to why transit is often less popular as a commute mode than it might be.

Nowhere in their report does it say that Modesto transit is “better” than NYC. Instead, the interpretation is, simply, that a higher percentage of people who have transit can get to a higher percentage of area jobs. With a measure of jobs and transit coverage like this, geographically small regions are BOUND to rank better than larger ones, which is unfortunate, but hardly impossible to interpret within the context of existing transit research.
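To make the measure concrete, here is a minimal sketch of the kind of supply-side access statistic Brookings reports: the share of residents who live near transit and, for those residents, the average share of regional jobs reachable within a travel-time budget. This is my own illustration, not Brookings’ code; the block data structure, the precomputed travel-time table, and the 90-minute budget are all assumptions for illustration.

```python
# Toy sketch (my own, not Brookings' method or data) of a supply-side access
# metric: of the residents who live near transit, what share of the region's
# jobs can the average such resident reach within a travel-time budget?

def transit_job_access(blocks, travel_time, budget_minutes=90):
    """
    blocks: list of dicts with keys 'id', 'pop', 'jobs', 'near_transit' (bool)
    travel_time: dict mapping (origin_id, dest_id) -> transit minutes,
                 assumed precomputed from schedule data
    Returns (share of residents near transit,
             population-weighted average share of jobs reachable).
    """
    total_jobs = sum(b['jobs'] for b in blocks)
    total_pop = sum(b['pop'] for b in blocks)
    covered_pop = sum(b['pop'] for b in blocks if b['near_transit'])

    reachable_weighted = 0.0
    for o in blocks:
        if not o['near_transit'] or o['pop'] == 0:
            continue
        reachable_jobs = sum(
            d['jobs'] for d in blocks
            if travel_time.get((o['id'], d['id']), float('inf')) <= budget_minutes
        )
        reachable_weighted += o['pop'] * (reachable_jobs / total_jobs)

    coverage = covered_pop / total_pop
    avg_job_share = reachable_weighted / covered_pop if covered_pop else 0.0
    return coverage, avg_job_share
```

A geographically tiny metro puts most of its jobs inside the travel-time budget almost by default, which is exactly why places like Modesto can rank above New York on a measure like this without the measure being wrong.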

God, people. Does EVERY study about transit have to WET ITS PANTS about NYC/SF/Boston to tell us a piece of the puzzle about transit service quality? How’s about if everybody who studies transit starts off each manuscript with:

New York City is the bestest of the bestest. Except for Tokyo, which kicks its ass. But New York is really best. Best best best. Nothing better in the whole land. Best, I tell you, BEST!

Now may I ask a question about public transit where the answer isn’t New York? Because there are other places in the world that aren’t New York, and transit is meant to work there, too.

Are we better off with wrong forecasts, or no forecasts at all?

So I have been reading a book called Future Babble: Why Expert Predictions Fail and Why We Continue to Believe Them Anyway by Dan Gardner.

With such a title and cover, you can imagine what he says about experts and forecasting. The book was reviewed brilliantly by Kathryn Schulz in the New York Times, so I will send you over there to read it, save for this passage, which nicely captures the book’s campy and self-indulgent contradictions, where Gardner equates forecasting with paid psychic services:

To ignore this difference is to stray perilously close to anti-intellectualism. And Gardner, despite his better impulses, drifts that direction in other ways as well — for instance, by pitting “all the smart people” against “ordinary Americans.” Wait: Ordinary Americans aren’t smart? Smart people aren’t real Americans? Such distinctions aren’t just invidious. They also dodge the real issue, which is that expertise and intelligence are not intellectually or morally equivalent to charlatanism. Indeed, they often serve us exceptionally well.

Gardner serves up a nice helping of nose-rubbing for James Howard Kunstler, one of the most shamelessly self-promotional profiteers in the urbanist universe, so, as Schulz points out, that bit is quite enjoyable for those of us who are petty.

I’d also second Kathryn Schulz’s recommendation to read Philip Tetlock’s Expert Political Judgment: How Good Is It? How Can We Know?, which is an excellent book. Schulz’s own book, Being Wrong: Adventures in the Margin of Error, is also a fine contribution, and much more thought-provoking than Gardner’s rant.

But the greater point about how often forecasts are wrong is still worth thinking about. Are we better off trying to know the future? And if we aren’t, well, how should we make decisions? Who has a better idea?

It’s not clear there are better ideas on how to think ahead. Part of me thinks that, as often as forecasts are wrong, and as much as they can serve the interests of power, all the anti-forecasting rhetoric also reinforces the “who screams louder, who has a bigger stick” politics of urban project development.

One indicator that there aren’t really any better ideas out there came to me at the HSR Symposium I was at a few weeks ago. Peter Calthorpe, who always markets himself as having the best, new, cutting-edge idea, was describing his fabulous, much-improved-over-those-dumb-engineers’ travel demand forecast with a new whizbang *urban travel simulator tool*. Yes, this will be yet another deterministic model with inappropriately optimistic assumptions about how much Calthorpe’s pet design ideas reduce auto travel, but it’s not a forecast. It’s a *simulation.* Based on *empirical elasticities*. Soo much better than a forecast, you betcha. It will simulate outcomes, too. Like how many fat people we’ll take off the streets once we run the world the right way and design controls mankind the way it should.

So Calthorpe creates a forecasting tool and calls it a simulation, and suddenly he’s got the magic predictor bullet. But the temptation is so apparent: I’ll prove with my truthy numbers that I have the magic urban recipe. It’s worked for engineers for years. Why not the New Urbanists? Forecasts are a means of establishing “need”, right? (See: The Rhetoric of Economics by Deirdre McCloskey, a 100 percent required read.)
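To see why the label doesn’t matter, here’s a toy sketch, with made-up numbers that are mine and certainly not Calthorpe’s: an elasticity-based travel “simulation” is just arithmetic applied to whatever elasticities you assumed, so the answer is baked in before you run anything.

```python
# Toy illustration: an elasticity-based "simulation" of travel demand is a
# deterministic forecast. The result follows from the assumed elasticities.
# All numbers here are made up for illustration.

def predicted_vmt(base_vmt, changes, elasticities):
    """Apply assumed elasticities to assumed input changes.

    base_vmt: baseline vehicle miles traveled
    changes: fractional changes in inputs, e.g. {'density': 0.5} for +50%
    elasticities: assumed elasticity of VMT with respect to each input
    """
    vmt = base_vmt
    for var, pct_change in changes.items():
        vmt *= 1 + elasticities[var] * pct_change
    return vmt

# Assume a -0.10 elasticity of VMT with respect to density; a 50% density
# increase then "simulates" a 5% VMT reduction -- by construction.
print(predicted_vmt(100.0, {'density': 0.5}, {'density': -0.10}))  # 95.0
```

Swap in rosier elasticities and you get rosier outcomes; the tool is only as good as the priors stuffed into it.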

The question for forecasts/simulations is always: what are your priors? Are any priors any good at all? Can the generic priors that Calthorpe and his group stuff into their new simulator tool really help us?

People I genuinely respect tell me, for example, that the CalHSR forecasts are way optimistic. While I’m inclined to believe them, it’s a criticism I haven’t echoed here simply because I really have no idea: I don’t think there are good priors. HSR systems in other places don’t strike me as good priors. Neither does intercity demand on other modes.

The world is meant to end tomorrow, May 21, according to one prediction. So we’ll see how that goes.