The “Duh” factor, Forecasting, and Social Science, Part II

Andrew Gelman writes about Duncan Watts' new book, Everything is Obvious* *Once You Know the Answer, which I acquired right along with Future Babble, which I wrote about yesterday. I haven't gotten through Watts' book yet, so I will leave you with Gelman's post (always worth reading) and an excellent point he makes:

Duncan Watts gave his new book the above title, reflecting his irritation with those annoying people who, upon hearing of the latest social science research, reply with: Duh-I-knew-that.

It's the "Duh, I knew that" phenomenon that I want to take up in light of forecasting and the whipsaw of social science research.

The Brookings study I wrote about last night is a great example of that.

So Brookings publishes a study that takes a different approach within, but builds on, a long tradition of mode choice studies. Their findings and rankings are counterintuitive. They point out something that not "everybody knows."

And the response? Nate Silver uses his platform to traduce the Brookings study because it doesn't mesh with what "everybody knows"–that New York City is the king of US transit. He doesn't know about all the previous social science research that, yes, concludes that lots of people take transit in NYC, so he points out just how wrong wrong wrong wrongity wrong wrong the Brookings people are. And he's an expert on numbers, by God, so he KNOWS their numbers are just plain wrong.

So how dare they publish results that don’t mesh with what “everybody knows”?

But then, if researchers do find results that mesh with what everybody knows, people like my grandfather can say: "Cha! They spent how much to find out what, exactly? Something any damn fool already knows. I coulda told 'em that."

A whipsaw. If your results are counterintuitive, you’re an idiot. If your results back up intuition, you’ve wasted our time and stated the obvious.

I think this problem comes from the peanut gallery of the media and the basic inability of most people to understand research as a process rather than as a discrete outcome. In Silver's world, it doesn't occur to him that he may not have the right questions for analyzing transit because he hasn't read the forty years of mode choice research that asks and answers the questions he thinks he's taking the Brookings researchers to school on.

He's not in any real position to critique the study, but he gets away with it because, in order to know how wrong he is, his audience, too, would have had to read the past forty years of transit research and the Brookings report–which means there are about ten people on planet Earth who have done so. (And the nine other than me have better things to do than respond to Silver; they're busy writing their manuscripts.)

And Silver is writing to an audience of people who don’t get that all research is partial. I think people read studies, and they want black and white, soundbite results that just tell them the answer–not the messy, back-and-forth process of real, time-consuming knowledge creation in social science. So transit makes people thinner. Transit cleans up the air. Like medical research: Coffee is good for you. Coffee is bad for you. Red wine is good. Red wine is bad. And so on and so on.

So what does that mean for forecasting? Forecasting particularly suffers in the whipsaw of "tell me what I already think is right" versus "don't tell me what I already know." Forecasts require priors. So that leaves us little patience for the unexpected forecast–the surprise forecast–even if those making it have good reasons to believe that a trend will kink. The exception is business forecasts.

In the end, I think it's likely that the best way to increase the accuracy of forecasts is to have many people prepare forecasts for the same phenomenon independently–rather than the current practice of paying one group to derive a range. But that's expensive, and it flies in the face of the idea that one analysis should give us the answer.
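A quick sketch of why the independent-forecasts idea has statistical teeth (my own illustration, not from the post, with made-up numbers): if each forecaster's error is roughly independent, averaging their forecasts shrinks the expected error, while a single analysis keeps all of its noise.

```python
import random

random.seed(0)

TRUE_VALUE = 100.0  # the unknown quantity being forecast (hypothetical)


def one_forecast():
    # Each independent forecaster: unbiased but noisy estimate of the truth
    return TRUE_VALUE + random.gauss(0, 20)


# Error of a single forecaster, averaged over many trials
single_errors = [abs(one_forecast() - TRUE_VALUE) for _ in range(1000)]

# Error of the mean of 10 independent forecasters, averaged over many trials
def pooled_error(n=10):
    pool = [one_forecast() for _ in range(n)]
    return abs(sum(pool) / n - TRUE_VALUE)

pooled_errors = [pooled_error() for _ in range(1000)]

avg_single = sum(single_errors) / len(single_errors)
avg_pooled = sum(pooled_errors) / len(pooled_errors)
print(f"avg error, one forecaster:       {avg_single:.1f}")
print(f"avg error, mean of 10 forecasts: {avg_pooled:.1f}")
```

The pooled error comes out several times smaller, which is just the standard-error-of-the-mean effect. The catch, of course, is the assumption of independence: forecasters who all share the same priors (or the same peanut gallery) have correlated errors, and averaging buys much less.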

Dan Gardner makes some good points about this in Future Babble. Are forecasts lousy because forecasters are lousy, or do we put them in the position of being lousy because of our expectations and our impatience with the complexity of knowledge creation in dynamic, socio-cultural contexts?

The answer: both.