“Students and the enormous revenue they bring in to our institution are a more valued commodity to us than faculty,” Dean James Hewitt said. “Although Rothberg is a distinguished, tenured professor with countless academic credentials and knowledge of 21 modern and ancient languages, there is absolutely no excuse for his boring Chad with his lectures. Chad must be entertained at all costs.”
I have been reading the New Yorker for nearly 25 years now, subscribing even when I really could afford neither the time nor the money to do so. But it is like a friend now, and I really can’t do without it. The Economist routinely gets chopped when household austerity comes into play; the New Yorker…I just can’t. John McPhee’s pieces have rolled in over the years, always welcome, even though he routinely picks topics I am not interested in at first glance: geology, fish, basketball stars, forests. He has over the past year been releasing long pieces on writing that you must immediately read if you have access to their archives.
and this week’s:
Draft Number 4–on blocks and the difficulty of the first draft.
My favorite part of this last selection thus far is McPhee’s description of how different his two daughters are as writers. It serves as a nice reminder for PhD mentors that every student is different because every writer is different:
Jenny grew up to write novels, and at this point has published three. She keeps everything close-hauled, says nothing and reveals nothing, as she goes along. I once asked her if she was thinking about starting another book and she said “I finished it last week.” Her sister, Martha, two years younger, has written four novels. Martha calls me up nine times a day to tell me that writing is impossible, that she’s not cut out to do it, that she’ll never finish what she is working on, and so forth and so on, et cetera et cetera, and I, who am probably disintegrating a third of the way through an impossible first draft, am supposed to turn into the Rock of Gibraltar. The talking rock: “Just stay at it; perseverance will change things.” “You’re so unhappy you sound authentic to me.” “You can’t make a fix unless you know what is broken.”
This is about as good a description of mentoring as I have ever seen: taking on need as it arises, no matter how it manifests, and no matter where your own need is at the moment, even as your own fears about the process are thundering in your head.
Another bit of brilliance:
One falls into projects like slipping into caves, and then wonders how to get out. To feel such doubt is a part of the picture–important and inescapable.
And yet another:
Jenny said “I can’t seem to finish anything.”
I said, “Neither can I.”
Word to your mother.
From Bloomberg, the restoration of books looted from private Jewish libraries to the families:
The Central and Regional Library Berlin estimates it has as many as 250,000 books that are potentially looted. More than 40,000 were seized from the homes of Jews who were deported or murdered. So far, the library has returned 345 books and bookplates to 29 heirs. Peter Proelss, a historian investigating the collection, says he faces “a mountain of books.”
So I’m more than a little bummed that USC turned down my request for a *really* small amount of money to support my book writing. It’s really hard when not even your employer believes in your project enough to give you a month to work on it. But…that’s kind of how work is. When you are a graduate student, it’s annoying that you have these committee members who are always saying “yeah, this project can work, but you are not doing it right yet” or “it’s getting there, but it needs more.” Don’t get me wrong–I get it–it’s annoying.
But when you are done, you are likely to face a world where nobody says “atta girl” but you and your personal support system. Nobody cares about this book I am writing but me–worse than that, one of my most supportive mentors actively seems to dislike the book. He had a book in his mind that he always associated with me, and he just plain doesn’t like the direction I am going. But it’s not his book, and it’s not his time. It’s mine. You just work on what you think you should work on.
So I have to say that it did my teeny tiny heart some good when Mary Beard, probably the most well-known classicist in the world at the moment (or at least neck-and-neck with Barry Strauss), shared her recent failure to receive funding and referred us to a tale from classicist Edith Hall about a way more important project than my little book:
I failed to get funding four years ago for a more European-facing version of this project from the European Research Council, whose referees (distinguished classical scholars) could not understand its ‘relevance’ to anything in which they were interested. I failed first time round with the AHRC, the British funding council, because one of the referees alleged that my style of communication had ‘a streak of vulgarity’ (which might be thought to be useful in a project about social class); s/he gave the proposal a 4 when the two other reviewers both gave it the top mark of 6. I went through the complaints procedure, which took four upsetting months, even ending up with a brush-off from the Parliamentary Ombudsman, who said that the AHRC had ‘followed their published procedures’.
Keep going. Yeah, as the de-motivational posters reinforce, you may be persisting in folly. But you won’t know until you are finished so you might as well finish.
Meanwhile, you may have my permission to melt down a little at personal setbacks:
Ok, so my rant at AER prompted gentle readers to note some new items and some things we should consider. Cosma Shalizi, who writes the delightful blog Three-Toed Sloth, notes that my ire towards AER is a mite misplaced, as the Reinhart-Rogoff manuscript appeared in AER’s Papers and Proceedings issue from the economists’ national conference, and those papers are not peer-reviewed. Cosma pointed me to this excellent entry by Victoria Stodden, whose blog I clearly need to follow, on why Reinhart-Rogoff slipped through and how the state of the practice needs to change.
All of her suggestions are spot on, but I particularly like her idea around a site where reviewers can run code easily to replicate results:
This is typically nontrivial, since having the code and data doesn’t guarantee replication is either possible or achievable without significant effort. I have been working on a not-for-profit project called RunMyCode.org which could help reviewers by providing a certification that the code and data do regenerate the tables and figures in the paper. The site provides a web interface that permits users to regenerate the published results, and download the code and data.
But still, HANDS ON HIPS, AER. YOU ARE CONSULTANTS TO POWER, A POSITION YOU HAVE CULTIVATED CAREFULLY. WHY does AER still get to hold the position it does as one of the “A” journals when it is publishing issues of papers that have not been reviewed…at all? Shouldn’t people be expected to put an asterisk by the title of such an entry? I’m sorry–but DAYUM. If the sociologists did that, it would be yet further proof of their “junk research” in “junk journals.” And…the paper has been cited by economists in economics journals–nearly 500 times. Now, maybe all those cites are critical, but we are still dealing with the fact that a paper which was NOT peer-reviewed IS CITED 500 TIMES. Help me understand how this demonstrates a discipline in which the utmost care is taken before policy prescriptions are advocated?
Remember, we are talking about a paper that has such big assumptions–unexplained weighting and dropped cases–that if one of my students presented such a paper in graduate seminar they would get a public spanking.
Still more, a paper that was not peer-reviewed gets treated by the WashPo like it represents a consensus position among economists. Gentle reader Jesse Richardson sent this op-ed to me, where the Op-Ed board of the WashPo demonstrates their innumeracy and makes excuses for lazy reporting. No, I don’t think Reinhart and Rogoff are responsible for global austerity, which is not global, btw. They saw an important research question in today’s most significant macro-economic debate, and they attempted to address it. That is, in general, the point of policy research, and I’m glad there are people attempting empirical verification of various approaches.
Moreover, academics don’t have that kind of influence. The reason why this paper got lots of play was the Harvard brand plus a finding that is convenient for a certain ideological position.
What I do think these researchers are responsible for is their analysis. If you are going to run with the big boys and claim to have a policy-relevant ratio or threshold for a big-deal policy question…you quadruple-check your G-D Excel and you go through a big demonstration of your robustness checks. It’s entirely possible that Reinhart-Rogoff’s original analytical choices are completely defensible. But dang it, in rigorous work, you use robustness checks to provide a range of where the threshold should be…and to preclude looking as though you have a scientifically proven law on your hands when what you really have is a model where your key choices inherently influence the outcomes.
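The robustness-check idea is simple to sketch: recompute the headline statistic under each defensible analytic choice and report the resulting range instead of a single number. Everything below is a hypothetical illustration with invented figures, not Reinhart-Rogoff’s actual data or code:

```python
# Hypothetical sketch of a robustness check: recompute a headline
# average under alternative analytic choices and report the range,
# rather than presenting one specification as settled fact.
# All numbers are invented for illustration.

# growth rates (percent) for country-years above some debt threshold,
# keyed by country
data = {
    "A": [2.1, 2.5, 1.9, 2.3],
    "B": [-1.0],
    "C": [0.8, 1.2],
}

def mean(xs):
    return sum(xs) / len(xs)

def equal_country(d):
    # average within each country first, then across countries
    return mean([mean(v) for v in d.values()])

def year_weighted(d):
    # pool all country-years and average them equally
    pooled = [g for v in d.values() for g in v]
    return mean(pooled)

def drop_singletons(d):
    # alternative sample rule: exclude countries with only one year
    return {k: v for k, v in d.items() if len(v) > 1}

estimates = {
    "equal-country": equal_country(data),
    "year-weighted": year_weighted(data),
    "equal-country, no singletons": equal_country(drop_singletons(data)),
}
lo, hi = min(estimates.values()), max(estimates.values())
print(f"estimate range: {lo:.2f}% to {hi:.2f}%")
```

The point of the exercise is the spread: if the answer swings meaningfully across defensible specifications, you report the range and justify your preferred choice rather than announcing a single threshold as a law.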
I still stand by what I said yesterday. The researchers were lazy, the economic community was lazy, and the WashPo was lazy in citing this manuscript as though the percentages and ratios cited therein represented some meta-analytical finding resulting from careful aggregation of decades of work. It’s intellectually sloppy, and it’s counterproductive to real policy inquiry and democratic dialogue.
RRRRrrrrrrrrRRRRRR. Like empirical research into macro and macro policy ISN’T HARD ENOUGH, PEOPLE.
Attention conservation notice: 2,000 words on why “impact” is over-rated. Of course, bean-counting deans and administrators think otherwise. While the media cavorts over “an Excel error”, I want to talk about unconventional weighting and cherry picking the data, cherry picking papers to treat as definitive, and why working at a policy school causes me despair. To quote Hemingway: “There are some things which cannot be learned quickly and time, which is all we have, must be paid heavily for their acquiring.”
Paul Krugman’s first of many posts on the topic gives a nice explanation of the Rogoff-Reinhart dealio–there is much that drives me crazy about Paul Krugman, but you can’t complain that he doesn’t know how to explain economics to a broad audience–because he most certainly does. The deal goes something like this: There are two recent macro papers that have purported to provide the empirical basis for austerity to produce growth. One is by Alberto Alesina and Silvia Ardagna:
Large Changes in Fiscal Policy: Taxes versus Spending, Alberto Alesina, Silvia Ardagna, in Tax Policy and the Economy, Volume 24 (2010), The University of Chicago Press
This paper is in a much less influential journal than the one that’s causing all the kerfuffle, which is this one:
Reinhart, Carmen M., and Kenneth S. Rogoff. (2010) “Growth in a Time of Debt.” American Economic Review 100.2: 573–78
American Economic Review is the gold standard of econ journals. Mike Konczal at the Next New Deal blog summarizes the paper:
In 2010, economists Carmen Reinhart and Kenneth Rogoff released a paper, “Growth in a Time of Debt.” Their “main result is that…median growth rates for countries with public debt over 90 percent of GDP are roughly one percent lower than otherwise; average (mean) growth rates are several percent lower.” Countries with debt-to-GDP ratios above 90 percent have a slightly negative average growth rate, in fact.
This has been one of the most cited stats in the public debate during the Great Recession. Paul Ryan’s Path to Prosperity budget states their study “found conclusive empirical evidence that [debt] exceeding 90 percent of the economy has a significant negative effect on economic growth.” The Washington Post editorial board takes it as an economic consensus view, stating that “debt-to-GDP could keep rising — and stick dangerously near the 90 percent mark that economists regard as a threat to sustainable economic growth.”
Oh, yeah. That’s what economists say. All of the smart ones, right. Reinhart and Rogoff’s paper *from 2010* is so scientifical in the minds of WashPo editors that it’s now economic consensus. Mmmmmkay.
But that’s not how social science works. It takes time, and a lot of subsequent study, to find a result we should treat as definitive. But that isn’t what politicians or the public want to hear. And…it’s so very, very tempting to give the people what they want. It’s one way you get to the Kennedy School.
Well, what’s wrong with that? A great deal, it turns out, in terms of the original paper’s content, methods, and conclusions. The story becomes ugly pretty fast–though not surprising to those of us who watch influence peddling/pandering happen all day every day in the policy analysis machine of academic life, in which Harvard is to the academy what Google is to search engines: in the minds of most people, there is only one, and why would you bother with another when the one you have gives you what you want with so little effort?
After quite some nagging, apparently, Thomas Herndon (a PhD student in econ), Michael Ash, and Robert Pollin, all researchers at the University of Massachusetts Amherst, finally got Rogoff and Reinhart to share their data after trying unsuccessfully to replicate the results with data they compiled themselves. When the UMass researchers tried to replicate the findings with Rogoff and Reinhart’s numbers, they discovered a spreadsheet coding error that, when corrected, shows the original conclusion–that growth drops off sharply once debt passes 90 percent of GDP–was simply not supported by the extant data or the subsequent analysis.
Which makes me wonder about the AER as the gold standard. First, I thought you always had to share your data to get into AER, and I thought reviewers were supplied WITH YOUR DATA at the time they review. I’ve had to do that for some of the journals I’ve published in. That’s what a gold standard looks like to me. Am I missing a part of the story here?
The media, of course, is eating this up, largely because there is a delicious David versus Goliath aspect to the review and the chance that Hahhhhhvard folks might be wrong and a wee graduate student right. I strongly suspect that if this were an assistant professor at Princeton the finding would have been largely ignored in media because it would seem like academic in-fighting instead of the sexy, aw-shucks, disempowered-grad-student-makes-good story it is. I’m waiting for the next iteration of the story–or the Hollywood version–about how some meanypants proffie tried to steal credit for this brilliant result, but young economics stud pulled out an AK-47 during a research meeting while his faithful, brilliant-but-not-as-brilliant-as-he-is girl leans on his masculine shoulder.
I’m sounding a little bitter, which I am actually not, about the review and the success it bestowed upon a graduate student. The attention is a good thing, and it’s wonderful to see a young person do a replication study and get so much impact out of it–usually, replication studies are treated with less respect than they deserve. Again, this is a problem with the academy. Why do careful replication studies if the point is to be out there chasing your own Freakonomics/WOWEEZOWEE LOOKYHERE moment? But I am annoyed at the way the whole thing has been discussed in the media, as though this review strikes down the whole hypothesis that austerity might foster growth when government indebtedness is high.
It doesn’t. There is another paper out there, for one thing, and for another: did I not just say that social science doesn’t work like that? Yes, there are seminal papers, but it takes a long time to get to the point where we can truly call something ‘seminal.’ As usual, Richard Green and Mark Thoma take on the real analytical problems of sussing out this question. Richard Green is here in Forbes, discussing the particulars of this thorny empirical question. Mark Thoma has well-reasoned insights about the larger problems in macro over at Economist’s View.
Krugman is careful to point out that you can’t conflate the problems with the AER paper with the subsequent, high-profile book: This Time is Different: Eight Centuries of Financial Folly.
But I kind of can–and here’s why. For all the media froth about the coding error, which is pretty bad when we are talking AER level, there are two other issues raised in the Herndon-Ash-Pollin study that are straight up signs of analysis-fiddling to get the results you want. What are they? Michael Konczal explains the issues way better than I can:
Selective Exclusions. Reinhart-Rogoff use 1946-2009 as their period, with the main difference among countries being their starting year. In their data set, there are 110 years of data available for countries that have a debt/GDP over 90 percent, but they only use 96 of those years. The paper didn’t disclose which years they excluded or why. [Emphasis mine: WTH AER????]
Herndon-Ash-Pollin find that they exclude Australia (1946-1950), New Zealand (1946-1949), and Canada (1946-1950). This has consequences, as these countries have high-debt and solid growth. Canada had debt-to-GDP over 90 percent during this period and 3 percent growth. New Zealand had a debt/GDP over 90 percent from 1946-1951. If you use the average growth rate across all those years it is 2.58 percent. If you only use the last year, as Reinhart-Rogoff does, it has a growth rate of -7.6 percent. That’s a big difference, especially considering how they weigh the countries.
Unconventional Weighting. Reinhart-Rogoff divides country years into debt-to-GDP buckets. They then take the average real growth for each country within the buckets. So the growth rate of the 19 years that England is above 90 percent debt-to-GDP are averaged into one number. These country numbers are then averaged, equally by country, to calculate the average real GDP growth rate.
In case that didn’t make sense let’s look at an example. England has 19 years (1946-1964) above 90 percent debt-to-GDP with an average 2.4 percent growth rate. New Zealand has one year in their sample above 90 percent debt-to-GDP with a growth rate of -7.6. These two numbers, 2.4 and -7.6 percent, are given equal weight in the final calculation, as they average the countries equally. Even though there are 19 times as many data points for England.
Now maybe you don’t want to give equal weighting to years (technical aside: Herndon-Ash-Pollin bring up serial correlation as a possibility). Perhaps you want to take episodes. But this weighting significantly reduces the average; if you weight by the number of years you find a higher growth rate above 90 percent. Reinhart-Rogoff don’t discuss this methodology, either the fact that they are weighting this way or the justification for it, in their paper. [Again, emphasis mine, and again WTH AER????]
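Konczal’s England/New Zealand example is easy to check with a few lines of arithmetic. This sketch uses only the two figures he quotes, not the full Reinhart-Rogoff dataset:

```python
# Growth rates for country-years above 90 percent debt-to-GDP,
# using only the two figures Konczal quotes (not the full dataset).
england = [2.4] * 19   # 19 years (1946-1964), averaging 2.4 percent growth
new_zealand = [-7.6]   # 1 year in the sample, -7.6 percent growth

# Reinhart-Rogoff's approach: average within each country first,
# then average the country means with equal weight.
country_means = [sum(england) / len(england),
                 sum(new_zealand) / len(new_zealand)]
equal_country = sum(country_means) / len(country_means)

# Alternative: weight every country-year equally.
all_years = england + new_zealand
year_weighted = sum(all_years) / len(all_years)

print(f"equal-country weighting: {equal_country:.1f}%")  # -2.6%
print(f"year weighting:          {year_weighted:.1f}%")  #  1.9%
```

One undisclosed analytic choice flips the sign of the average for this pair of countries, which is exactly why the weighting scheme needed to be disclosed and defended in the paper.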
Keep in mind that every_single_day at USC I have AER shoved in my face as the holiest of all that is holy when it comes to scholarly rigor, and HOLY SCREAMING MEEMIES, bunnypants, there are three big, honking things here that should have come out in peer review. First, how do you get away with not disclosing which countries you are leaving out? Second, how do you get away with not explaining your weighting? And third, why didn’t anybody demand to see the consequences of these major analytical choices in robustness checks? These are not Excel errors. These are not esoteric things that only economists can understand. These are the basics of modeling research.