The Sokal hoax was trash and this latest hoax is even worse

And I really, really wish the media would stop indulging them.

I remember when Donald Shoup gleefully handed me Sokal and Bricmont’s Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science (1997), the authors’ self-congratulatory description of a hoax paper they got published in Social Text. Shoup was shocked when I handed it back to him with margin notes (in post-its) all over it, disputing Sokal and Bricmont’s arguments. Shoup was aghast at my presumption–he often is. But I was right.

Academic hoaxers want to show us all their intellectual superiority, and the superiority of their fields, particularly the sciences, over social theory and social science, by generalizing about entire fields from an N of 1–their hoax. Now THERE’S rigorous thought for ya.

Sure, an academic hoax can be a valid case study, but authors–and the media–don’t treat these hoaxes like case studies. Sokal’s was neither carefully designed nor carefully documented. It was just new at the time he did it, and people enjoyed tittering at scholars’ expense. As research and experimentation, however, it was bad.

Yeah, a single experiment in physics can be definitive, but the key is in the research design; it has to be replicable. And instances may be wonderful learning opportunities, but generalizing from them is the first thing you’re told NOT to do in social science school.

But, hey, social theory sucks, because Sokal said so, and he wanted to punch down, and he took advantage of a gracious editor’s desire to be inclusive of a scientist in science studies. Thanks to the media’s delight over the hoax, Sokal made a name for himself in pop culture that he was never going to get in physics. Because if there is one thing that Americans love and have an insatiable desire for stories about, it’s punching the humanities and liberal arts.

And it’s even better–so much better–if you can punch at humanities crafted by lady professors or professors of color. Because if there is one thing that we really, really love more than crapping on the humanities, it’s crapping on the idea that women and people of color might know things, or that *people like them* are critically examining systems of power. It’s not enough that women’s studies and black studies often consist entirely of part-time faculty who have diddly squat in terms of either public investment or big, fat donors; we also need to score points off them in the media to advance our careers.

And hence this latest academic hoax: Boghossian, Peter, and James Lindsay, “The conceptual penis as a social construct: a Sokal-style hoax on gender studies,” Skeptic, retrieved 20 May 2017.

No link, because screw getting them clicks. Getting a terrible paper published in a pay-to-play open access journal, as these authors did, tells us precisely nothing other than that the people behind the journal want your $$$. Mmmmmkay.

Mediawise and careerwise, however, this is genius-level trolling, really. If Boghossian doesn’t get tenure, he can scream that it was because he wasn’t “politically correct” about gender studies, and then he can, like Naomi Schaefer Riley, become a conservative media darling based on this stuff. If this hoax is any indicator, Boghossian is a great media manipulator and a sloppy scholar, which is one very likely reason he wouldn’t get tenure. But if that happens, he’s got this nice fallback claim that he is being discriminated against.

Timothy Burke is one of my favorite academic bloggers. He teaches at Swarthmore, and his takedown of Boghossian and Lindsay is worth quoting at some length:

Dear friends, have you ever felt after reading an academic article that annoyed you, hearing a scholarly talk that seemed like nonsense to you, enduring a grant proposal that seemed like a waste of money to you, that you’d like to expose that entire field or discipline as a load of worthless gibberish and see it kicked out of the academy?

You probably didn’t do anything about it, because you’re not an asshole. You realized that a single data point doesn’t mean anything, and besides, you realized that your own tastes and preferences aren’t really defensible as a rigorous basis for constructing hierarchies of value within academia. You probably realized that you don’t really know that much about the field that you disdain, that you couldn’t seriously defend your irritation as an actual proposition in a room full of your colleagues. You realized that if lots of people do that kind of work, there must be something important about it.

Or maybe you are an asshole, and you decided to do something about your feelings. Maybe you even convinced yourself that you’re some kind of heroic crusader trying to save academia from an insidious menace to its professionalism. So what do you have to do next?

Here’s what you don’t do: generate a “hoax” that you think shows that the field or discipline that you loathe is without value and then publish it in a near-vanity open-access press that isn’t even connected to the discipline or field you disdain. This in fact proves nothing except that you are in fact an asshole. It actually proves more: that you’re a lazy asshole.

Now, I’m not likely to call an assistant professor like Boghossian an asshole, but I am willing to call him lazy. If you actually want to test the hypothesis that any garbage can get published just because it’s got gender in the title, then there are ways to try to get at that, but those ways are *hard*.

How to do such a study in a way that isn’t laughable:

1. Develop a rigorous analytical framework for judging what counts as “easy” or “hard” reviewing. As it is, we have to take these authors’ word for the fact that their process through “peer review” was easy. Saying that reviewers at a pay-to-publish journal weren’t hard on you is a) hardly surprising and b) unverifiable. Hard as compared to *what*?

No.

In order to make claims about the reviewing process, they’d need to target multiple journals (more on this below) at a variety of impact/submission rejection rate levels, and they would need to code the reviews in a consistent, valid way–in a way that shows they have been internally consistent in evaluating comments across journals. Tim Burke is right: you need to go after the premier journals in a field if you want to make claims about the field. I have reviewed for Signs, for example, one of the top feminist journals. Every five years or so, they get a paper here and there about women in cities; I don’t recall how many I’ve done–three, I think–but I do know the journal has rejected every paper that they have had me review. Doesn’t sound like a no-brainer universe to me.
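If you want a sense of what coding reviews “in a consistent, valid way” even means in practice, here is a minimal, entirely hypothetical sketch (the harshness scale and the data are made up for illustration): two coders independently rate how hard each review was, and you check their agreement before trusting any number they produce.

```python
# Minimal, hypothetical sketch: check that two coders rate review "harshness"
# consistently before trusting any claim that a review process was "easy."
from sklearn.metrics import cohen_kappa_score

# Made-up harshness codes for the same ten reviewer/editor comments,
# assigned independently by two coders:
# 0 = waved through, 1 = minor revisions, 2 = major revisions, 3 = rejected
coder_a = [0, 2, 3, 1, 2, 3, 0, 1, 2, 3]
coder_b = [0, 2, 3, 1, 1, 3, 0, 2, 2, 3]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # well below ~0.7 means the coding scheme isn't reliable yet
```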

2. Use controls. Send it to a similarly sized subfield. No, you can’t send it to economics. Top journals in fields with thousands of practitioners can cream off the best–that is hardly surprising, nor is it an indicator of importance. If you are testing feminist philosophy, then test garbage papers there against garbage papers in something like “history of medicine.” I’m not sure that’s the right comparison–but again, it’s not my job to work through the controls since I’m not an asshole, and I don’t have an ax to grind. The control field should be a subfield of similar size, only without the supposedly “extreme ideological leanings” of gender studies. Then code and track reviews and outcomes across fields. Systematically, according to the framework.

How else do you isolate the “ideology”? It’s possible that a bullcrap paper in a supposedly nonideological field could slip through the peer review process. We’d need to show that gender studies differs in a measurable way.

One control should be something likely to show ideology, too, like a libertarian journal or some such. That way you can tell whether gender studies is any more guilty than other “xtreme” ideologies, or than plain sloppiness, of passing papers along simply because they use the right buzzwords and take the right tone.
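And once you had outcomes from the target field and both controls, you would compare them statistically rather than crowing about a single acceptance. A minimal sketch, with counts I made up entirely:

```python
# Minimal sketch with invented counts: compare accept/reject outcomes for
# comparably bad hoax submissions across the target field and the controls.
import numpy as np
from scipy.stats import chi2_contingency

# Rows are hypothetical fields; columns are [accepted, rejected] counts.
outcomes = np.array([
    [3, 17],   # gender studies (hypothetical counts)
    [2, 18],   # history-of-medicine control (hypothetical counts)
    [4, 16],   # libertarian-journal ideology control (hypothetical counts)
])

chi2, p, dof, expected = chi2_contingency(outcomes)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # without a small p, "gender studies is uniquely credulous" doesn't fly
```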

3. Pre-test the papers/test instruments. Multiple controls mean you need multiple bullcrap papers from various subfields, and you’d need to pre-test those papers to see if they all exist at a comparable level of bullcrappery. Yeah, that’s hard; you’d probably need to pre-test with Delphi panels and find some way to ensure they were consistent.
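A crude version of that consistency check, with panel ratings I invented for illustration, might look like this:

```python
# Crude sketch with made-up Delphi-panel ratings (1 = obvious garbage,
# 10 = plausible scholarship): are the hoax papers comparably bad across fields?
from scipy.stats import kruskal

gender_studies_paper = [3, 4, 2, 3, 5, 3]
history_of_medicine_paper = [4, 3, 3, 2, 4, 4]
libertarian_paper = [2, 4, 3, 5, 3, 2]

stat, p = kruskal(gender_studies_paper, history_of_medicine_paper, libertarian_paper)
print(f"H = {stat:.2f}, p = {p:.3f}")  # a large p at least doesn't contradict "comparably bad"
```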

Burke’s right: good academic work is hard. Cheesy hoaxes at vanity presses are not.

4. Develop a sufficiently large sample that you will get a good-sized corpus of review and editor text to analyze. I don’t know how many submissions that would take. Since I bet you’d get quite a few desk rejects, this could involve some work.
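For what it’s worth, even a back-of-the-envelope power calculation (with acceptance rates I am assuming purely for illustration) says you would need dozens of submissions per field, not one:

```python
# Back-of-the-envelope sketch with assumed rates: how many submissions per
# field before an acceptance-rate difference means anything?
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

# Assumption for illustration: the control field waves through 5% of garbage
# papers, and you want to be able to detect 20% in the target field.
effect = proportion_effectsize(0.20, 0.05)
n_per_field = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.8)
print(f"~{n_per_field:.0f} submissions per field")  # dozens per field, not one paper total
```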

5. Derive a way to operationalize the editorship variable. Editors are a big deal in journals; some are great, some are terrible, but all of them are a driving force in what gets published, what gets emphasized in reviews, and who gets asked to review in the first place.
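One way to handle that, sketched here with simulated data and placeholder names (nothing below refers to a real journal, editor, or result), is to put the handling editor into the model alongside the field, so editor effects don’t get mistaken for field effects:

```python
# Sketch with simulated data and placeholder names: model the accept/reject
# outcome with both the field and the handling editor as covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "field": rng.choice(["gender_studies", "control_subfield"], size=n),
    "editor": rng.choice(["editor_1", "editor_2", "editor_3"], size=n),
})
# Simulated outcome: acceptance driven by noise plus a modest editor effect.
p_accept = 0.15 + 0.10 * (df["editor"] == "editor_1")
df["accepted"] = (rng.random(n) < p_accept).astype(int)

model = smf.logit("accepted ~ C(field) + C(editor)", data=df).fit(disp=False)
print(model.params)  # the field coefficient is the claim the hoaxers would actually have to defend
```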

Until somebody does something that even approaches this design, I don’t want to hear it. There are all sorts of ways the research design above could go sideways, but again…do the damn work if you want to make claims about scholarly publishing.