My father passed away on Sunday. Here is his obituary: exactly like he was. Understated, straightforward. I won’t mess that up by adding to it.
It’s the 6th anniversary of the shootings at Virginia Tech. We remember.
And then, beautiful Boston.
You know those times when you think you are the crazy one, and then suddenly you find out you’re not?
So for some time, I’ve been reading public health and medical journals and rolling my eyes at what gets published, while my senior faculty lecture me on research quality and impact factors. Well, these medical journals have huge impact factors, and they routinely publish terrible policy research. My attitude has been: hell, I can produce bad policy research as well as these people.
Yesterday, one of my wonderful PhD students directed me to this brilliant blog post from Aid Watch, “Shaky research to solid headlines via medical journals”:
We could go on and on with examples. The British Medical Journal published a study of mortality of age cohorts in five year bands for both men and women from birth to age 95 for 126 countries—an improbably detailed dataset. (The article was searching through all the age groups to see if any group’s mortality was related to income inequality.) Malaria Journal published a study of nationwide decreases in malaria deaths in Rwanda and Ethiopia, except that the study itself admitted that its methods were not reliable to measure nationwide decreases (a small caveat left out later when Bill and Melinda Gates cited the study as progress of their malaria efforts).
The Lancet published a study that tested an “Intervention with Microfinance for AIDS and Gender Equity (IMAGE)” in order “to assess a structural intervention that combined a microfinance programme with a gender and HIV training curriculum.” The conclusion: “This study provides encouraging evidence that a combined microfinance and training intervention can have health and social benefits.” This was a low bar for “encouraging:” only 3 out of the 31 statistical tests run in the paper demonstrate any effects – when 1 out of every 20 independent tests of this kind show an effect by pure chance. (The Lancet was also the culprit in a couple of the links in the first paragraph.) Economics journals are hardly foolproof, but it’s hard to imagine research like this getting published in them.
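The “3 out of 31” point is worth making concrete. A quick back-of-the-envelope calculation (a sketch, assuming the 31 tests are independent, the same assumption behind the “1 out of every 20” figure above) shows how often pure chance produces at least that many significant results:

```python
# If each of 31 independent tests has a 5% false-positive rate,
# how often do at least 3 come up "significant" by chance alone?
from math import comb

n, alpha = 31, 0.05

# Binomial: P(at least 3) = 1 - P(0) - P(1) - P(2)
p_fewer_than_3 = sum(
    comb(n, k) * alpha**k * (1 - alpha) ** (n - k) for k in range(3)
)
p_at_least_3 = 1 - p_fewer_than_3
print(round(p_at_least_3, 2))  # about 0.20
```

Roughly one time in five, a batch of 31 null tests will hand you three or more “effects” — which is why 3 out of 31 is such a low bar for “encouraging.”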
Go read the whole post–it’s worth it for the clever graphic.
I got addicted to reading the blog last night, so I’m signing up for the feed. Really good stuff.
The Journal of Statistical Software has a new manuscript up on the estimation of spatial differences in disease risk, using a package called “sparr”. You can download the manuscript for free.
Davies, T.M., Hazelton, M.L. & Marshall, J.C., 2011, sparr: Analyzing Spatial Relative Risk Using Fixed and Adaptive Kernel Density Estimation in R, Journal of Statistical Software, 39(1), pp. 1-14.
The estimation of kernel-smoothed relative risk functions is a useful approach to examining the spatial variation of disease risk. Though there exist several options for performing kernel density estimation in statistical software packages, there have been very few contributions to date that have focused on estimation of a relative risk function per se. Use of a variable or adaptive smoothing parameter for estimation of the individual densities has been shown to provide additional benefits in estimating relative risk and specific computational tools for this approach are essentially absent. Furthermore, little attention has been given to providing methods in available software for any kind of subsequent analysis with respect to an estimated risk function. To facilitate analyses in the field, the R package sparr is introduced, providing the ability to construct both fixed and adaptive kernel-smoothed densities and risk functions, identify statistically significant fluctuations in an estimated risk function through the use of asymptotic tolerance contours, and visualize these objects in flexible and attractive ways.
The key contribution here comes in the asymptotic tolerance contours. Still using bootstrapping, but the adaptive kernel allows us to estimate appropriate bandwidths from the actual data. Very cool.
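For readers who haven’t met kernel-smoothed relative risk before, the core idea is just the (log) ratio of two density estimates: one fit to case locations, one to control locations. Here is a minimal Python sketch of the fixed-bandwidth version with made-up data — note that sparr is an R package, so none of these names or defaults are sparr’s API, and a real analysis would use its adaptive bandwidths and tolerance contours:

```python
# Sketch of a fixed-bandwidth kernel relative-risk surface
# (illustrative only; NOT the sparr package or its API).
import numpy as np

def kde2d(points, grid, h):
    """Fixed-bandwidth 2D Gaussian kernel density estimate,
    evaluated at each row of `grid`."""
    sq_dist = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    kernels = np.exp(-sq_dist / (2 * h**2)) / (2 * np.pi * h**2)
    return kernels.mean(axis=1)

rng = np.random.default_rng(0)
cases = rng.normal([0.3, 0.3], 0.1, size=(200, 2))  # cases clustered near (0.3, 0.3)
controls = rng.uniform(0, 1, size=(400, 2))         # controls spread over the unit square

xs, ys = np.meshgrid(np.linspace(0, 1, 25), np.linspace(0, 1, 25))
grid = np.column_stack([xs.ravel(), ys.ravel()])

h, eps = 0.1, 1e-12  # eps guards against log(0) where case density vanishes
log_rr = np.log(kde2d(cases, grid, h) + eps) - np.log(kde2d(controls, grid, h) + eps)
# log relative risk peaks near the case cluster and drops far from it
```

The single bandwidth `h` is exactly what the adaptive approach improves on: it lets the smoothing vary with the local density of points instead of being one global guess.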
There are numerous potential applications of the method beyond disease mapping. One of my brilliant colleagues, for example, might benefit from these methods in his research on retail subcenters.