Young (and perhaps many not so young) scientists often assume that most of what they read in the literature reflects real and well-established phenomena. But a recent analysis of studies in neuroscience, published online earlier this month, urges caution both in reading the literature and in designing your own experiments. Neuroscience, the authors claim, suffers from a dearth of statistical power.
“Time and resources often prevent us from performing large well-powered studies but as long as the limitations of the study are borne in mind when interpreting the findings then the conclusions will not be misleading.” —Katherine Button
Studying 49 neuroscience meta-analyses published in 2011, the researchers found that the median statistical power of the 730 included studies was just 21%. In other words, a typical study has roughly a one-in-five chance of detecting a true effect of the size it is looking for.
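To get a feel for what 21% power looks like in practice, here is a minimal simulation sketch. The study parameters are illustrative assumptions, not numbers from the paper: roughly 10 subjects per group chasing a medium standardized effect (Cohen's d of 0.5) lands in the neighborhood of the reported median power.

```python
import numpy as np
from scipy import stats

def empirical_power(n_per_group, effect_size, n_sims=20_000, alpha=0.05, seed=0):
    """Fraction of simulated two-sample t-tests that reach p < alpha
    when a true standardized effect of the given size really exists."""
    rng = np.random.default_rng(seed)
    # Simulate n_sims independent two-group experiments at once.
    a = rng.normal(0.0, 1.0, size=(n_sims, n_per_group))
    b = rng.normal(effect_size, 1.0, size=(n_sims, n_per_group))
    _, p = stats.ttest_ind(a, b, axis=1)
    return float((p < alpha).mean())

# Hypothetical but plausible numbers: 10 per group, Cohen's d = 0.5.
# The result comes out close to the 21% median power in the analysis.
print(empirical_power(n_per_group=10, effect_size=0.5))
```

Four out of five such experiments would miss a perfectly real effect, which is the "missing interesting effects" problem the authors describe.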
But the problem is not just that experiments are missing interesting effects. Over the last 50 years, neuroscience has seen an “increase in research flexibility and the complexity of study designs combined with the stability of sample size and search for increasingly subtle effects,” the article’s authors write. Such a combination “has a disquieting consequence: a dramatic increase in the likelihood that statistically significant findings are spurious.” Low statistical power can also artificially amplify the magnitude of a true effect, because in an underpowered study only the experiments that happen to overestimate the effect reach statistical significance. Low-power studies are also more likely to be affected by biases, such as the choice of statistical model, selective publication of outcomes, and inferior study design. There’s an ethical dimension, too, beyond decisions about how many animals to use in an experiment. “[U]nreliable research is inefficient and wasteful.”
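The effect-inflation point can be seen in the same kind of simulation. This is a sketch under assumed, illustrative parameters (10 subjects per group, a true Cohen's d of 0.5): among the simulated studies that happen to cross the significance threshold, the average reported effect is far larger than the truth.

```python
import numpy as np
from scipy import stats

def mean_significant_effect(n_per_group, true_d, n_sims=20_000, alpha=0.05, seed=1):
    """Average observed Cohen's d among the simulated studies that
    happened to reach statistical significance (p < alpha)."""
    rng = np.random.default_rng(seed)
    a = rng.normal(0.0, 1.0, size=(n_sims, n_per_group))
    b = rng.normal(true_d, 1.0, size=(n_sims, n_per_group))
    _, p = stats.ttest_ind(a, b, axis=1)
    # Observed standardized effect for each simulated study,
    # using the pooled standard deviation of the two groups.
    pooled_sd = np.sqrt((a.var(axis=1, ddof=1) + b.var(axis=1, ddof=1)) / 2)
    d_obs = (b.mean(axis=1) - a.mean(axis=1)) / pooled_sd
    return float(d_obs[p < alpha].mean())

# With a true d of 0.5 and roughly 20% power, the studies that "succeed"
# report an average effect more than double the real one.
print(mean_significant_effect(n_per_group=10, true_d=0.5))
```

Significance acts as a filter that only lucky overestimates pass, so a literature built from underpowered significant results systematically exaggerates effect sizes.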
Early-career researchers in neuroscience and other biological fields are not adequately aware of the issue, Button writes in an e-mail: “Research methods and statistical inference are key to the current model of bioscience research but their importance is not reflected in the time dedicated to their teaching in undergraduate courses.” A lot is at stake. “Ignorance of basic methodological principles leads to poorly designed research and misleading conclusions. It undermines the whole point of scientific investigation.”
More than an eye-opener, the article offers practical advice on how to overcome some of these challenges, almost all of it involving increased openness and disclosure. Fuller disclosure of how data were handled and reported, and making the data themselves available, can help other scientists spot false positives in your work. Making your materials available can facilitate efforts to reproduce your findings. And collaboration can allow data to be combined for larger sample sizes, a strategy that human genetic epidemiology has successfully used to boost the reliability of results in that field.
In her e-mail, Button advises young scientists to “[t]hink hard about the assumptions underlying [their] research design and be up front about any limitations. Time and resources often prevent us from performing large well-powered studies but as long as the limitations of the study are borne in mind when interpreting the findings then the conclusions will not be misleading.” That may mean accepting that your findings are less exciting, she adds. “It is difficult as a young researcher balancing the pressures of a ‘publish or die’ culture with sound research practices, as these inevitably take longer and produce more measured conclusions. But I suppose it depends on whether your career aim is to … add genuine insight to human knowledge or confuse things by adding more false-positives.”