Science has a negativity problem
Exciting new discoveries get all the attention — leaving just-as-important negative results in the dust. And fixing the problem is easier said than done.
Dan Robitzski • July 10, 2017
It may come as a shock, but the vast majority of science, unlike this depiction from “The Man From U.N.C.L.E.,” is particularly unsexy. [Image credit: Wikimedia Commons | Public Domain]
Quick — think of a famous scientist! Odds are you pictured someone like Einstein or Galileo. Or maybe you imagined some generalized concept of a scientist: a dork in a lab coat and goggles shouting “Eureka!” when the chemicals in their test tubes change color.
What those scientists have in common — aside from funky hairdos — is that they all discovered something monumental. Through them, science sounds exciting, active. They make it seem like to do science is to constantly unearth new truths. However, a large fraction of scientific studies does just the opposite, finding nothing at all. And even though you don’t hear about those so-called “negative” studies, they’re just as important as the big discoveries. Perhaps even more so.
These preconceptions convey the idea that the purpose of science is to work towards the next breakthrough. After all, the truth is out there; we need only develop an experiment to find it! However, the purpose of science is really to develop a hypothesis — an idea about how the world works — and then try as hard as possible to disprove it. If a scientist fails to disprove their own idea, then they may be onto something new. But scientists often succeed at disproving their ideas, and there’s currently a stigma that would describe those studies as failures.
I experienced this celebration of positive results during my own brief stint in academic science. As first-semester neuroscience majors, my classmates and I were assigned projects in the laboratory component of a psychology course. The goal was to develop experiments to test whether telling people that their demographic would do worse at a given task would influence their performance. Every group except for mine — which tested whether blondes would give up earlier than brunettes on an impossible puzzle if we told them that we expected brunettes to perform better — found that there was no such effect. But my group found something — after we ran our numbers, we saw the drop in our participants’ performance that we were looking for.
Guess which group’s research my professor wanted to present in a poster. Never mind that the vast majority of experiments found that there was no effect — when you frame science as succeeding or failing to find something, it suddenly becomes logical to ignore the negative results and focus on the positive findings.
Those who do science face tremendous pressure to find something — a new discovery, a previously undetected phenomenon or some finding that they can turn into an academic paper. Often, their ability to secure grants and other funding — things that allow scientists to conduct further research while affording luxuries like food and rent — depends on it. The academic journals to which they submit their research almost exclusively favor these positive results, which are given that name because they indicate that the researcher found something, not because they’re superior. Conversely, the journals almost always reject studies with negative findings — research that doesn’t discover something new but rather verifies that there is no difference between the conditions that were tested.
When I worked at Brown University’s bat lab, the other neuroscientists and I realized that we needed to find a way to publish our ongoing behavioral studies. What had been a long-running study on the echolocation patterns of bats searching for food turned into a pithy abstract on how each of three bats used unique strategies — even though I feel our work was valuable, that was what we could distill into the frame of an exciting new discovery. All of our other data, like my long-term analysis of bat metabolism and feeding patterns, was interesting and useful within our lab, but not exciting to the outside world.
Where did this pressure come from? “I think it’s a little bit from all angles. There’s no one force trying to suppress negative findings, it’s the attitude of the field in general,” says Susanne Brummelte, a developmental psychobiologist at Wayne State University. She’s also on the editorial board of the Journal of Negative Results in BioMedicine, an academic journal committed to providing scientists with a place to publish their negative results. Brummelte feels that the journal is doing its part, but many scientists — who are strongly incentivized to find the next big discovery — still view negative-result-specific journals as a last resort for publishing their work.
Part of the problem, argues Brummelte, is that it’s difficult to distinguish between a study that reveals genuine negative results and a flawed experiment, or one in which the scientists made an error. Without going back and funding replication studies to verify or disprove a researcher’s findings, there’s really no way to tell the difference. To fix the system, Brummelte thinks that the National Institutes of Health — or whoever else funded a given experiment — needs to provide grant money for studies that seek to replicate the findings of other research. “If you can’t go back to compare because you don’t have funding, you’ll never know whether you truly have negative results,” she says.
There’s no way around the fact that positive results are more interesting — to the scientists, to the journals and to the press. Unfortunately, this bias in publishing changes how the world sees science: the published positive results become the current understanding of a subject. The negative results are largely ignored.
But there are more serious things at stake than inaccurate representation of science and how the world works. Negative studies about drugs taken by people around the world every day are just as unlikely to be published. This distorts how those drugs are perceived by the U.S. Food and Drug Administration (FDA), which approves and regulates them, and by the doctors who prescribe them. “If you find that a treatment doesn’t work and you don’t publish it, that won’t help anyone,” says Brummelte.
“There’s this clash between what is really going on and what the papers have been telling them,” says Dr. Erick Turner, a psychiatrist at Oregon Health & Science University who served as a whistleblower of sorts after leaving his job as an FDA reviewer. He used to be one of the people who decided whether or not a new drug should be approved for marketing and consumption.
Turner says that when he was a practicing psychiatrist, before he took a job at the FDA, he — like most people — believed that academic journal articles represented “the truth with a capital T, and the more prestigious the journal the more truthful it must be.” After he started talking about negative results in clinical trials, Turner says, many of his colleagues said they had never heard of such a thing.
In 2008, Turner published a now well-known paper in the New England Journal of Medicine that described the problem of selective publication — that is, only publishing the positive results that make a new drug seem effective — and how that changes reviewer and practitioner perceptions. In his paper, Turner demonstrated that almost a third of all the FDA-sanctioned studies on selective serotonin reuptake inhibitors, commonly known as SSRI antidepressants, were unpublished. Of the unpublished research, all but one study contradicted the findings that got SSRIs like Prozac — which has been prescribed to tens of millions of Americans — approved in the first place.
Meanwhile, a bird’s eye view of SSRI research reveals that the drugs only benefit about half the people who take them. The other half get worse.
“The drug company has what they think is a sound study, but then oftentimes [their drug] doesn’t beat placebo,” says Turner.
How do they get away with this? Turner says that the drug companies that fund this research frame the negative results — after the fact, mind you — as so-called “failed studies.” Rather than accepting that there’s no benefit to the drug, the researchers will retroactively decide that they made an error in their research design that can explain away their lack of positive results.
Never mind that the research was considered sound and thorough until that point.
This means that the regulatory institution that’s supposed to keep us safe is only as powerful as the science laid before it. The FDA’s decision to approve a drug is dependent on the body of research it evaluates, over 85 percent of which is funded by the pharmaceutical companies trying to bring their drugs to the market.
Since we as a society like our science to be clear cut and exciting, the medications on which we rely could have been deemed safe and effective based on crappy and misleading science. At best, that means that some drugs might be useless. At worst, they could be outright dangerous.
In this context, the negative results can be even more important than the positive studies. Either way, they must be present to better contextualize any new findings about a drug. Without knowing how often the medication fails, any statistics about the studies that “succeeded” become meaningless. “All negative results should be published and positive results should be published only sparingly,” said John Ioannidis, an epidemiologist who researches preventative medicine at Stanford University.
Rather than trying to publish their important negative findings, scientists will often search for explanations for why their studies didn’t reach the expected outcomes.
These researchers justify their “failed” verdict by testing their drug against an existing drug that supposedly does the same thing, in addition to testing against an inert placebo. They will point out that not only did their drug fail to outperform a placebo treatment, but so did the drug that’s already on the market. Clearly there must be some mistake.
“They say Prozac didn’t beat placebo either so there’s a problem,” says Turner. But even for Prozac, he explains, roughly half the trials failed to yield positive results.
Even Turner, who brought the problem of unpublishable negative results to public light, is not immune to the pressure to find something new in his work. Back in 2012, he conducted a follow-up study, this time focusing on the same problem around antipsychotic medications. “The results were not as striking as I had with antidepressants,” Turner says. “I sent it to the New England Journal [of Medicine] and they promptly rejected it.”
It turns out that the New England Journal of Medicine refused to publish Turner’s exposé of unpublished negative results because the results were too negative.
So, what can we do about this? How can we improve the system so any research of sufficient scientific rigor can be published? As with any other drive for change, we need to follow the money. While the pressure for newsy, splashy developments comes from all over — journals, organizations that fund research such as the National Institutes of Health, the press, and academia — the change will need to happen from the top down. If scientists simply try to publish more negative results, they’ll still be rejected.
Turner argues that journals will need to change the way they select research for publication. He cites what’s called the registered-reports model, which has started to gain some traction in reputable journals such as Science and PLOS ONE. The idea is that researchers submit their protocols — what they intend to study and exactly how they intend to do it — before they conduct their experiment.
This way, journals can evaluate a study and make a commitment to publish it based on quality rather than results — they can decide whether or not the study represents good, thorough science.
This arrangement, which Turner sees as “the purest approach,” is a bit idealistic, so he proposes a compromise. Scientists, he notes, are used to the standard route: they get their grant, do their study and then, only after they’ve completed their experiment, begin to seek out a place to publish.
Turner’s more feasible solution would be to have researchers submit just their protocol — everything that was written out before the study was conducted — and then, once the journals have deemed the study to be well designed, the researchers send over the results. That way, “they can’t go back and change their minds and say ‘oh, now I see the flaw,’” says Turner.
But Susanne Brummelte thinks that there’s more of a philosophical hurdle inherent to the nature of negative studies. “It’s tough,” she says, “because if you don’t think there’s a difference [between the variables or conditions tested in a clinical trial], why are you doing the study?”
Scientists set up their experiments with the goal of finding something new, she says. It’s harder to justify conducting an experiment when they don’t expect to see any results. And if they’re trying to find that new discovery, how do we know that they didn’t just make an error?
This is the main problem that she faces as an editor at the Journal of Negative Results in BioMedicine. The review process for vetting the research that gets submitted is sometimes more rigorous than that of a traditional journal, she says, “because you really need to prove that you did the study right; that you didn’t just mess up.”
Brummelte feels that to fix the system, grants need to be given for replication research. Until you go back and try again, it’s impossible to tell whether scientists found true negative results or whether there was some error along the way. This ambiguity is where the logic of so-called failed experiments comes from.
“You have to somehow distinguish between replication studies that fail to find a difference when there are 20 studies that show an effect, compared to studies that look at something new [that] hasn’t yet been proven to have an effect,” Brummelte says.
There’s a long way to go, but both Brummelte and Dr. Turner see cause for optimism. Turner points out that there’s concrete progress being made in the form of increased research and meetings being conducted on negative research. “Definitely it’s an attitude change that needs to happen in a way, and it’s starting to happen,” says Brummelte. “But we’re not there yet.”