Understanding the Sunshine Clause

Sometimes we fail. Sometimes we try to do something and it doesn’t work, for any number of reasons. When that something has never been done before, we should give ourselves a little slack, right? After all, how many times did Edison try to make a light bulb? Adults have been telling children forever that what matters most is learning from our mistakes. While this may be obvious in our personal lives – try cooking a new recipe with new ingredients using new tools on a new stove – it can be harder to see in the professional and academic sphere. Especially when it comes to scientific publications and results, it can be hard to step back, look at them with a critical eye, and still see validity in the failures, with all their inherent flaws and biases. Or is it?

One of these flaws you should be aware of is what we call the sunshine clause, otherwise known as positive-results bias.

Essentially, this type of publication bias happens because, understandably, authors are more likely to submit positive results than negative or inconclusive ones. Likewise, editors are more likely to publish these kinds of findings because they are often more exciting, more novel, or more interesting to the audience paying for the subscription. In fact, statistically significant results have been shown to be three times more likely to be published than papers with null results.

What this means is that studies in which a hypothesis is supported are more likely to be published and disseminated than those in which the hypothesis is rejected or the research “failed” in some way – for a myriad of possible reasons.

So, What’s the Big Deal?

Why does it matter that some results are more likely to be published than others? Well, there is both a theoretical and a practical answer to this question.

Theoretically, science – and let’s face it, humankind – is better off when all information is discovered, shared, and considered. Science and research are humankind’s attempt at uncovering natural truths, at discovering the new and unknown. If only the “successful” research is published, our idea of truth is not a reflection of reality. The presence of biases like the sunshine clause muddies the waters of truth regarding what information is out there. It distorts the lens through which we see things because it makes positive findings proportionally over-represented in the pool of knowledge from which we make decisions.

Science is at its best when all findings, no matter how seemingly trivial or insignificant, have equal weight placed on them. Science ought to regard truth as truth – both the truth in positive findings and equally the truth that can be found in negative findings.

Publication biases present practical concerns, as well. For instance, bias distorts the results of meta-analyses and systematic reviews. A meta-analysis is a statistical analysis that combines the results of multiple scientific studies into one, with the aim of critically appraising those studies and synthesizing their findings. However, if some of the data is missing because it wasn’t published, then that analysis can quickly become less reliable.

Moreover, it’s not just any data that’s missing – it’s a certain type of data: data from “failed” research. This means that our analysis will be asymmetrical in its conclusions and, more importantly, we can’t learn from those mistakes. Since the published studies tend to point in one direction rather than another, the conclusions we draw from them will be heavily weighted toward that side, even if that’s not what the full body of research actually suggests.
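To see how this skews a pooled estimate, here is a toy simulation – not from the article, and with all numbers chosen purely for illustration. It generates many small studies of an effect that is truly zero, “publishes” only the ones that come out statistically significant in the positive direction, and compares the average effect across all studies with the average across the published subset.

```python
import random
import statistics

random.seed(42)

def run_study(true_effect=0.0, n=30):
    """Simulate one two-group study; return its effect estimate and
    whether a crude z-test calls it statistically significant."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    # Standard error of the difference in means (rough z-test).
    se = (statistics.pvariance(control) / n + statistics.pvariance(treated) / n) ** 0.5
    significant = abs(diff / se) > 1.96  # roughly p < 0.05, two-sided
    return diff, significant

# The true effect is zero in every study.
studies = [run_study() for _ in range(2000)]

all_effects = [d for d, _ in studies]
# Positive-results bias: only significant, positive findings get "published".
published = [d for d, sig in studies if sig and d > 0]

print(f"mean effect, all studies:    {statistics.mean(all_effects):+.3f}")
print(f"mean effect, published only: {statistics.mean(published):+.3f}")
```

The full body of studies averages out near the true effect of zero, while a “meta-analysis” restricted to the published subset sees a clearly positive effect that does not exist.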

Why Does This Happen?

Before we can figure out what we can do to combat biases like the sunshine clause, we first need to get to the root of the issue. We need to identify the reasons why this happens at all. Why is a paper with a positive result more likely to enter the literature than one with negative results?

There are some statistics-based reasons, such as smaller effect sizes, smaller fields of study, and a greater number and lesser preselection of tested relationships. Sometimes, these biases are a consequence of the methods of experimentation themselves, but that doesn’t mean we can’t or shouldn’t learn from them. Even with the statistical reasons mentioned above, there are insights that can further the research topic – if researchers can have access to that failed research.

At other times, though, biases result from subconscious desires, aversions, and the like. Failed research may not get published because of real human desires, especially those due to financial considerations – positive results are closer to commercialization. Or these discrepancies may be a result of experimenter bias, white coat bias, or the simple fact that results with statistical significance garner more attention from readers.

What Do We Do About It?

Now comes the million-dollar question: what in the world can we do about it? The answer to this question is two-fold. There are remedies the scientific community can take to address this issue and there are steps you can take to account for the existence of these biases in your own work.

One way the scientific community can address the sunshine clause is through better-powered studies. These are studies with samples large enough to give clear results, to test broad questions reliably, and to feed less biased meta-analyses. Basically, scientists can expand the breadth and scope of their research to account for some of the statistical biases we cited above.
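As a rough illustration of why power matters, the sketch below – illustrative only, with an assumed true effect of 0.3 and an assumed common standard deviation of 1 – estimates by simulation how often a two-group study detects that effect at different sample sizes.

```python
import random
import statistics

random.seed(0)

def power(true_effect, n, trials=1000):
    """Estimate the probability that a two-group study with n subjects
    per group detects true_effect via a crude z-test at p < 0.05."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(0.0, 1.0) for _ in range(n)]
        b = [random.gauss(true_effect, 1.0) for _ in range(n)]
        diff = statistics.mean(b) - statistics.mean(a)
        se = (statistics.pvariance(a) / n + statistics.pvariance(b) / n) ** 0.5
        if abs(diff / se) > 1.96:
            hits += 1
    return hits / trials

for n in (20, 80, 320):
    print(f"n = {n:3d} per group -> estimated power ~ {power(0.3, n):.2f}")
```

Small studies detect the modest effect only a fraction of the time, so their “failures” say little; well-powered studies detect it almost always, which leaves far less room for the sunshine clause to distort the record.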

Likewise, enhanced research standards can be used to similar effect. This means that researchers could pre-register their protocols, register their data collection, and then adhere to those pre-registered plans. Following these rigorous methodologies can reduce statistical bias and even help to uncover psychological biases that may affect the research.

Readers can also demand that the sunshine clause be addressed by their favourite journal. Write a letter to the editor and ask how often they publish “failed” research. Start demanding the other side of the story.

When it comes to what you can do about it, on the other hand, it really comes down to thinking critically and cultivating awareness. The first step in defeating publication bias is knowing that it exists. Whenever you hear about or read new research, ask yourself some critical questions about it. For instance, you should ask what research was left off the table to make room for this research, and what the existence of that research would mean for the presented results. You can also ask how a paper with null results would change the conclusions you want to draw from this research.

The best thing you can do is to understand that these biases exist. Just as you would double-check a story from a news publication, you should seek confirmation of scientific results from multiple journals. Editors are just as biased as researchers, or anyone else for that matter, so getting input from a variety of publications can help you counteract the effects of their biases in your own thinking.

Though we may never be able to completely eliminate positive-results bias, we can take steps, both in consumer awareness and in research methodologies, to curb it and lessen its effects.