Scientists test hypotheses against empirical evidence (Popper 1934). This evidence is disseminated through the publication of research in scientific journals. The dissemination of scientific knowledge thus requires a publishing system that evaluates studies without systematic bias. Yet there is growing concern about publication bias in scientific research (Brodeur et al. 2016, Simonsohn et al. 2014). Publication bias can arise if the publishing system penalizes research papers whose findings are small in magnitude and not statistically significant. The resulting selection could lead to biased estimates and misleading confidence intervals in published research (Andrews and Kasy 2019).
Large-scale experiments with academic economists
In a new paper (Chopra et al. 2022), we examine whether the publishing process penalizes research studies with null results and, if so, which mechanisms lie behind this penalty. To address these questions, we conducted experiments with roughly 500 economists at the top 200 economics departments in the world.
The researchers in our sample have rich experience as both producers and evaluators of academic research. For example, 12.7% of our respondents are associate editors of scientific journals, and the average respondent has an h-index of 11.5 and 845 Google Scholar citations. This allows us to study how experienced economists evaluate research studies.
In the experiment, these researchers were shown descriptions of four hypothetical research studies. Each description was based on an actual study conducted by economists, but we modified some details for the purposes of our experiment. Each description included information on the research question, the experimental design (including the sample size and the control group mean), and the study's main finding.
Our main intervention varies the statistical significance of a study's main finding while keeping all other features of the study constant. We randomized whether the point estimate associated with the study's main finding was large (and statistically significant) or close to zero (and thus not statistically significant). Importantly, we keep the standard error of the point estimate the same in both cases, which holds the statistical precision of the estimate constant.
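Concretely, statistical significance at the 5% level depends only on the ratio of the point estimate to its standard error. As a stylized illustration with hypothetical numbers (the actual vignette values are reported in the paper), fix the standard error at 0.05:

\[
t = \frac{\hat{\beta}}{\mathrm{SE}(\hat{\beta})}: \qquad
\frac{0.15}{0.05} = 3.0 > 1.96 \ \text{(significant)}, \qquad
\frac{0.02}{0.05} = 0.4 < 1.96 \ \text{(not significant)}.
\]

Because the standard error is identical in both arms, the precision of the estimate is the same; only its magnitude, and hence its significance, differs.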
How does the statistical significance of a study's main finding affect researchers' perceptions and evaluations of the study? To find out, we asked our respondents how likely the study would be to be published in a specific journal if it were submitted there. The journal was either a general-interest journal (e.g. the Review of Economic Studies) or a suitable top field journal (e.g. the Journal of Economic Growth). In addition, we measured respondents' perceptions of the quality and importance of the study.
Is there a null result penalty?
We find evidence of a sizeable perceived penalty against null results. Researchers in our sample believed that a study with a null result was 14.1 percentage points less likely to be published (Panel A of Figure 1). This effect corresponds to a 24.9% reduction relative to the scenario in which the same study yields a statistically significant result.
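As a back-of-the-envelope check (our arithmetic, not a figure reported in the paper), these two numbers jointly imply the perceived publication probability of the statistically significant version of the study:

\[
p_{\text{significant}} \approx \frac{14.1\ \text{pp}}{0.249} \approx 56.6\%.
\]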
In addition, researchers held more negative views of a study that yielded a null result (Panel B of Figure 1). Researchers in our experiment perceived such a study to be of 37.3% of a standard deviation lower quality. Our respondents also rated studies with null results as 32.5% of a standard deviation less important.
Does experience moderate the penalty against null results? We find that the null result penalty is comparable across different groups of researchers, from PhD students to editors of scientific journals. This suggests that the null result penalty cannot be attributed to a lack of experience with the publishing process.
Figure 1 The null result penalty
Mechanisms
Why do researchers perceive that studies with statistically insignificant results are penalized in the publishing process? Additional features of our design allow us to examine three potential mechanisms.
Communication of uncertainty
Can the way statistical uncertainty is communicated affect the size of the null result penalty? In our experiments, we cross-randomized whether respondents were shown the standard error of the main finding or the p-value associated with a test of whether the main finding is statistically significant. This treatment variation speaks to a long-standing concern in the academic community that p-values and significance testing may contribute to bias in the publication process (Camerer et al. 2016, Wasserstein and Lazar 2016). We find that the null result penalty is 3.7 percentage points larger when the main result is reported with a p-value, showing that the way statistical uncertainty is communicated does indeed matter.
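Under a normal approximation, the p-value is a deterministic function of the point estimate and its standard error, so the two reporting formats are informationally equivalent; any effect of the format on evaluations reflects presentation rather than content. A minimal sketch in Python, with hypothetical numbers rather than our actual vignette values:

```python
# Minimal sketch: under a normal approximation, a p-value can be computed
# directly from the point estimate and its standard error, so reporting
# either one conveys the same information. Numbers below are hypothetical.
from scipy.stats import norm

def two_sided_p(estimate: float, std_error: float) -> float:
    """Two-sided p-value for H0: effect = 0, using a normal approximation."""
    z = abs(estimate / std_error)
    return 2 * norm.sf(z)  # sf(z) = 1 - cdf(z)

se = 0.05  # the standard error is held fixed across treatments
print(f"{two_sided_p(0.15, se):.4f}")  # ~0.0027: large, significant estimate
print(f"{two_sided_p(0.02, se):.4f}")  # ~0.6892: near-zero, insignificant estimate
```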
A preference for surprising results
Our respondents may believe that the publishing process values studies whose results are surprising relative to the existing literature. Indeed, Frankel and Kasy (2022) show that publishing surprising results is optimal if the goal is to maximize the policy impact of research published in journals. Such a mechanism could potentially explain the null result penalty if researchers perceive a large penalty only for null results that are unsurprising to experts in the field. To test this, we randomly provided some of our respondents with expert forecasts of the study's treatment effect. We randomized whether the experts predicted a large effect or an effect close to zero. We find that the null result penalty remains unchanged when respondents are told that experts in the literature predicted a null result. However, when experts predicted a large effect, the null result penalty increases by 6.3 percentage points. These patterns suggest that the penalty against null results cannot be explained by researchers believing that the publishing process favors surprising results: in that case, they should have evaluated null results that experts did not predict more positively, whereas we find the opposite.
Perceived statistical precision
Finally, we investigate the hypothesis that null results are perceived as noisier estimates, even when the objective precision of the estimates is held constant. To test this hypothesis, we conducted an experiment with a sample of PhD students and early-career researchers. The design and main results of this experiment are identical to our main experiment, but we replaced the questions about quality and importance with a question about the perceived precision of the main finding. In this more junior sample of researchers, we also find a large null result penalty. Moreover, we find that null results are perceived as 126.7% of a standard deviation less precise, even though we hold the standard error of the main finding constant (Panel B of Figure 1). This suggests that researchers may rely on a simple heuristic to assess the statistical precision of results.
Broader implications
Our findings have important implications for the publishing system. First, our study highlights the potential value of pre-results review, in which research papers are evaluated before the empirical results are known (Miguel 2021). Second, our results suggest that referees should receive additional guidance on the evaluation of studies, emphasizing the informativeness and importance of null results (Abadie 2020). Our findings also have implications for the communication of research. In particular, they suggest that communicating the statistical uncertainty of estimates in terms of standard errors rather than p-values may reduce the penalty for null results. Our results contribute to a broader debate on the current publishing system (Angus et al. 2021, Andre and Falk 2021, Card and DellaVigna 2013, Heckman and Moktan 2018) and on possible ways to improve the publishing process in economics (Charness et al. 2022).
References
Abadie, A (2020), “Statistical nonsignificance in empirical economics”, American Economic Review: Insights 2(2): 193–208.
Andre, P and A Falk (2021), “What is worth knowing in economics? A global survey among economists”, VoxEU.org, 7 September.
Andrews, I and M Kasy (2019), “Identification of and correction for publication bias”, American Economic Review 109(8): 2766–94.
Angus, S, K Atalay, J Newton and D Ubilava (2021), “Editorial boards of leading economics journals show high institutional concentration and modest geographic diversity”, VoxEU.org, 31 July.
Brodeur, A, M Lé, M Sangnier and Y Zylberberg (2016), “Star Wars: The empirics strike back”, American Economic Journal: Applied Economics 8(1): 1–32.
Camerer, CF, A Dreber, E Forsell, TH Ho, J Huber, M Johannesson, M Kirchler, J Almenberg, A Altmejd, T Chan, E Heikensten, F Holzmeister, T Imai, S Isaksson, G Nave, T Pfeiffer, M Razen and H Wu (2016), “Evaluating replicability of laboratory experiments in economics”, Science 351(6280): 1433–1436.
Card, D and S DellaVigna (2013), “Nine facts about top journals in economics”, VoxEU.org, 21 January.
Charness, G, A Dreber, D Evans, A Gill and S Toussaert (2022), “Economists want to see changes in their peer review systems. Let’s do something about it”, VoxEU.org, 24 April.
Chopra, F, I Haaland, C Roth and A Stegmann (2022), “The Null Result Penalty”, CEPR Discussion Paper 17331.
Frankel, A and M Kasy (2022), “Which findings should be published?”, American Economic Journal: Microeconomics 14(1): 1–38.
Heckman, J and S Moktan (2018), “Publishing and promotion in economics: The tyranny of the Top Five”, VoxEU.org, 1 November.
Miguel, E (2021), “Evidence on research transparency in economics”, Journal of Economic Perspectives 35(3): 193–214.
Popper, K (1934), The Logic of Scientific Discovery, Routledge.
Simonsohn, U, LD Nelson and JP Simmons (2014), “p-curve and effect size: Correcting for publication bias using only significant results”, Perspectives on Psychological Science 9(6): 666–681.
Wasserstein, RL and NA Lazar (2016), “The ASA’s statement on p-values: Context, process, and purpose”, The American Statistician 70(2): 129–133.