From today through September 2023, the Journal of Politics offers a new article type: Registered Reports. A Registered Report (RR) is an empirical research article for which the theory, methods and proposed analysis are pre-registered, reviewed, and accepted in principle for publication prior to data access.
We are energized by the reaction we have received to this policy change. Some were hesitant and skeptical (1) and, as described below, we take these concerns seriously. But most scholars who publicly expressed an opinion were supportive and often enthusiastic. Multiple scholars expressed the view that “this is how science should be” (1, 2). More than 150 showed interest in this innovation by attending our first workshop on Registered Reports. And more than 1,000 political scientists have voluntarily signed up to review Registered Reports, willing to give some of their time to make RRs work.
We are committed to open science principles and we think that promising ideas should prove their value in practice. Therefore, the Registered Reports initiative at the JOP will itself undergo a rigorous evaluation. Just as we demand pre-analysis plans for all experimental studies, we have also registered a pre-analysis plan for how we will evaluate our own experiment. Admittedly, in some regards, our plan falls short of the standards we set for Registered Reports in our RR Guidelines, as the plan lacks specificity on the analysis details. Still, the plan gives you an idea of the expectations we have for Registered Reports.
Overcoming systemic dysfunctionalities
Our main expectation is that Registered Reports help overcome systemic dysfunctionalities in academia. While political research plays a vital and often useful role in informing policy decisions, academia’s institutional structures suffer from well-documented shortcomings that prevent scholarly research from reaping its full potential. One systemic dysfunctionality is the exclusion of certain voices from the scholarly conversation (Alter et al. 2020). Another is the exclusion of certain findings from the published body of knowledge (Rinke/Wuttke 2021). Registered Reports address the latter.
We would not need to worry when a single study gets it wrong if we could hope that, eventually, the accumulated political science wisdom converges on the best possible description of the world. Yet this expectation is futile when the body of scholarly literature is subject to systematic distortions. If systemic pressures lead to the exclusion of certain studies from the published body of knowledge solely based on the results of these studies, then the published literature no longer represents the best but a biased estimate of what political scientists know about the world (Wuttke 2020).
Unfortunately, meta-scientific evidence shows that whether researchers feed their findings into the body of published literature depends not only on the quality of the research but also on its outcomes, as the academic system privileges spectacular and simple findings that tell a gripping story (Gerber et al. 2010, Gerber/Malhotra 2008, Nelson et al. 2018, Nosek et al. 2022, Ritchie 2020). Replications of social science studies show that replicated estimates do not oscillate around the original findings but differ predictably: replicated estimates are almost always weaker than the original result and often close to null, with an overall average at about half the original effect size (Camerer et al. 2018). Similarly, tracing the fate of 221 social science studies funded by a research competition shows that studies were three times more likely to end up published in an academic journal when results confirmed the researchers’ hypotheses than when results were null or mixed (Franco et al. 2014). Publication bias is so strong that some literatures are almost entirely stripped of null or mixed results, such as the literature on democratic innovation, where only one in twenty published studies reports failures (Spada/Ryan 2017). How do we end up in a situation where the body of published research differs from the body of conducted research, so that less spectacular and messier findings are predictably excluded from published knowledge?
Academic incentive structures are built to reward clean, novel and hypothesis-consistent findings (Nelson et al. 2018, Nosek et al. 2022, Ritchie 2020). As professional researchers we have learned to play by academia’s rules, so we know how to give the gatekeepers what they demand. We make our findings fit the initial hypothesis (Zigerell 2017; Lenz/Sahn 2021), switch the hypothesis to make it fit the data (Kerr 1998), or leave undesired findings or entire studies unreported (Franco et al. 2014, 2015). All of these practices combined make research in our and other disciplines look more spectacular, clean and convincing than it really is.
Systematic distortions in the political science literature sound abstract, but they have practical, detrimental consequences for people’s lives because political science studies matter: not all but some of our discipline’s studies will eventually change the course of countries and societies and impact individual life trajectories (Lupia 2021). Whatever we study as political scientists, we must therefore do our best to get it right. As a journal, our job is to align the incentives so that your main concern is getting it right. Policies that pressure researchers to make results seem interesting, spectacular or clean should therefore be avoided, because they invite researchers to give us the results we want rather than the results they got. Because Registered Reports issue conditional acceptance decisions before results are observed, they relieve researchers from the pressure of tinkering with their results to tell a gripping story. Accordingly, our main pre-registered expectation is that Registered Reports will more frequently show mixed and null results — and that would be a good thing.
Hopes and skepticism
In the past, political scientists have had both positive and negative experiences with Registered Reports. Comparative Political Studies reported mixed experiences with a special issue on results-blind reviewing in 2016. The Election Research Preacceptance Competition, organized by Brendan Nyhan and Skip Lupia also in 2016, attracted little interest. We believe that, first, there are specific lessons we can draw from these experiences and that, second, there are good reasons to expect things to be different this time.
What we learned from these experiences is the need to provide guidance to authors and reviewers on how to navigate this new article format. We published detailed guidelines for authors and reviewers as well as an FAQ, and conducted workshops for authors and reviewers. This is a direct lesson from the CPS experience, where reviewers felt uneasy reviewing this new form of results-blind manuscript.
Moreover, the field has learned tremendously about new publication and research practices at lightning speed. In 2016, pre-registration was an oddity, but within a few years it has become standard practice in parts of our discipline. Younger PhD students in particular find it normal to present pre-analysis plans at conferences and receive feedback on their research designs. This has also helped other scholars learn how to review manuscripts that come without a results section. This culture change puts us in a much better position to introduce Registered Reports than only ten years ago.
It may therefore come as no surprise that 300 journals, mostly in other disciplines, have already implemented Registered Reports and that further experiments in political science have also been successes. Last year, German Political Science Quarterly published an innovative Special Issue (Introduction, Conclusion) around the German election in a Registered Report format, showing how even secondary analysis of observational survey data can be credibly pre-registered when the questionnaires are released before the data.
This is a trial
While we are therefore hopeful, we cannot know for sure whether RRs will work as intended. We are introducing RRs as a trial with a fixed endpoint and will then take the time to carefully evaluate our experiences. This may also help other journal editors make informed decisions about whether RRs are for them. We opted for a time-limited trial with a subsequent evaluation phase because the tenure of this editorial team has a fixed endpoint. We allow submissions in a specific time window that gives us a good chance to complete the editorial processing of the submitted Registered Reports during our tenure and to conduct the evaluation. Therefore, please do not be surprised when, in October 2023, the JOP no longer accepts new submissions of Registered Reports.
Also, as an author or reviewer, please bear in mind that things are untested and new for us, too. The editorial process will be imperfect, mistakes will be made, and many open questions will have to be resolved along the way. Yet even if the path is rocky at the beginning, we are glad that many of you are willing to walk it with us. We are looking forward to editing your submissions.
About the Editor
Alexander Wuttke is the JOP's Special Registered Reports Editor.
He is Professor of Digitalization and Political Behavior at LMU Munich. His research focuses on challenges to liberal democracy from the perspective of ordinary citizens. His area of expertise encompasses quantitative methodology, political psychology, public opinion and empirical democracy research.