“A preregistration plan is a plan, not a prison.”

Alexander Wuttke on his open science experiences


The three key takeaways:

  • Preregistration helps researchers define and structure their hypotheses and analysis plans before data collection. This not only increases transparency, but also improves methodological clarity and reduces post-hoc distortions.
  • With registered reports, peer review takes place before the data is collected. Publication depends not on the results, but solely on the relevance and quality of the design. This reduces the pressure to produce only “positive” results.
  • While scientific journals are increasingly adopting open science practices, structural standards are often still lacking in policy advice. Instruments such as preregistration, registered reports or adversarial collaboration could help strengthen the credibility and impact of scientific evidence – even beyond the academic publication system.

How did you personally come into contact with the topic of Open Science?

AW: For my generation, the replication crisis and the so-called Open Science Movement were part of the formative experiences of academic socialisation, as one would put it in political science or education research. Many of the developments now summarised under the term “replication crisis” took place during my doctoral studies. It became clear that numerous studies were not reliable and that systematic biases such as publication bias and p-hacking were widespread – and there were several prominent scientific scandals on top of that. This phase coincided with my entry into scientific work. Through engaging with questions of scientific integrity and transparency, I also came into contact with the topic of replication as part of this broader debate about credibility in research.

There are quite a few voices in economic research who say: “The replication crisis is a problem of psychology – it doesn’t affect us.” Have you noticed a broader crisis in the social and behavioural sciences?

AW: Whether the term “crisis” is really appropriate here is certainly open to debate. But I don’t see this as an exclusively psychological or social science phenomenon. In fact, I know of very few scientific disciplines in which at least some symptoms of the problem have not been identified. The phenomenon extends far beyond the social sciences. One example is the Reproducibility Project: Cancer Biology, which attempted to replicate central studies in cancer research. In the majority of cases, this was not successful. We know of comparable results from drug research and other scientific fields. This points to a structural problem: it is less about discipline-specific weaknesses and more about the incentive systems under which science is conducted. Researchers are under pressure to publish a lot – not necessarily to publish the truth. These disincentives operate across disciplines. In this respect, it is not surprising that replication problems occur in many scientific fields. It is also true, however, that replication problems have been investigated more intensively in some disciplines than in others, which may have contributed to the misperception that the replication crisis is limited to a few fields.

You were socialised as a scientist in the midst of the replication crisis. What was your personal approach to open science?

AW: My approach to Open Science was essentially motivated by the question: how can credible scientific findings be made possible? This touches on questions of scientific integrity, such as incentive structures and statistical procedures – specifically, phenomena such as publication bias, p-hacking and related problems. My first conscious impulses came from psychology – not so much because the problems there were particularly pronounced, but because the discussion was conducted particularly early and intensively there.

At the same time, however, I also realised that similar discussions had existed on a smaller scale much earlier in my own discipline, political science – independently of the replication crisis. As early as the mid-1990s, there was a prominent call for replication, followed later by an initiative of journal editors who campaigned for open access to research data. In some respects, political science can certainly be described as a pioneering open science discipline, in which certain open science practices and policies were established early on – without much public debate, but with consistent application. In electoral research, for example, it is a matter of course to work with publicly accessible secondary data. The provision of replication data was therefore an early practice in many places.

Against this background, my reaction to parts of the open science discussion was occasionally characterised by surprise – for example, at the fact that access to data is apparently not a matter of course in some disciplines. The idea of researchers not sharing their data was initially alien to me – it was simply never an issue in my own environment. But apparently this was common practice in other specialised cultures.

Is there a measure or practice within the framework of Open Science from which you have personally benefited as a researcher and which you would recommend to others?

AW: From my point of view, the biggest change in my own research practice was the introduction of preregistration. Or let’s say: one of the most significant changes – sharing data also has a big impact. But preregistration has fundamentally changed the way I work. It not only contributes to the credibility of scientific findings, but also changes the course of my own research process. You are forced to deal with crucial methodological questions before the data is even collected – that is, at a moment when adjustments are still possible. In the past, it was customary to deal with many detailed questions intensively only afterwards, once the data had been collected. It often turned out that important aspects had not been considered at all beforehand. By then, however, the damage had already been done.

Preregistration forces you to work through these considerations in advance – and it often becomes clear that you do not yet have a well-founded answer to some questions. Irrespective of structural problems in the scientific system, preregistration is therefore also an instrument for better self-organisation. It helps to structure the research process – especially in experimental research or when collecting your own data, where subsequent changes are often no longer possible. The main advantage is that the relevant decisions are made while the design can still be changed.

Some researchers fear that preregistration restricts them too much – especially in exploratory analyses. How do you counter this objection?

AW: There is the apt phrase: “A preregistration plan is a plan, not a prison.” Preregistration is not intended to put researchers in chains, but to create transparency. The aim is to make clear to third parties what was planned in advance and what only emerged while handling the data. This applies in particular to hypotheses: there is a significant difference between a hypothesis that was formulated before the data was collected and then tested, and one that only emerged after the data had been viewed. In the latter case, no hypothesis test took place, because the hypothesis was derived from the data itself. That is by no means illegitimate, but the difference must be characterised openly.

Preregistration does not prevent exploratory research or subsequent adjustments. It merely forces us to make transparent which decisions were made at which point. There are no sanctions or rigid guidelines. Preregistration does not guarantee true findings, but it enables third parties to better assess the credibility of individual findings. In this respect, it is a contribution to traceability – and therefore indirectly to scientific quality. The open science movement is fundamentally not about control, but about openness: with open data, for example, it is about the verifiability of results; with preregistration, it is about the traceability of the path to those results.

Where do you usually preregister your studies?

AW: I generally use the OSF platform. The Open Science Framework has now established itself as a standard and is used by most researchers – including me. It not only offers the opportunity to preregister basic aspects such as the research question or hypotheses, but also allows further materials to be uploaded, such as analysis code written against simulated data. We use this regularly when time permits. It allows us to deal with analytical questions at an early stage: Which model exactly should be used? How exactly do we specify our statistical analysis strategy? Such decisions can be well documented on OSF. The platform is functional and, in my view, satisfactory overall, even if there is still room for improvement in terms of user-friendliness.
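To make the idea of analysis code based on simulated data concrete, here is a minimal sketch – a hypothetical example, not taken from Wuttke’s own studies: the analysis script is written and frozen before any real data exist, using placeholder data with the structure the planned study is expected to produce. The variable names, sample size and assumed effect size are all invented for illustration.

```python
# Minimal sketch: drafting a preregistered analysis against simulated data.
# Everything below (variables, n, effect size) is a hypothetical placeholder.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500  # planned sample size from the (hypothetical) preregistration

# Simulate data with the structure the planned survey experiment should
# produce; the assumed treatment effect of 0.3 is a placeholder.
sim = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "age": rng.integers(18, 80, n),
})
sim["outcome"] = 0.3 * sim["treatment"] + 0.01 * sim["age"] + rng.normal(0, 1, n)

# Exactly this model specification is what would be frozen on OSF;
# after data collection, only the simulated data frame is replaced.
model = smf.ols("outcome ~ treatment + age", data=sim).fit()
print(model.summary())
```

The point is not the particular model, but that the specification is pinned down while it can still be revised without anyone suspecting that it was tailored to the results.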

You have also been Editor for Registered Reports at the Journal of Politics for some time now. How did this role come about?

AW: The editor at the time, Vera Tröger, had introduced a new policy according to which experimental studies in the Journal of Politics would have to be preregistered in future. I commented on this in a blog post and noted that the bigger step for the discipline could be to introduce registered reports – that is, preregistration combined with peer review before data collection. She then contacted me, invited me to help shape the preregistration policy, and together we set up an Open Science working group. This gave rise to the idea of introducing Registered Reports as part of a pilot project – and I took on the editorial supervision.

The Registered Reports format has hardly played a role in economics to date. Can you briefly explain what it is about and what advantage you see in it?

AW: Registered reports address two central problems in the scientific system: publication bias and p-hacking. Preregistration – the prior definition of hypotheses and analysis methods – has become established in many areas, especially in experimental research, to combat p-hacking. It helps to avoid adjustments made only after the data have been seen.

However, preregistration alone does not solve the problem of publication bias, i.e. the systematic non-publication of undesirable or negative results. Studies that do not confirm hypotheses are less likely to appear in scientific journals – even though such findings are important for the advancement of knowledge. Think of drug research: if only studies that show a positive effect are published, a distorted picture of efficacy is created. We are missing the entire range of positive and negative findings.
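The distortion can be illustrated with a toy simulation – my own illustration, not something from the interview: many small studies of the same modest true effect are run, but only the significant “positive” ones count as published, and the published average then overstates the effect.

```python
# Toy simulation of publication bias: publishing only significant positive
# results inflates the apparent effect size. All numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect, n, runs = 0.2, 50, 5000

published, all_estimates = [], []
for _ in range(runs):
    treatment = rng.normal(true_effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    _, p = stats.ttest_ind(treatment, control)
    estimate = treatment.mean() - control.mean()
    all_estimates.append(estimate)
    if p < 0.05 and estimate > 0:  # the journal accepts only "positive" results
        published.append(estimate)

print(f"true effect:               {true_effect:.2f}")
print(f"mean across all studies:   {np.mean(all_estimates):.2f}")  # close to 0.20
print(f"mean of published studies: {np.mean(published):.2f}")      # clearly inflated
```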

This is exactly where Registered Reports come in. The idea is that peer review and editorial decisions are made prior to data collection – solely on the basis of the research question, the theoretical contribution and the research design. This prevents the later results – whether positive, negative or mixed – from influencing the publication decision. The editors make their decision solely on the basis of the relevance of the study and the quality of the research design – but not on whether the results are spectacular.

A very practical advantage is that the review phase takes place at a time when it has the greatest added value – namely when the reviewers’ comments can still contribute to improving the study before the data is collected. And: researchers receive a binding publication commitment if they implement the reviewed research design. This creates planning security – regardless of whether the results are in line with expectations or not.

If you invest several years in a project, it gives you security to know that the study will be published at the end – at least as long as a publication on your CV is the central criterion for scientific success, right?

AW: Exactly, that’s a central problem – especially for academics in the qualification phase without a permanent position. The uncertainty of the career path and the strong dependence on the success of individual publications put enormous pressure on them. If your professional future depends on whether an elaborately planned and financed study “works”, i.e. whether a hypothesis is confirmed, there is a strong incentive to embellish results – consciously or unconsciously.

This is not only a problem for the credibility of science, but also for the researchers themselves: They become dependent on a factor that should actually be impossible to influence in the scientific process – the empirical result. We should observe what the data shows, not hope or work towards it confirming something specific. However, if your career depends on it, this understandably creates stress and uncertainty and false temptations. Registered reports create a certain degree of predictability here. The decision to publish is made on the basis of the research question and design – regardless of the subsequent result. This can have a relieving effect, especially in early career phases.

However, it must also be said that this format requires a change: research has to be planned differently, because many decisions have to be made and justified before the data is collected. This also has practical implications – for example for third-party funding, where deadlines and funding logic are often not aligned with the format. It therefore not only brings advantages, but also changes processes that many have long been used to.

Could you explain these practical implications a little more?

AW: It makes a considerable difference to the research process whether I can decide for myself, for example, to develop my hypotheses in the third month and start collecting data in the sixth month – or whether I first work out my research plan in the third month, submit it to peer review and then have to wait a long time for a decision, which may be positive or negative. If the review is negative, I may have to try again with another journal, without knowing exactly when a positive decision can be expected. Only then can I start collecting data. This can be problematic – for example, if my project ends at that point or approved funding expires. Registered reports are therefore not ideal for every type of research project.

What feedback do you receive from the reviewers on the assessment of the research design in phase one?

AW: More than ten years ago, the journal Comparative Political Studies ran its first pilot project for results-blind review, which was largely received negatively at the time. Many reviewers couldn’t get to grips with the format and wondered why the manuscript ended abruptly without results. We have learnt from this experience. We deliberately invested a lot of effort in raising awareness and built up a pool of reviewers who know and support the format. When we announced the pilot project for Registered Reports, over 1,000 colleagues volunteered in response to our call – an impressive number for a voluntary review commitment.

Many reviewers particularly appreciate the fact that their comments, which they have put a lot of work into, can actually be incorporated into the study design. It is much more satisfying to criticise, explain and clarify methods if they can be taken into account before the data is collected. This is also evident on the part of the authors: an internal evaluation – still unpublished – indicates that most authors find the process constructive and helpful because they receive early feedback that they can implement immediately.

Are there any indications that Registered Reports have changed the nature or significance of the published results – for example with regard to the frequency of null findings?

AW: There are indeed meta-scientific studies that compare registered reports with conventional articles. A central expectation of the format is that the number of so-called null findings will increase – i.e. results that contradict or do not confirm the original hypotheses. This is exactly what can be observed: A study in psychology by Anne Scheel and colleagues found that significantly more null findings are published in Registered Reports. This suggests that the format actually contributes to the reduction of publication bias. There are not yet enough cases available for the JOP to carry out a reliable analysis. But this will also be possible in a few years’ time.

So far, registered reports have mainly been driven by individual journals. Could this development not also be promoted more strongly by science policy – precisely in order to avoid the systematic non-publication of non-significant results? And who could take on additional responsibility here?

AW: Since scientific publications are currently – and presumably will continue to be in the foreseeable future – the central currency in the scientific system, I believe that scientific journals are a sensible starting point for reform. This is precisely where registered reports come in, i.e. at the interface between research and publication. At the same time, however, you raise an important point: there is also a lot of scientific work, particularly in the field of policy advice and policy evaluation, where publication in journals is not a priority at all. It is precisely where credible and transparent research would be particularly important that the institutional standards for quality assurance have so far been the least established.

I regret that many of the now well-proven reform approaches – such as preregistration or registered reports – have hardly been implemented in these application-related fields. However, large clients of evaluation studies – such as ministries or foundations – could set new standards here. For example, through mandatory pre-registration or formats that follow the principle of registered reports. In my view, this would be the next logical step in the further development of scientific practice: the transfer of tried and tested procedures from the academic publication system to neighbouring areas such as policy advice.

You said that the incentive systems in the academic world could be improved – I agree. Where would you start to fundamentally reform the system?

AW: I would specifically introduce preregistration, registered reports and adversarial collaboration in policy-related evaluation and policy advice. There is a need for reform here in particular – and politicians would have the opportunity to set such standards. In other areas, we as researchers must drive change ourselves. But if politicians want a reliable scientific basis for decisions, for example to prepare for future crises, this is precisely where trustworthy procedures should be established.

What is Adversarial Collaboration?

AW: This is a procedure that can help to achieve credible results, especially in controversial research fields. The basic idea: before collecting any data, two research teams with different perspectives – for example, supporters and critics of mask mandates – jointly determine which study design is suitable for examining the research question fairly. They define in advance which conclusion is warranted by which results. This prevents each team from retrospectively interpreting the data so that it fits their own expectations. In other words, we let the data speak for itself and enable genuine learning – despite, and precisely because of, our entrenched preconceptions. Adversarial collaboration aims to minimise bias by jointly agreeing on comprehensible standards before the data even exist. This strengthens credibility – even on politically sensitive issues.
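The “which conclusion for which results” step can be pictured with a small, purely hypothetical sketch – the effect regions, thresholds and labels below are invented, not taken from any real adversarial collaboration: both teams sign off on the mapping before data collection, and afterwards the observed estimate is simply looked up.

```python
# Hypothetical sketch: a results-to-conclusions mapping agreed by both
# adversarial teams before any data exist. Bounds and labels are invented.
from dataclasses import dataclass

@dataclass
class PreAgreedRule:
    lower: float      # smallest effect estimate covered by this rule
    upper: float      # largest effect estimate covered by this rule
    conclusion: str   # the reading both teams have accepted in advance

# Signed off by both teams at preregistration time.
RULES = [
    PreAgreedRule(float("-inf"), -0.1, "evidence against the intervention"),
    PreAgreedRule(-0.1, 0.1, "no practically relevant effect"),
    PreAgreedRule(0.1, float("inf"), "evidence for the intervention"),
]

def agreed_conclusion(effect_estimate: float) -> str:
    """Return the pre-agreed conclusion for an observed effect estimate."""
    for rule in RULES:
        if rule.lower <= effect_estimate < rule.upper:
            return rule.conclusion
    raise ValueError("estimate outside the pre-specified regions")

# After data collection, neither team argues about interpretation:
print(agreed_conclusion(0.04))  # -> "no practically relevant effect"
```

In a real adversarial collaboration the agreement would of course cover design, measurement and analysis as well; the sketch isolates only the pre-commitment to interpretations.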

Suppose a young economic researcher reads this interview and thinks: “That sounds convincing, I want to do that too.” What advice would you give to someone who doesn’t work at LMU? How do you get into the subject?

AW: Everything we have talked about is aimed at generating trustworthy scientific findings. And I think that’s a central concern of almost all researchers: We invest a lot of time and effort in our work because we want to know what is actually true. Of course, career aspects also play a role – and these are sometimes in tension with scientific ideals. But the desire to contribute to reliable knowledge is the real motivation for many.

In this respect, it is not surprising that many of these new practices have spread so quickly. They also remind us why we chose science in the first place. But it is also important to remember that science – and open science even more so – is not about perfection. You can always preregister more thoroughly, document even better, make your work even more reproducible. But it’s not about implementing everything at once. Sometimes it requires compromises – with your own standards, with your time budget. And that’s perfectly fine.

My advice would be: just start somewhere. If pre-registration is no longer possible because the data collection has already been completed, you can still disclose the data and analysis code. Perhaps also the survey instrument used. That is already a relevant contribution – and one that can be easily reconciled with your own scientific self-image. If you have made an honest contribution to understanding the world better and not misleading anyone, then you can look back on your own scientific career with confidence.

Thank you very much!

*The interview was conducted by Dr Doreen Siegfried on 10 June 2025.
This text was translated on 16 June 2025 using DeepL Pro.

About Prof Dr Alexander Wuttke:

Prof. Dr. Alexander Wuttke conducts research at the Geschwister-Scholl-Institute of LMU Munich in the field of digitalisation and political behaviour. Wuttke is Special Editor for Registered Reports at the Journal of Politics. He also leads initiatives to conduct replication studies – for example as part of the “Crowdsourced Replication Initiative” – and is committed to ensuring that research results are robust and replicable. He is also involved in LMU’s Open Science Centre and contributes his expertise in workshops and lectures on topics such as “Registered Reports: Hype and Reality”.

Contact: https://www.gsi.uni-muenchen.de/personen/professoren/wuttke/index.html

Website: https://www.political-behavior.digital/

LinkedIn: https://www.linkedin.com/in/alexander-wuttke-aaa940241/

OSF: https://osf.io/vur7x/

ResearchGate: https://www.researchgate.net/profile/Alexander-Wuttke


