Pre-registration facilitates teamwork

Open science experiences by Franz Prante

Photo of a wall clock and a wall calendar on a turquoise background

The three key learnings:

  • Pre-registration is not a straitjacket, but a tool. It is not about committing yourself early on and blocking every creative idea, but about documenting the research process transparently. Deviations are possible – as long as they are comprehensible and justified.
  • Whether in meta-analyses or smaller projects, recording research questions, hypotheses and work steps from the outset not only improves traceability, but also benefits teamwork and connectivity for new team members.
  • Open science is a cultural change, not a finished standard. International exchange, practical guidelines and embedding in education at the earliest possible stage are important for open science to become a matter of course.

Last year, you gave a presentation on meta-analyses and pre-registration at the Leibniz Open Science Day in Berlin. In it, you emphasised the importance of pre-registration in the context of meta-analyses. In your view, what key problems should this open science procedure address in economic research?

FP: I highlighted two key aspects. First, the advantages for the scientific community. Pre-registrations ensure transparency and reproducibility because hypotheses and methods are defined at the outset. This helps to curb problematic practices such as selective reporting, p-hacking, HARKing (Hypothesising After the Results are Known) and other forms of result-distorting selection. It is also important to distinguish between pre-formulated confirmatory and exploratory hypotheses. The latter are also an important part of science, but should be clearly identified as data-based.

Secondly, pre-registration also has advantages for one’s own research process. It forces researchers to specify research questions, hypotheses, data bases and methods at an early stage. This allows potential problems to be identified more quickly, promotes discussion within the team and improves the overall quality of the projects.

When I conduct a classic study, how does pre-registration differ from a meta-analysis?

FP: In principle, the idea is the same: it’s about transparency and traceability. But there is no “one-size-fits-all” solution. In a traditional experiment, the design is usually relatively clear, from the selection of test subjects to the evaluation. Here, the plan can largely be implemented as it was defined at the outset. The situation is more complex with meta-analyses. They consist of many steps, the course of which cannot be fully predicted. Even the search for literature raises questions: Which databases are included, which search terms are chosen, how can blind spots be avoided? At the outset, it is unclear how many studies will be found, which of them are relevant and whether their results are comparable. Unexpected problems that require adjustments also frequently arise when coding variables or calculating and standardising effect sizes.

For pre-registration, this means that you should document the planned steps as precisely as possible, while at the same time disclosing where flexibility is needed. You have to be aware that you cannot anticipate everything in a meta-analysis. In our case, we described how we would evaluate certain data – but only on the condition that this information was actually available in the studies. The more precisely you run through such scenarios in advance, the better. Nevertheless, meta-analyses always involve decisions that can only be made during the research process.

Let’s say you define the search process and the keywords you want to use in the databases in advance. Now the initial results show that many hits are not relevant. Would you then specify in the pre-registration how you would proceed in such cases, for example by refining search terms or using a more precise search string?

FP: Yes, exactly. We solved this problem in exactly that way, and I would also recommend it: test the process with pilot searches before pre-registration. This is documented in our pre-registration, under the section Stage of Synthesis, point Piloting of Study Selection Process. There we describe which databases we use and that we have tried out different search terms. For example, we initially only looked at the first page of Google Scholar to check whether any relevant results appeared at all. This approach is also recommended in the relevant guidelines: before deciding on a final search string, you should test whether the selected terms actually work. It is important not to extract any data from the studies themselves during this test phase.

At the beginning, you spoke about advantages for the scientific community and for research in general, but also about advantages for individual scientists. What has been your personal experience?

FP: For me, and also for younger or less experienced team members, it was particularly important to take a bird’s eye view. Meta-analyses are very extensive and often take many months, sometimes more than a year. It helps to understand from the outset why we are taking certain steps and how they will fit together later on. This promoted exchange within the team and quickly led to moments of insight, especially in the interaction between experienced and younger colleagues.

Another advantage became apparent when we brought in new team members during the course of our two pre-registered projects. Thanks to our comprehensive documentation and pre-analysis plan, they were able to get started right away and support us, for example, in coding primary studies. This was crucial because we identified many more relevant studies than originally expected through AI-assisted searching. Without this structure, we would have struggled to cope with the additional data volumes. However, pre-registration made it possible to smoothly and easily integrate additional colleagues who could understand exactly what we were working on.

Regarding AI-assisted searching: Does that mean you simply used tools such as ChatGPT to give the command “Find me literature”? Or how did you proceed specifically?

FP: When we conducted the search, ChatGPT had just been released and its potential applications were still unclear. We therefore worked with ASReview. This is specialised open-source software in which we first give the AI a so-called prior set – a small selection of studies that we consider relevant based on titles and abstracts. The AI then receives the entire remaining data set with the task of sorting the studies according to relevance. The software then suggests those papers that it considers particularly relevant. We researchers review these suggestions and provide feedback, acting as researchers in the loop, so to speak: “That was the right decision” or “That wasn’t appropriate.” On this basis, the AI continuously learns and refines the sorting process. This improves the ranking step by step until there are hardly any relevant studies left in the remaining data set.
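The researcher-in-the-loop idea described above can be illustrated with a minimal sketch: a classifier is trained on a small labelled prior set and ranks the remaining pool by predicted relevance, and each screening decision is fed back as a new label before refitting. This is only a simplified illustration using scikit-learn, not the ASReview implementation itself; the abstracts and labels below are hypothetical toy data.

```python
# Sketch of active-learning screening (simplified; the real tool is ASReview).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def rank_by_relevance(labeled_texts, labels, unlabeled_texts):
    """Rank unlabeled abstracts by predicted relevance, most likely first."""
    vec = TfidfVectorizer()
    # Fit the vocabulary on all texts so labeled and unlabeled share features.
    X = vec.fit_transform(labeled_texts + unlabeled_texts)
    X_lab, X_unl = X[:len(labeled_texts)], X[len(labeled_texts):]
    clf = LogisticRegression().fit(X_lab, labels)
    probs = clf.predict_proba(X_unl)[:, 1]  # probability of class 1 (relevant)
    order = sorted(range(len(unlabeled_texts)), key=lambda i: -probs[i])
    return order, probs

# Hypothetical prior set: a few abstracts the researchers already judged.
prior = [
    "monetary policy interest rates and inflation",      # relevant
    "effects of monetary policy shocks on output",       # relevant
    "recipes for baking bread at home",                  # irrelevant
    "gardening tips for spring flowers",                 # irrelevant
]
prior_labels = [1, 1, 0, 0]

# Remaining pool to be screened.
pool = [
    "monetary policy transmission and inflation dynamics",
    "baking cakes and bread recipes",
]

order, probs = rank_by_relevance(prior, prior_labels, pool)
# The reviewer screens pool[order[0]] first, records the decision, adds it
# to the labeled set, and the model is refit -- the loop continues until
# hardly any relevant studies remain in the pool.
```

In the real workflow each accept/reject decision retrains the model, so the ranking improves step by step, which is exactly the "researcher in the loop" dynamic described above.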

Have you also published the prompts?

FP: Strictly speaking, they are not prompts, as ASReview is not a chatbot. What we have published are the data sets that we have marked as relevant to the AI. We have also documented which technical algorithm was used. The entire decision-making process is also accessible. The project files allow you to trace the entire process and view the project in its entirety.

Back to pre-registrations: are there situations in which pre-registration does not make sense?

FP: Basically, I always consider pre-registrations to be possible and useful if you understand them correctly. They are not a straitjacket. Deviations from the original plan are permitted as long as they are justified and made transparent. In every research project, there is always an initial question, hypotheses and an initial idea of how to collect or evaluate the data. As long as that is the case, an initial plan can be formulated.

The crucial question is not so much whether a research project can be pre-registered, but rather whether and when to register the pre-analysis plan on a public platform such as OSF. In our projects, we spent several months working on the pre-registration, running through scenarios and refining the plan before publishing it. Publishing too early, for example when the project is only vaguely outlined and many aspects have not yet been thought through, can actually be a hindrance because it restricts the creative process too much. On the other hand, it makes sense to start structuring the project from the outset, but to choose the moment of publication carefully. Again, the key thing here is not to perform any data analysis before pre-registration that could influence hypothesis generation or the choice of methods.

When we talk about standardisation and the establishment of pre-registrations, what role do you think international networking plays?

FP: International and institutional networking should play a greater role in open science and pre-registration in general. There needs to be a continuous exchange about what best practices are, what guidelines could look like, and what recommendations can support researchers. In my view, it is important to first recognise the issue as a cultural change. I still have difficulty with binding requirements. I believe that we are not yet culturally ready for pre-registration to be practised as a matter of course in all areas. The aim should be to gradually develop this as a matter of course. International and interdisciplinary exchange is indispensable for this. Mandatory regulations, on the other hand, can have a deterrent effect if they are introduced too early. It depends very much on the context: in experimental designs, the practice is already well established, so binding requirements may be easier to justify. In other areas, we are not yet at that point. In addition, if the requirements are insufficient, there is a risk that pre-registration will be misused as a superficial signalling device. That would be wrong.

I wasn’t thinking of mandatory standards at all, but rather of international exchange. If, for example, an institute has already developed a checklist or guideline, one could look around and use it as a guide.

FP: I think that makes perfect sense. Ideally, further developing such instruments should be a community effort. This clearly requires international exchange. I am absolutely in favour of that.

You also presented a template at the Leibniz Open Science Day 2024. What is the idea behind it?

FP: The aim of the template is to provide guidance to researchers and make our own experiences available. It is intended to show how to set up a pre-registration for a meta-analysis in concrete terms and what points need to be considered. It was important to us not to keep it abstract, but to refer specifically to meta-analyses in economics. Of course, there are already various guidelines, such as reporting guidelines for meta-analyses or recommendations for high-quality meta-analyses. Our approach combines such existing templates with the elements for pre-registration and pre-analysis plans, as well as specific requirements from economic meta-analytical research. This creates a practical tool that makes it easier to get started and can serve as a structural aid in the research process.

What does economic research need in order to raise awareness of pre-registration and make it more standard practice?

FP: When you start with a method such as meta-analysis, you look at guidelines and literature anyway to understand the process. This is exactly where pre-registration should be discussed at an early stage as an option that offers clear advantages, while at the same time referring to contact points where further information can be obtained. This is already happening in some disciplines and is considered important. Accessibility is also crucial: researchers need clear, practical guidelines that show step by step how to implement pre-registration.

If you think even bigger, however, it is about cultural change. The topic should be anchored in scientific education, ideally as early as undergraduate studies. Why not start with a pre-analysis plan for empirical bachelor’s or master’s theses? This teaches students early on to plan research projects in a structured manner and to document work steps transparently. This not only improves quality and traceability, but also their own work practices.

Your template covers various sections, from research questions to the analysis plan. In your experience, which parts cause the most difficulty?

FP: Piloting the data search process is particularly time-consuming. It takes a lot of time, but it is crucial in order to have a solid basis later on. Another challenging point is the standardisation of effect sizes. You have to think through as comprehensively as possible in advance which cases may occur, how different measures can be converted into each other and how to document this. This requires a lot of conceptual work.

Added to this is the question of how hypotheses should be tested and which analyses are suitable for this. In meta-analysis, there is a wide range of statistical procedures and meta-regression methods. Depending on team capacity, not everything can be mapped immediately, but it is important to have a clear understanding of which methodology is appropriate for the available data. This also took us a considerable amount of time.

Looking at the topic of open science more generally, what advice would you give to young researchers who are just starting out and want to gain their first experience with open science practices?

FP: My most important advice would be to ensure consistent documentation right from the start. In my view, that is the core idea behind pre-registration: that the entire research process is recorded transparently. This helps not only externally, but also internally: I myself want to be able to understand in a year’s time what I did today. This mindset is crucial. This point is also made by psychologist Moin Syed, who has written a highly recommended essay on persistent myths about open science (Syed 2024). In addition, I would recommend first reading up on existing resources – for example, via platforms such as OSF or, in German-speaking countries, via the Verein für Socialpolitik’s initiative on open science. There you will find the basic ideas and elements of open science. On this basis, you can then implement concrete steps in practice. The same applies here: learning by doing.

With regard to the Open Science Working Group of the Verein für Socialpolitik, in which the ZBW is also involved, do you think open science is already on its way into the mainstream, or does it still need a lot of persuasion?

FP: I see both. At conferences that deal specifically with meta-research, there is a very strong awareness of transparency and open science. At larger and broader economics conferences, however, I still often come across work without comprehensive documentation, sometimes even in journals where the underlying code is not accessible. The situation is better in top journals, where there are clear guidelines and an obligation to upload data and code. This is positive and drives change forward. But there is still room for improvement, for example in the documentation of intermediate steps. In our meta-analyses, we have made every single coding decision traceable. Such practices could become more widely established. That is why initiatives such as those of the VfS and the ZBW or events such as the Leibniz Open Science Day are so important – also for the exchange of experiences across disciplines. It must also be more worthwhile for researchers to invest in open science. For example, we were very fortunate that our project recently won the SaxFDM Open Data Award thanks to its comprehensive documentation. However, there should also be stronger incentives for open science on the part of universities and third-party funding bodies.

If we look beyond the scientific field: fake science, paper mills, AI-generated content and, at the same time, increasing scepticism about science – doesn’t this mean that researchers have an even greater responsibility to create transparency, even if it does not bring any direct benefit to them individually?

FP: Absolutely. Science is being questioned by society, and transparency is the most effective antidote. By definition, fakes cannot be transparent. If they were disclosed, it would be immediately apparent that they are not real science. However, transparency is not only important for society, but also for scientific progress. Research thrives on the fact that we can build on the work of others. I also like the model of open-source software development, where it is standard practice to document, write tests and have changes reviewed by several people. This community principle protects against errors and abuse. We need a similar professional self-image in research. Transparency and reproducibility should not be a “hobby” but must be understood as a core task.

Thank you very much!

The interview was conducted on 15 September 2025 by Dr Doreen Siegfried.
This text was translated on 12 January 2026 using DeepL Pro.

About Dr Franz Prante:

Dr Franz Prante is a research assistant at Chemnitz University of Technology. He is currently working on meta-analyses of the effects of conventional monetary policy and energy price changes. He also conducts research on the socio-economic effects of green government spending.

Contact: https://franzprante.de/

LinkedIn: linkedin.com/in/franz-prante-30270839a

GitHub: https://github.com/franzprante

Essay mentioned in the interview:

Syed, M. (2024). Three Persistent Myths about Open Science. Journal of Trial & Error, 4(2). https://doi.org/10.36850/mr11
