Open Science is an attitude that promotes responsible and future-oriented science
Sebastian Berger on his experiences with Open Science

Photo: Picture People Cologne
The three key lessons learned:
- Preregistration helps to reflect on analytical decisions in advance. It is important to see preregistration as a tool for improving quality, not as a formal obligation.
- Openly accessible data and analysis code create visibility and reduce the workload in the long term. Consistent sharing can help to increase reach and establish international collaborations. Good documentation not only facilitates external reuse, it also reduces queries and internal search costs. At the same time, it signals methodological rigour.
- Open Science is not only methodologically relevant, but also strategically important. Open Science practices can lead to better reviews, strengthen trust in results, and improve connectivity for interdisciplinary collaborations. Those who embrace Open Science as an attitude rather than just a formality are actively positioning themselves in a field that is rapidly becoming more professional.
How did you first come into contact with the topic of Open Science? When and in what context did you become aware of it?
SB: During my doctoral studies at the University of Cologne between 2008 and 2010, Open Science was not yet on my radar. At the chair where I worked, neither power analyses nor preregistration nor other Open Science practices were discussed. However, I was involved in a DFG project on the robustness of psychological research – specifically in personality psychology and diagnostics. Interestingly, these were also the fields that were initially relatively unaffected by the replication crisis, as replication was common there and the research was often correlational.
SB: I first came into direct contact with the problem of poor replicability as a postdoc, when I moved to an institute focusing on social cognition. There, intensive research was being conducted on priming effects – and it was precisely in this context that the well-known problems surrounding “false positives” became particularly apparent. During this phase, I came across the first publications by Joe Simmons, Leif Nelson and Uri Simonsohn – the group that later launched the blog Data Colada. That work made me aware, for the first time, of the systematic weaknesses in the research practices of the day.
That was a turning point for me. I began to increasingly question the prevailing priming research and to grapple with the question of what results we should publish at all. At the same time, the first initiatives such as the Open Science Framework (OSF) emerged, particularly around Brian Nosek’s group. I would say I was relatively early to the game – in the sense of being an “early adopter”. As a postdoc, I had the freedom to work independently for the first time and implemented the principles of Open Science as soon as it was practically possible.
When you say “early adopter,” do you mean the use of the Open Science Framework (OSF) or something else?
SB: Partly. Initially, we mainly used AsPredicted; the OSF only came into play later in our work. The decisive factor was that we started using such tools early on and thus established a different research culture. When I joined Axel Ockenfels’ institute as a postdoc in 2013, I noticed a clear cultural difference. Although there was no explicit mention of Open Science, many principles were already in place – for example, that all hypotheses had to be clearly formulated in advance, and that data and analyses had to be fully prepared and documented internally on servers. These standards were already firmly established within the institute.
Formal preregistration via external platforms – i.e. the public documentation of studies prior to data collection – was added a little later. I would say that started for us between 2012 and 2015. Since then, we have implemented it systematically: confirmatory experiments are preregistered, mostly via AsPredicted, and all data and analysis code are made openly accessible. In our papers, we then refer to the relevant repositories so that all information is fully traceable.
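To make that workflow concrete, here is a minimal, hypothetical sketch of a confirmatory analysis script of the kind that might sit in such a repository – the file name, variable names and test are illustrative placeholders, not taken from Berger’s actual projects:

```python
# Hypothetical sketch of a confirmatory analysis script accompanying a
# preregistered experiment; file name, variable names and the test are
# placeholders for illustration only.
import pandas as pd
from scipy import stats

# Openly shared data set; every variable is documented in the codebook
# stored in the same repository.
df = pd.read_csv("data/experiment1.csv")

# The confirmatory test exactly as specified in the preregistration:
# treatment vs. control on the primary outcome.
treated = df.loc[df["condition"] == "treatment", "outcome"]
control = df.loc[df["condition"] == "control", "outcome"]
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```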
You mentioned a cultural change – initially, data was stored internally and later made publicly available via the OSF. How exactly did you experience this change? What specifically changed for you?
SB: For me, the decisive difference was that, with the change to the role of assistant professor, I was increasingly able to make decisions myself. This also meant that I could formulate clear expectations: anyone who wanted to work with me had to be prepared to work with the OSF and to support the principles of Open Science. I made this a prerequisite for cooperation – and in some cases, that meant discontinuing certain collaborations. Especially in areas such as social cognition and consumer research, where I was previously active, I found that the willingness to adapt to Open Science practices was rather low at first. That led to tensions, but for me it was clear: I only want to work on projects that take these standards seriously and implement them.
And how did potential collaboration partners react when you clearly signaled that cooperation would only take place under Open Science conditions?
SB: I didn’t usually communicate this in a confrontational way. Most of the time, I would say something like, “I would like to preregister this” or “I would like to disclose the analysis code.” This prolonged processes – for example, the time until an experiment could start. Often, the partners simply looked for other opportunities for cooperation. In the field of consumer research in particular, several collaborations came to nothing in this way. It was rarely a clear break, but rather a gradual process. You’re busy with lots of projects anyway, and when priorities no longer align, your attention shifts. I noticed that I was attaching less and less importance to such projects and reorienting myself in terms of content – towards topics where Open Science practices were a natural part of the collaboration.
Did you see the withdrawal of potential cooperation partners as a disadvantage?
SB: No, I don’t see it as a disadvantage. I have always worked in an interdisciplinary manner – with colleagues from economics, business administration, psychology, neuroscience and even biology. This quickly reveals how different research practices are and that some collaborations come to nothing if certain standards are demanded. In some disciplines, there are understandable reasons for this. In neuroscience, for example, detailed preregistration is often difficult to implement because many analytical decisions can only be made during the analysis itself. There are many degrees of freedom, and not all decisions can be anticipated in advance. Preregistration that is too rigid can be unrealistic in this context – and make collaboration difficult or even impossible.
The same applies to field experiments, for example in projects with energy suppliers. There, we work with smart meter data from households. Unforeseen cases arise time and again, such as households that suddenly start using multiple electricity meters because they have additional sources of consumption such as swimming pools. This cannot be anticipated. Whether such cases should be included in the analysis or excluded can often only be decided sensibly in retrospect. Strict preregistration would not allow for such flexibility. In this respect, very narrowly defined preregistrations work well in controlled laboratory settings, but reach their limits in complex, real-world research designs.
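One transparent way to handle such unforeseen cases – sketched here with hypothetical file and variable names rather than the project’s actual data – is to report the preregistered analysis both with and without the affected households and to document the deviation:

```python
# Hypothetical sketch: report results with and without households that
# turned out to have multiple electricity meters, a case not anticipated
# in the preregistration; the deviation is then documented in the paper.
import pandas as pd

df = pd.read_csv("data/smart_meter.csv")  # placeholder file name

# Flag households with more than one meter.
meters_per_household = df.groupby("household_id")["meter_id"].nunique()
multi_meter = meters_per_household[meters_per_household > 1].index

full = df["consumption_kwh"].mean()
restricted = df[~df["household_id"].isin(multi_meter)]["consumption_kwh"].mean()
print(f"All households: {full:.1f} kWh; single-meter only: {restricted:.1f} kWh")
```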
If unforeseeable cases arise during the field phase – such as households with multiple electricity meters for pools or saunas – this can be communicated transparently afterwards, right?
SB: Yes, that’s a valid point. I see two fundamental attitudes towards preregistration, two “views of human nature”, so to speak. One group believes that preregistration helps honest researchers to do better research. I count myself among them. The other sees humans as fundamentally opportunistic, with the result that only strict rules and controls can prevent scientific misconduct. If you follow the latter view, flexible preregistration is not very effective because degrees of freedom are perceived as loopholes. Personally, I see preregistration more as a tool for self-reflection: it helps me plan my studies better – even before resources are invested. This noticeably improves my research.
In addition, preregistration has now become standard practice in many areas. Even sceptical colleagues are increasingly recognising that transparency contributes to credibility. This also has a reputation-enhancing effect. At the same time, I see the limitations of such procedures, for example in the field of sustainability research, where many people work qualitatively or exploratively. For these colleagues, deductive-experimental requirements such as preregistration often seem inappropriate. Although they also conduct research with good intentions, they do not fit into the dominant model. This results in real challenges in interdisciplinary exchange.
I would be interested to hear about your personal experience: you say that you support Open Science and are convinced that these approaches will help us learn more about the world. What reactions and advantages have you experienced yourself?
SB: I have experienced a clear generation gap. Younger colleagues take Open Science for granted. They are familiar with digital tools, can program and are experienced users of repositories. For them, this is often standard practice. Older colleagues, such as former supervisors or long-standing editors, were much more sceptical. It often took more persuasion. I noticed this tension between younger, tech-savvy researchers and a more cautious older generation very clearly in my field – and it is still noticeable to some extent today.
Do you have any advice for researchers who are the only ones in their department who are open to Open Science? How can you get colleagues on board in a constructive way, even if they are higher up in the hierarchy?
SB: Two strategies have proven successful. First, work with good examples. There are now recognised journals that publish registered reports. The American Economic Association also supports trial registries. So you can show that Open Science has long since arrived in the scientific mainstream. Second, respond to the person’s motivation. Those who are genuinely interested in knowledge are more likely to be convinced by substantive arguments, such as examples of undesirable developments or successful replications. For colleagues who react defensively or negatively, structural incentives are often the only thing that helps, for example institutional guidelines or funding criteria that require Open Science practices.
How do you deal with scepticism in Bern?
SB: What helps in such cases is to keep the costs of cooperation low. I offer to take on tasks such as uploading code or data – simply to lower the barrier to entry. If positive feedback on transparency then comes in during peer review – such as “State-of-the-art data handling and code sharing – congratulations” – this is often more convincing than any argument made in advance.
I see – that’s a good approach.
SB: Exactly, positive feedback like this in peer review convinces many people. If you experience repeatedly that manuscripts with Open Science components are better evaluated or accepted more quickly, this also changes attitudes towards these practices. However, you cannot expect researchers who are about to retire – let’s say between 55 and 65 – to build up their own Open Science profile. That’s understandable and not necessarily their job. Much of the work is done across generations within teams anyway. Younger colleagues often have the technical know-how, for example for platforms such as GitHub, and take on the practical implementation. This division of labour works well and is a sensible way to gradually embed Open Science in everyday research.
Could one say that, in addition to interdisciplinary cooperation, intergenerational or interprofessional cooperation is also emerging – in other words, senior researchers contribute their experience and knowledge, while early career researchers contribute their Open Science expertise, and together they create a coherent overall project?
SB: Yes, exactly – even if it is not often explicitly discussed, this division of tasks usually works well. I have worked with very experienced colleagues who tend to take on an advisory role, while I or younger team members implement the Open Science components. This does not always have to be explicitly defined – it often emerges organically. I see myself in a mediating role: I support the younger members in familiarizing themselves with the work and relieve the older ones by setting clear responsibilities. This creates acceptance without overwhelming anyone.
What advice would you give to doctoral candidates who want to practise Open Science but are being held back by their supervisors?
SB: It is important that doctoral candidates are not disadvantaged by this – for example, in appointment procedures. Many simply do not have the opportunity to implement Open Science practices, even though they want to. This should be discussed openly. In addition, I recommend that anyone interested seek further training independently. There are local Open Science groups, online formats, high-quality YouTube channels and open training courses that are easily accessible. Even if you are not allowed to implement certain practices in your own dissertation project, you can still apply them in the background or prepare yourself methodologically – for example, through replication projects, side projects or your own preregistrations.
A common counterargument from supervisors is that Open Science is all well and good, but the effort involved is enormous. While you are uploading data and creating metadata, you could also be writing another paper.
SB: This argument is also familiar from other areas, such as laboratory safety or university administration. Yes, Open Science practices increase the organizational effort. But the scientific consensus is that the benefits outweigh the costs. In addition, the infrastructure is constantly evolving. Tools such as AsPredicted and automated repositories make many processes much more efficient today. The community is actively working to further lower these barriers through better tools, automated processes and support services.
Do you have a specific example where Open Science has really made a difference for you – such as new contacts, collaborations or visibility?
SB: Yes, a key moment was during the coronavirus pandemic. At that time, an international network of colleagues from various universities emerged – Amsterdam, Copenhagen, Chicago, Bern, Hohenheim, Lüneburg – all with a clear commitment to Open Science. It was not an official structure, but an informal yet binding collaboration. Every collaboration was based on Open Science principles – from preregistration to Open Data availability. And it was extremely successful: better publications, more invitations, awards, visibility. Early career awards now often explicitly mention that the use of Open Science was a deciding factor in awarding the prize. In many contexts, it is now almost taken for granted – or at least clearly advantageous.
Can you give a concrete example of how you implement Open Science in practice in this collaboration – and what has come of it?
SB: A central element is consistent sharing, whether it’s lectures, data, code, methods or simply experiences. During the pandemic, we started posting lectures online. This has created a reach that didn’t exist before. Suddenly, people around the world could participate without having to travel – that also changed a lot for me personally. The principle of “sharing” also has a big impact on visibility among younger researchers. Since we started systematically publishing codebooks, programming solutions, methodological tips, and sometimes even career advice, we have seen a stronger bond develop. At a meeting in Amsterdam, a whole group of doctoral students came up to me because they knew my work from the internet. Twenty doctoral students from Amsterdam know Sebastian Berger! That would never have happened before. This has concrete consequences: people know each other, exchange ideas and refer doctoral students between locations. Today, I can say: “Go to Copenhagen, Chicago or Amsterdam – there are good people there.” Maybe it’s a bit of a bubble, yes – but it has real effects. And last but not least, when I publish my code, I naturally check it more carefully. That also improves the quality of the research.
What you’re saying reminds me of a conversation I had with Michael Soliman from the University of Lüneburg. He’s involved in happiness research and said that many of the principles apply well to Open Science. His point was that stable social bonds require honesty and vulnerability – you have to be willing to show your imperfections. And that’s exactly what I see in Open Science: those who share early on – for example, unfinished analyses or open code – make themselves vulnerable, but at the same time create closeness and trust. Would you agree with that?
SB: Absolutely. I recently had a long conversation with a young colleague who is very talented but was very concerned that mistakes would be noticed if she disclosed her work. I told her that honest mistakes are not a problem – on the contrary. People who are taken seriously in science will forgive you as long as you are open about it. If someone finds a mistake in your code, you say, “Thanks for pointing that out – we’ll change it.” Of course, that can be uncomfortable sometimes, especially when there are many co-authors involved. But that’s exactly what improves research. Open Science means making your work available for genuine scrutiny – not just by reviewers, but by the entire community. In the past, the attitude was often different: if the reviewer didn’t notice the mistake, it stayed in. It was a kind of “gaming culture” where it was more about getting through. And that’s exactly what needed to change.
Absolutely.
SB: Today, there are formats such as Go Check My Code or Please Replicate Me, where you actively invite others to check your code or replicate your work. This openness has become part of everyday practice. What also strikes me – especially when it comes to visibility – is that I now regularly receive requests for meta-studies. That used to be a disaster. An email like that often meant a day’s work: first I had to find the old paper, then the right data, determine whether it was still accessible, whether there was a codebook – usually nothing was well documented. Today, it’s different. Last week, on Whit Monday, I received a request from Austria – just as I was returning from a hike. I was able to reply directly from my mobile phone: “Here’s the link, all the data and the code are online.” That was it. Not only is this more efficient, it also takes the pressure off. I no longer have to search for files or explain what the variables mean. Everything is in the codebook, which is available online. The responsibility then lies with the person making the request – and I have my head free for other things.
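As an illustration of what such self-documenting sharing can look like, here is a hypothetical, minimal example of a machine-readable codebook published alongside the data – the variable names are invented for this sketch:

```python
# Hypothetical example of a machine-readable codebook shipped next to
# the data, so that a reuse request can be answered with a single link.
import pandas as pd

CODEBOOK = {
    "participant_id": "Anonymised identifier; one row per participant",
    "condition": "'treatment' or 'control', randomly assigned",
    "outcome": "Primary dependent variable, 7-point Likert scale",
}

df = pd.read_csv("data/experiment1.csv")

# Keep the shared data and the codebook in sync: every documented
# variable must actually exist in the data set.
missing = set(CODEBOOK) - set(df.columns)
assert not missing, f"Documented but missing from the data: {missing}"
```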
What has been your experience with Big Team Science so far?
SB: I have been involved in several such projects, some of which have already been published. I am currently working on several large projects – and my experience has been consistently positive. Big Team Science is a very fulfilling form of collaboration because you know that you are part of a larger, well-coordinated process. I have been involved in various roles: as a core team member, as a supporter, as an analyst or co-analyst, and also in data collection – for example, as coordinator for one country in an international project involving over 70 countries. Regardless of the role, the collaboration is enriching, methodologically challenging and very productive in substantive terms.
How do the different roles work in Big Team Science projects?
SB: Big Team Science is a collaborative research approach in which many teams work together on an overarching question. The aim is to pool resources, data and methodological expertise – for example, for international studies or complex designs involving multiple interventions. The roles are clearly defined: a central core team is responsible for planning and coordination, while other teams – known as country teams – are responsible for tasks such as data collection, translation or local adaptations. In multi-analyst projects, many teams analyse the same data set to reveal methodological variation and the robustness of the results.
Each team makes a partial contribution to the overall project without having to carry out the entire study on its own. This not only promotes efficiency and quality, but also networking, visibility and a strong sense of scientific community. You really feel like you’re part of a team, and that’s very motivating. And you feel like you are contributing to a relevant issue. For example, when around 200 researchers from your own field are involved, and the field itself is not that large, you feel like you are making a real contribution. This also becomes apparent later on at conferences: suddenly, you are connected to many colleagues through joint publications. You are perceived as part of the community. The aspects you mentioned earlier from happiness research – such as the feeling of belonging through openness and cooperation – are very much present here.
I can well imagine that.
SB: Another advantage is that you gain deep insights into different research teams – especially if you take on a coordinating or leading role yourself. Such projects usually follow a similar structure: a responsible person, often a postdoc, takes on the overall coordination and acts as the first author. An experienced senior researcher is the last author of the project. In between, there is a core team, usually consisting of five to ten people, who play a key role in shaping the scientific direction and operational implementation. In addition, there are country teams responsible for data collection, translation or contextual adaptation in the respective countries.
So the team decides on the research design together?
SB: Exactly. And it goes even further than that – project management is also highly structured. I am currently working on the Heat Cognition Project, an international research network. All roles, responsibilities and processes are documented in detail. The document is around 20 pages long. It describes who is responsible for which tasks, how preregistration is carried out, where the data is stored, which publications are planned, what rights and obligations the participants have and how the collaboration is organised. This creates a high level of transparency and you learn an enormous amount, also with regard to international cooperation. In terms of content, the project investigates how heat waves affect human behaviour and psychological processes – a topic on which there has been little systematic research to date. Originally, we wanted to investigate this on a smaller scale here in Bern with a doctoral student. Now the same study is being conducted in parallel in cities such as Bern, Dhaka, Nairobi, New York and Lima. This not only makes the project more scientifically relevant, but also more attractive for publication. The reach is greater, the likelihood of citations increases – and at the same time, the knowledge gained is much broader than if everyone were to conduct a single study at their own location.
There is also an approach where all teams are given the same question and the same data set but choose their own methods. The aim is then to see whether and to what extent the results differ.
SB: Exactly, there are projects like that – and I am currently involved in one that is being submitted to Nature for review. The concept behind it is extremely exciting: you take a published result, i.e. an existing finding, and have many teams analyse the same data set, but using different methodological approaches. The basic idea is that intelligent, well-trained researchers can all work in methodologically sound ways and still arrive at different results, depending on the analytical decisions they make. Our project is about systematically capturing such variability: How robust are published effects really? And how wide is the range of analytical approaches, even among experts? These so-called many-analyst studies show that methodological decisions play a central role – and that different approaches can lead to significantly different results in some cases.
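A stylised sketch of how this variability arises, using simulated data and invented analysis choices (not the project’s actual data or code): two teams run defensible but different pipelines on the same data set and obtain different effect estimates.

```python
# Toy illustration of many-analyst variability on simulated data: the
# same data set, two defensible pipelines, two different estimates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
age = rng.normal(40, 12, n)
treat = rng.integers(0, 2, n)
y = 0.3 * treat + 0.05 * age + rng.normal(0, 1.5, n)
df = pd.DataFrame({"y": y, "treat": treat, "age": age})

# Team 1: raw group difference on all observations.
m1 = smf.ols("y ~ treat", data=df).fit()

# Team 2: adjusts for age and excludes outcomes beyond 2 SD.
df2 = df[np.abs(df["y"] - df["y"].mean()) < 2 * df["y"].std()]
m2 = smf.ols("y ~ treat + age", data=df2).fit()

print(m1.params["treat"], m2.params["treat"])  # noticeably different
```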
A practical question: If a doctoral student says, “Big Team Science sounds exciting, I’d like to get involved,” how do they go about finding such projects? Are there platforms where they can apply?
SB: Such projects are usually advertised publicly – mainly via X or email distribution lists of specialist conferences. Sometimes, however, it’s more informal: those who are already involved pass on relevant information directly within their own network, for example to doctoral students or colleagues.
So if I don’t hear anything on social media or at conferences, would it be an option to simply write to the authors of existing Big Team Science publications and express my interest?
SB: Absolutely. That works well – and there is enormous interest in such projects. In one of our latest projects, which dealt with trust in science and science-related populism [Trust in scientists and their role in society across 68 countries], we received around 500 requests to participate within a few days.
One final question: Where do you see the topic of Open Science in the future?
SB: I believe that in a few years’ time, we will talk much less explicitly about Open Science because it will increasingly become the norm. Many practices that are still being discussed today will simply be taken for granted. This can be seen at several levels. When I started out, for example, we mainly worked with Stata – today, R is standard in many areas, partly because it is open source and facilitates openness in analysis. In psychological training, for example, SPSS is hardly used anymore – this shows how methodological standards are shifting. My prediction is that Open Science will become as natural as publishing in English, which is hardly questioned in German-speaking countries.
Thank you very much!
The interview was conducted on 16 June 2025 by Dr Doreen Siegfried.
This text was translated on 12 August 2025 using DeepL Pro.
About Dr Sebastian Berger:
Dr Sebastian Berger is a lecturer at the Institute for Sustainable Business at Bern University of Applied Sciences. In his research, he combines approaches from psychology, sociology, economics and environmental sciences to investigate the factors that influence individual and collective behaviour in the context of sustainable development. His work has been published in interdisciplinary journals such as Nature Climate Change, Nature Human Behaviour, Science Advances, Global Environmental Change and the Journal of Economic Behavior & Organization.
Previously, Berger was an assistant professor at the University of Bern and a postdoctoral researcher in sociology at Stanford University, in economics at the University of Lausanne and in social psychology at the University of Cologne, funded by the German Research Foundation (DFG) and the Swiss National Science Foundation. He holds a degree in economics (2008) and received his PhD in 2010 with a thesis at the intersection of social and economic psychology.
Contact: https://www.bfh.ch/de/forschung/forschungsbereiche/sustainable-business/ueber-uns/
LinkedIn: https://www.linkedin.com/in/sebastian-berger-80a526203/
OSF: https://osf.io/r3vde/