Open science works when curiosity is the driving force
Martin Binder on his experiences with open science

Photo: © Körber-Stiftung/David Ausserhofer
The three key learnings:
- Open science is not an all-or-nothing approach, but a gradual learning process. Open science practices cannot be fully applied in every project. It is more a matter of assessing what is realistic in each case, whether it be data provision, pre-registration or clean documentation. This takes the pressure off and makes implementation manageable.
- The benefits of open science depend on your own mindset. Those who see transparency as an opportunity for learning, exchange and more reliable results find it enriching.
- Intrinsic motivation is more stable in the long term than strategic orientation towards publication incentives. Those who conduct research out of a genuine interest in knowledge tend to see open practices such as pre-registration or transparent documentation as supportive because they help to improve their own scientific product. Those who see research primarily as a career game, on the other hand, are more likely to see open science as an obstacle.
What personal experiences have you had with open science?
MB: My experiences are varied and depend heavily on the field in question. One development I see is in the provision of data, which has been gaining importance for several years: specialist journals increasingly demand it, and at the same time many researchers want to make their data sets accessible of their own accord. I try to do this too, as long as the legal conditions allow it. Where possible, I upload the data to platforms such as OSF. In economics, however, we still find pre-registration somewhat difficult. The process requires hypotheses to be established before the data are analysed. This is not always practical in my field of work: in happiness research, for example, I often work with large household panel data sets that are updated annually and provided by the DIW, among others. In such cases, true pre-registration is hardly possible, because the data sets already exist and it is not credible to claim that they have not yet been viewed.
How has your own approach to open science practices changed over time?
MB: I have been working more intensively with open science practices for several years now. I find the basic ideas behind open science very convincing and see it as an important step forward. In my view, the publication of non-significant findings is particularly important. Pre-registration offers a suitable framework for this. If an experiment is registered in advance and, ideally, already reviewed, a journal can agree to publish it before any data is even collected. This significantly changes the incentive structure: the focus is on clean methodology, regardless of whether the results meet expectations or not. The concrete implementation is something I am still developing step by step. Only over time have I become more aware of the importance of data availability. Together with co-authors, I am now planning experiments that we would like to pre-register, an approach we did not pursue in the past. Some of the data sets I am currently working with date back to 2017, when pre-registration hardly played a role in our field. It is therefore an ongoing process: whatever can be implemented in the next project is tried out. Where there are legal or organisational restrictions, we reach our limits. One example is newer data that we collected in cooperation with a German city. The responsible statistics authority allows its use but prohibits its publication. In such cases, there is little room for manoeuvre. We weigh up how much energy to put into coordination and sometimes accept that open provision is not possible for bureaucratic reasons. The result is a gradual learning process in which we decide which open science elements are feasible, depending on the scope of the project and the framework conditions.
A question for you as a happiness researcher and expert on job satisfaction: many see open science as a way of intensifying collaboration among colleagues. Those who pre-register, share data or present project results at an early stage open up space for feedback during the development process. In the best case, this results in professional gains or even new collaborations. To what extent can this potential gain in collaboration be linked to job satisfaction?
MB: This directly touches on the need for social integration. If researchers see open science as an opportunity to work more closely with others and develop professionally, this can strengthen both the social and competence dimensions. Those who work openly on their own initiative, understand mistakes as a normal part of scientific practice and regard feedback as a resource can experience a noticeable gain from this. However, this depends heavily on one’s own attitude towards open science, and that attitude is by no means always positive. In the past, open science was also perceived by some as a threat, as a risk of becoming more vulnerable or as an attack on established scientific roles. Particularly in the context of the replication crisis in psychology, there has been significant criticism, and established researchers have disparaged young scientists who promote open science principles, calling them “Stasi” or the “replication police”. Therefore, I would say that open science can promote job satisfaction if its principles are experienced as meaningful and supportive. For those who see them as a burden or an intrusion into their own way of working, or even fear them as an attack on their own reputation, this effect does not occur. Some only gain a more positive perspective once they have gained practical experience. Basically, there are two sides to the coin: the potential for satisfying needs and thus increasing satisfaction, but also resistance that can have the opposite effect.
Many researchers start out in science out of curiosity and idealism, but later experience intense pressure to publish and compete. How does this shift from knowledge-oriented to career-oriented work affect job satisfaction?
MB: From the perspective of happiness research, there is no clear-cut answer to this question because several factors come into play here. Science is not primarily pursued by society to maximise the individual well-being of researchers, but to generate knowledge for society. A certain degree of competition can support this purpose. Without performance incentives, there would be a risk of resources being used inefficiently. As an economist, I would say: Competition can be productive if it is well designed. At the same time, it would be desirable for scientists to be able to work in a way that is personally satisfying. However, rankings, publication pressure and strict performance requirements have their downsides. A classic counterexample is the philosopher John Rawls, who was able to work for years without any pressure to publish and only then published his seminal work “A Theory of Justice”. Under today’s conditions, such a path would hardly be possible.
For young academics, this means strong external incentives and considerable pressure. From a motivational psychology perspective, this is problematic. Those who are primarily extrinsically motivated and define themselves strongly through external evaluation may achieve short-term satisfaction through recognition and comparison, but this form of satisfaction is not very stable. Intrinsic motivation, i.e. interest in a topic and the joy of gaining knowledge, has a more lasting effect. In this respect, the current system can reduce satisfaction, and many report this in personal exchanges. A study on the satisfaction of professors has also just shown that satisfaction is significantly higher where people can work more autonomously and are intrinsically motivated: namely, when they have been appointed and no longer suffer from uncertainty about future employment. I do not think that open science would automatically compensate for these effects. Nor does it mean that competition is fundamentally undesirable. Reputation and external recognition play a role and should continue to do so.
The key is balance. I always tell my doctoral students: yes, strategic publishing is necessary. Methodological knowledge and good planning are important. But just as important, if not more so, is a topic that you are passionate about. Without this intrinsic component, doctoral studies can quickly become stressful. Those who can combine both – the demands of the system and their own curiosity – are more likely to find a stable form of satisfaction.
Some researchers find social impact particularly meaningful: they want their work to contribute to important issues and gain public visibility in return. How does the search for meaning, combined with an open, dialogue-oriented research culture, affect job satisfaction? And how do you assess openness to input from other disciplines as well as from areas outside academia?
MB: Meaningfulness is a key factor in job satisfaction. However, what is experienced as meaningful varies from person to person. This also explains the differences in attitudes towards open science. Those who do not recognise its added value and tend to see its principles as a threat will derive little satisfaction from it. For others, meaning arises precisely from the fact that research becomes more transparent and robust. For me personally, open science improves the quality of my work. Mistakes are still possible, but the results are based on more stable foundations. In contrast to practices we know from the replication crisis, this leads to more robust research. This strengthens identification with one’s own research and has a positive effect on satisfaction. This observation is consistent with a project we conducted on job satisfaction in the skilled trades. It showed that employees were particularly satisfied when they were responsible for a product from start to finish – something holistic that reflects their skills and provides recognisable benefits for others. Professional identity was closely linked to this feeling of having created something complete. I have had a similar experience in science. When a research project becomes coherent, when a question is answered and a text is finished, a comparable feeling of craftsmanship emerges. A manufacture of knowledge. This perspective has helped me to understand scientific work as a craft process: creative, but at the same time methodically precise and focused on a clearly defined result.
Translated, this means that openness towards other disciplines and towards people outside academia can be meaningful if it is experienced as enriching and improves one’s own research product. However, if external expectations dominate the process or restrict autonomy, this can have the opposite effect. It is therefore crucial whether openness is perceived as an extension of one’s own scope of action – or as pressure.
How important is mutual trust – for example, when sharing data or ideas – for well-being in everyday research?
MB: Trust plays an important role. We know from happiness research in general that people with a high level of basic trust in others tend to be more satisfied. This can also be applied to the work context: those who trust their colleagues usually work more calmly and experience greater satisfaction. However, it is not clear whether open science strengthens this trust. Much depends on how researchers themselves perceive the principles. If transparency is experienced as meaningful and supportive, it can promote trust. However, if data is disclosed because a journal requires it, this hardly creates a basis for trust. In such cases, peers continue to be perceived more as competitors. The competitive elements of the scientific system do not disappear with open science. It therefore remains unclear to what extent greater transparency actually creates a more trusting atmosphere. Trust remains important – but it does not arise automatically from open practices. And open science strengthens society’s trust in the work we scientists do!
Many journals, funding bodies and science policy stakeholders now require data, materials and analyses to be shared. At the same time, warnings are occasionally issued about ‘open washing’. From a behavioural economics perspective, this raises the question: how can openness be promoted without enforcing it exclusively through obligations? What is needed to ensure that researchers do not simply see this as yet another requirement, but voluntarily try out new approaches?
MB: My first intuitive reaction would be: what is wrong with obligations? In my view, many elements of open science, such as pre-registration, data transparency, and traceable analyses, are so important that they should be standard practice. Many scientific problems in the past arose because these fundamentals were lacking. In many respects, we owe a duty of care to the taxpayers who finance us! In this respect, I think it makes sense for journals and organisations to formulate appropriate requirements. New practices often take hold anyway as the generations change. From an economic perspective, the question of appropriate incentives also arises. If you want to promote openness, you should above all remove barriers, especially bureaucratic ones. For example, some universities have introduced very complex procedures for open data repositories, involving long forms and mandatory consultations. This is very off-putting. A streamlined process would make more sense: register, upload data, done. The same applies to pre-registrations. There are very complex forms, but also platforms where only a few details are required. Low-threshold offerings with high usability significantly increase usage. It is also important not to cover every potential uncertainty with bureaucracy. A certain amount of basic trust is part of the process. Journals have already provided positive impetus, for example through badges for open data or open materials. Such visible signals can create additional motivation. Nevertheless, I stand by my opinion: many principles of open science are so central that they should not be implemented on a voluntary basis alone. When public funds finance research, transparency is a legitimate requirement. In my view, pre-registering projects or providing data and replication materials is therefore not an unreasonable demand, but rather part of good scientific practice and a legitimate requirement that society places on us as researchers.
If tools for open and transparent research are really easy to use, could this usability lead to researchers practising openness as a matter of course?
MB: Universities often try to protect themselves organisationally and legally, which then ends up in extensive forms. This effort inhibits use. Good technical solutions with high usability, on the other hand, work quickly and intuitively. In my view, OSF and AsPredicted have achieved a great deal because they make it possible to try out procedures with a low threshold. This has significantly lowered the barrier to entry for me and many of my colleagues and noticeably increased acceptance. However, usability is only part of the solution. The attitude towards open science remains crucial. It’s all about mindset. Those who perceive the principles as a threat will not be convinced by a user-friendly interface alone. Persuasion based on content is needed, and in some cases it will certainly be necessary for the system to set clear guidelines so that certain standards become binding.
Some scientists are considered early adopters and are very consistent in their use of open practices. Other researchers tend to be more influenced by their environment and consider whether they should also get involved because colleagues in their own working group are already working with open science practices.
MB: In economics, we are relatively well positioned for this because research rarely happens in isolation. Many projects are carried out by teams of authors. This often brings together more experienced and younger researchers. The younger ones often contribute new methods and principles, while the older ones provide guidance. This mutual learning facilitates the introduction of open practices. In my view, we in economics are therefore moving in a positive direction overall in terms of the dissemination of open science elements.
Where do you see limitations? In other words, situations in which openness and transparency conflict with psychological needs or well-being in everyday research.
MB: Conflicts arise where openness is perceived as a threat, whether to one’s own identity, self-image or professional role. Science as organised scepticism is characterised by criticism, doubt and high standards for its own error culture. This is central to the process of discovery, but often stressful for the individual. Review procedures are a good example of this: feedback usually focuses on what is not working. One rarely reads “That’s an interesting approach” or anything similar. This makes sense from a technical point of view, but it is emotionally demanding. The situation is similar with transparency requirements. Anyone who discloses data runs the risk of others discovering errors. Even if one supports the basic idea, it rarely feels good at first when a published analysis turns out to be demonstrably flawed. If the study on which one’s career is based turns out to be flawed, it takes enormous strength of character to see this as an opportunity. I therefore understand why many established researchers took a very negative stance towards open science principles during the replication crisis.
Such reactions are normal and psychologically understandable at first. The decisive factor is whether one can accept one’s own mistakes and the refutation of one’s own research results as part of the scientific process. Openness also means having the “courage to be wrong” and allowing learning processes to take place. But not everyone finds this a relief. That is why voluntary openness practices have their limits. These tensions do not disappear on their own. Some reservations can only be reduced if certain standards become binding and thus become routine. Only through repeated experience does it become clear that openness is not necessarily a threat.
I consider the mindset question to be central. Shouldn’t it be part of my professional self-image as an individual researcher that I can only ever contribute one piece of the puzzle and that only many small building blocks together form a bigger picture? In other words, do we need more humility about the fact that scientific knowledge only arises through interaction?
MB: That brings us back to an important point. This perspective places research in the context of small, incremental contributions. Much of it is developed gradually, not in one fell swoop. That is why I often find it irritating when comprehensive policy implications are expected from individual studies. Particularly in the case of initial investigations with limited data, caution is advisable. Small projects are a legitimate part of cumulative science. They do not have to carry far-reaching statements on their own. This attitude can also be a relief: if an error is discovered in a limited study, it is not a dramatic setback, but part of a normal scientific learning process.
I am interested in your perspective as a behavioural economist who deals with work and motivation. What advice do you give to young economics researchers who are unsure whether to use open science practices – also with a view to their careers?
MB: With my own doctoral students, I always start with one basic principle: choose a research topic that intrinsically motivates you. Without genuine interest, it is difficult to persevere through the doctoral phase. At the same time, you have to understand how the scientific system works and what demands it places on you. The two go hand in hand: a relevant topic and a realistic view of publication opportunities. In practice, this is often complex. An example: a former doctoral student was working on his first paper. The study yielded several null results. For me, this is a completely normal result of the work and should be publishable. However, it is more difficult for young researchers because null results in many areas continue to have lower chances of success in specialist journals and such results are, of course, not very “exciting”. At the time, we included one finding that was not a null result and discussed the remaining results. In retrospect, it was a balancing act. Without this finding, I might have harmed the doctoral student if the paper had been rejected, even though the open presentation would have made scientific sense. But we were also lucky with the reviewers: in one review, we were actually encouraged to present and explain the null results centrally. Such tensions arise frequently: what would be good scientific practice does not always coincide with what serves career advancement. Colleagues sometimes even warn young researchers against pursuing certain topics or open practices too early. This shows how limited the scope for manoeuvre in the system often still is, especially if you are not employed on a permanent basis.
Nevertheless, I believe it is important for young researchers to learn open principles early on. My doctoral students always work with open science elements wherever possible. For me, this is not a question of incentives, but rather part of good scientific practice. Regardless of this, my most important advice as someone who conducts satisfaction research remains: intrinsic motivation is more effective in the long term than any external reward. Those who view research exclusively as a strategic means to an end will find it difficult to work with satisfaction. On the other hand, those who find a topic that truly interests them are better able to cope with the inevitable demands and uncertainties of the system.
Many report that precise documentation and well-thought-out pre-registrations help them enormously. In their view, clarifying questions such as “What do I want to investigate?”, “What results do I expect?” or “How do I deal with alternative findings?” at an early stage significantly improves the quality of research.
MB: Absolutely. I see it the same way. Such a “design-based approach” is extremely helpful. For me, there is no question that such steps improve scientific work. That is why I expect my doctoral students to apply these practices, not as an additional task, but as part of good methodology. But I also expect the same from myself! Just as I evaluate certain statistical methods as appropriate or inappropriate, the same applies to careful planning and transparency. Preliminary considerations, sample size, decision rules and necessary variables are central to sound research. The replication crisis has made clear the consequences of neglecting these aspects. A recommended overview of this is Stuart Ritchie’s book “Science Fictions”, which clearly summarises the central problems and opportunities for improvement in recent years. Pre-registration, open data and precise documentation are not a panacea, but they are important steps towards more robust results and a better understanding of the methodological challenges that have emerged in many areas.
Thank you very much!
The interview was conducted on 21 November 2025 by Dr Doreen Siegfried.
This text was translated on 13 January 2026 using DeepL Pro.
About Prof. Dr. Martin Binder:
Prof. Dr. Martin Binder is Professor of Social Science Economics at the Faculty of Political and Social Sciences at the University of the Federal Armed Forces in Munich. He conducts research in applied microeconomics with a focus on behavioural economics and satisfaction research. His work often lies at the interface between psychology, political science and sociology. Prof. Dr. Martin Binder previously worked at Bard College Berlin, the University of Sussex, the University of Kassel and the Max Planck Institute for Economics. He received his doctorate and habilitation from Friedrich Schiller University in Jena and studied economics, business administration and philosophy at RWTH Aachen University.
Contact: https://www.unibw.de/universitaet/berufung/neuberufene/prof-martin-binder
Contact: https://mbinder.net/
LinkedIn: https://www.linkedin.com/in/bindermartin/
ORCID.ID: https://orcid.org/0000-0002-5053-3804
