Openness requires structures that promote cooperation, not competition
Thiemo Fetzer on his experiences with open science

Copyright: ZBW; Photo: Bettina Außerhofer
The three key learnings:
- Open science is more than just free access to data and publications. The decisive factor is how knowledge is used, contextualised and communicated. Research therefore requires reflection and responsibility in dealing with its results.
- At present, the scientific system rewards visibility rather than substance. A genuine opening up of science requires different structures. Ideas include, for example, person-related funding, random elements in the allocation of funds and more scope for original, high-risk research.
- Open science needs clear rules, fair access and shared values. Only in this way can it lead to collective responsibility and prevent science from becoming a stage for attention rather than a place of knowledge.
In public debate, science is often thought of in binary terms, as either “open” or “closed”. Open science is considered more transparent, comprehensible and socially effective. Do we need to break out of this binary way of thinking? Does science need new dimensions in order to have an impact on society?
TF: I think we need to understand more clearly the spaces in which science operates. In economic research, we are increasingly observing that research generates narratives, i.e. stories that shape social perception. These narratives can have real economic consequences. Let me give you an example. In one study, we examined how media reports on terrorism can influence economic dynamics, for example in the Global South, where many countries depend on tourism. After an attack in Tunisia with a particularly high number of British victims, the British media reported on it intensively. As a result, the British government had British tourists evacuated from Tunisia. German travellers, on the other hand, mostly stayed where they were. Tunisia suffered economic losses that far exceeded the actual security risk and had to cope with significantly fewer tourists from the UK for a long time afterwards. The media coverage at home ensured that the attack had a long-lasting economic impact.
This shows that information creates economic realities that can cast a long shadow. And what applies to the media also applies to science. My former doctoral student Eleonora Alabrese has demonstrated very nicely how research hardly ever corrects itself. This gives rise to responsibility. Research should not only be valid and reproducible, but should also reflect on the effects it can have. Open science can help to make these processes visible. But openness alone is not enough. What is crucial is how we deal with knowledge and how this knowledge is passed on to society, for example in the form of stories and narratives. This means that we must also consider the possible shadows that our own research can cast.
I think it is important to involve the public more in the decision-making process: how did such a programme come about in the first place, and what would the risks of inaction have been? And in order to enable such a discourse, we need open data, transparent decision-making processes, independent communication of scientific results, a broad understanding and perhaps even a new social contract that enables a government to work in this way.
In science, we often talk about societal impact, i.e. the desire to be socially effective. At the same time, research is highly competitive and heavily economised. Visibility, citations and quantitative impact factors determine success. When science and politics work together, two logics collide. Would you describe the claim of economics to exert social influence as risky under these conditions?
TF: Yes, I consider it dangerous in a way, because it can distort incentive structures in science. When social influence becomes an end in itself, and this is increasingly demanded by public research funding in the UK, competition quickly arises for attention rather than knowledge. Research then becomes a stage rather than a process of discovery. I also observe this among young researchers. Attention can be seductive. It’s a sugar rush. When a paper goes viral, scientific success is often confused with media visibility. That shifts the focus away from substance and depth and towards quick output. I am convinced that good research does not need marketing. It convinces through quality and relevance, not through volume. And unfortunately, the impact of research cannot always be gauged by metrics such as citations, as these can be heavily distorted or even manipulated. This is also related to the fact that the publication process has not adapted to technological realities. Double-blind review no longer really exists, yet the processes still run as if it did. There is a lot of wheeling and dealing, and many people possess knowledge that they should not have if the process were working properly. This turns publishing into a game. I have never taken part in it myself, as it contradicts my ethical boundaries. But unfortunately, this seems to be the current state of affairs. The result will be that research, especially in the social sciences, becomes less relevant.
An example from my own research practice shows why open data, traceable methods and transparent decision-making processes are important so that results remain verifiable, the use of political evidence can be traced and public debate does not depend on selective communication. In the summer of 2020, the United Kingdom introduced “Eat Out to Help Out”. My analysis concluded that subsidising restaurant visits increased infections and accelerated the second lockdown. Anyone interested in the details can find them on my website.
It may also be important to protect the scientific process from attention when it is clear that this is not conducive to gaining knowledge. And then a conflict of interest arises, because universities and sponsors want visibility. Politicians, on the other hand, should not seek out research that confirms an existing narrative, but should allow research that asks questions that have never been asked before. This is particularly important in the social sciences, because here narratives, consciously or unconsciously, help shape political and social change.
How can science be made more independent without losing its social relevance?
TF: The funding system is actually the lever, but unfortunately, that very system is often too cumbersome. By the time a new programme is set up, many research questions have long since evolved. This means that, in the end, you want to, or will, work on different topics than those you formulated in your application. Unfortunately, as with the review process for research papers, evaluators read the applications and are influenced by them, even if unintentionally and subconsciously. This calls into question the usefulness of an application. Inertia is also deeply ingrained in the system. When it takes a good one to two years from application to project start, much of the research is simply no longer relevant. And honestly, many research questions or thematic approaches develop over many years. That is actually a good sign. But anyone who wants to work at the frontiers of knowledge quickly reaches their limits with such structures. Unfortunately, I have therefore experienced research funding more as a burden than as support, especially since the process is also squeezed into bureaucratic structures that are anachronistic and can sometimes intrude deeply on the privacy of researchers.
If we want science to remain innovative and relevant, we need to change the system and perhaps focus less on topics and more on person-oriented funding. This should be supported by targeted mentoring, which should receive similar recognition to publications. This could also accelerate generational change; I sometimes have the feeling that this is where the problem lies. In addition, empirical research should be given special attention, because it involves a great deal of freedom and mistakes are bound to happen. Research is simply a “craft”.
However, research also requires trust. Existing controls should be significantly more efficient and digital. Researchers should have maximum freedom in the use of funds, but also clear transparency of expenditure. This can be ensured if there is a public ledger in which the respective transaction and receipt are stored behind each expenditure. At present, it seems to me that a large part of the administration at universities, precisely because of the slowness of the system, specialises in juggling resources and budgets back and forth. This leads to a loss of accountability, which in turn creates mistrust.
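A minimal sketch of what such a ledger could look like technically, written in Python and with invented amounts, purposes and receipt URLs: each expenditure becomes an append-only record whose hash chains to the previous entry, so entries cannot be altered unnoticed.

```python
import hashlib
import json
from datetime import date


def add_entry(ledger, amount_eur, purpose, receipt_url):
    """Append an expenditure record whose hash chains to the previous entry."""
    prev_hash = ledger[-1]["hash"] if ledger else "genesis"
    record = {
        "date": date.today().isoformat(),
        "amount_eur": amount_eur,
        "purpose": purpose,
        "receipt_url": receipt_url,   # link to the stored receipt (hypothetical URL)
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(record)
    return record


ledger = []
add_entry(ledger, 1200.00, "Conference travel", "https://example.org/receipts/0001")
add_entry(ledger, 350.50, "Survey incentives", "https://example.org/receipts/0002")
```

Whether such a chain lives in a database, a spreadsheet export or a dedicated service is secondary; the point is that every expenditure is linked to a verifiable receipt and a tamper-evident history.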
If funding is to be allocated more to individuals in the future, how can we assess who is eligible for funding without once again rewarding only those who conform?
TF: That is the crucial point. If you really want to promote independent research, you have to fundamentally change the evaluation system. Today, publication lists and third-party funding are used as criteria for success. This favours precisely those researchers who adapt most strongly to existing structures. The system raises its own children. Instead, I would focus more on research awards and personal funding that reward courage, originality and agility. This allows for more risk-taking and greater diversity in ideas, perspectives and life paths. Innovative approaches often arise where people bring different experiences and can draw on a context broader than their own. Innovation does not arise in the silos we currently have.
Such a restructuring would safeguard individual freedom of research, but there is a risk of creeping politicisation if a transition is made to personal funding. This risk, however, can also be contained. In the United Kingdom, for example, part of the funding is now allocated randomly. This reduces distortions caused by application rhetoric or networking and increases equal opportunities, including for non-native speakers. Such a system could be developed further, with a gradual progression to larger ticket sizes and the broader responsibility that comes with them.
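A partial lottery of this kind is straightforward to operationalise. The sketch below is purely illustrative (proposal names, screening outcomes and the number of grants are invented): proposals that pass a basic quality screen enter a random draw for the available funds.

```python
import random

# Hypothetical applications: (name, passed_quality_screen)
applications = [
    ("Proposal A", True),
    ("Proposal B", True),
    ("Proposal C", False),  # screened out before the draw
    ("Proposal D", True),
    ("Proposal E", True),
]

available_grants = 2

# Stage 1: keep only proposals that meet the minimum quality bar.
eligible = [name for name, passed in applications if passed]

# Stage 2: allocate the remaining grants by random draw, which removes
# application rhetoric and network effects from the final decision.
rng = random.Random(42)  # fixed seed so the draw itself can be audited
funded = rng.sample(eligible, k=min(available_grants, len(eligible)))
print(funded)
```

Publishing the seed, or drawing it publicly, keeps the lottery stage transparent and reproducible.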
At the same time, we would have to open up the funding process, i.e. allow more randomness, involve broader communities and give younger researchers a greater say. This would reduce the influence of closed networks in which established players reinforce each other. And we should ask ourselves honestly whether some structures at the top of science have been in place for too long. When the same people always decide on topics and careers, it blocks innovation. One then simply produces as many minions as possible who remain in the same subject area. Research thrives on curiosity and change – not on the cultivation of academic fiefdoms.
Transparency is considered a central ideal in the open science community. Research should be comprehensible, verifiable and collaborative. You have said that openness is important, but not sufficient. What would be your ideal scientific enterprise?
TF: I believe that many elements of an ideal scientific system already exist, albeit outside of traditional science, in private-sector contexts. Platforms such as kaggle.com demonstrate how research can function as a global, open competition. Teams from all over the world compete on so-called challenges, for example for the classification of text, satellite images or protein structures. A research prize of $50,000 is then offered, which goes to the best-performing team. In the end, the best model wins – comprehensible, comparable, transparent.
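What makes such challenges comparable is that every submission is scored with the same metric against the same held-out data. A toy sketch, with invented teams and predictions rather than Kaggle’s actual infrastructure:

```python
# Toy leaderboard: each submission is scored with the same metric
# against the same held-out labels, so the ranking is mechanical.
held_out_labels = [1, 0, 1, 1, 0, 1, 0, 0]

submissions = {
    "team_alpha": [1, 0, 1, 0, 0, 1, 0, 0],
    "team_beta":  [1, 1, 1, 1, 0, 1, 0, 1],
}


def accuracy(predictions, labels):
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)


leaderboard = sorted(
    ((team, accuracy(preds, held_out_labels)) for team, preds in submissions.items()),
    key=lambda item: item[1],
    reverse=True,
)
for rank, (team, score) in enumerate(leaderboard, start=1):
    print(rank, team, round(score, 3))
```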
This logic could be partially transferred to the social sciences. We, too, could rely more on shared data sets, open replications and comparative analyses to make research more robust. Instead of individual teams working in isolation, many could research the same questions in parallel – like a collaborative quality control.
Would it be better if science stayed out of politics altogether?
TF: Science should not stay out of politics, but it should be aware of its impact. Research can reinforce political narratives, sometimes unintentionally, as the media system has similar incentive problems to the scientific publication system. In science, the influence on narratives is particularly problematic when studies are based on small or distorted data sets, which are then read as universal truths. I see some problems in behavioural economics in particular, where information experiments with online samples can sometimes generate very sharp narratives. This often happens with topics that are highly politically charged, because that attracts attention. The authority of science, scientists or academic institutions lends weight to such findings, even if their significance is limited. This comes with responsibility: we must carefully examine which evidence or narratives are actually reliable and where research itself becomes part of the political discourse. A look behind the scenes often reveals that some narratives are driven by subpopulations that react particularly strongly. These details are quickly lost in the media. Openness can help to make these boundaries visible. But I believe that this is precisely why it is becoming increasingly important for the personality of the researcher and their path to knowledge to become part of the context, as this may explain why certain research questions were asked in the first place.
Meta-studies show how important it is to examine results across many studies. Only when different approaches yield similar results can one truly speak of robust evidence and share the results with non-academic expert groups. The idea behind this is: as a researcher, don’t take yourself too seriously – you are only part of the whole.
TF: Yes, I agree. Meta-studies are essential for establishing scientific robustness. Unfortunately, the incentive structures for them are not particularly good. There are also structural problems, because there may simply be limits to the generalisability of social science questions. I find knowledge graphs to be a very helpful construct in this regard. These are data structures that systematically link research results, methods and data sets, often tied directly to a semantic layer. In my own work, which has not yet benefited from research funding, I have created several such graphs, which I believe are very important. In any case, they reveal connections that remain hidden in individual studies. I am currently working on a project on production networks that attempts to do just that: to structure trade and economic data in such a way that it becomes interpretable and connectable. Such approaches can help to make research more transparent and cumulative, i.e. not only to generate knowledge, but also to link it in a meaningful way.
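Such a graph can be represented very simply as subject–predicate–object triples. The entries below are invented placeholders, not Fetzer’s actual graphs; they only illustrate how results, methods and data sets can be linked and then queried for connections that individual studies do not reveal.

```python
# Minimal knowledge graph as (subject, predicate, object) triples.
triples = [
    ("study_1", "uses_dataset", "trade_flows_2020"),
    ("study_1", "uses_method", "difference_in_differences"),
    ("study_1", "reports_effect_on", "tourism_revenue"),
    ("study_2", "uses_dataset", "trade_flows_2020"),
    ("study_2", "uses_method", "event_study"),
]


def query(triples, predicate=None, obj=None):
    """Return subjects whose triples match the given predicate/object."""
    return {
        s for s, p, o in triples
        if (predicate is None or p == predicate) and (obj is None or o == obj)
    }


# Which studies build on the same data set? Links like this stay
# hidden if each study is read in isolation.
print(query(triples, predicate="uses_dataset", obj="trade_flows_2020"))
```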
What role can artificial intelligence play in the further development of open science and meta-studies?
TF: Artificial intelligence can fundamentally change meta-studies. Classic meta-analyses condense results into simple categories, such as “works” or “does not work”. In the process, a lot of context is lost, often for purely methodological reasons, because the measurement strategies are simply not compatible. With AI, it is much easier to access the raw data and thus develop harmonisation standards from the microdata. This allows complex relationships to be identified and new hypotheses to be derived. Much of the hard work can now also be automated, elevating the role of the researcher or research group to a more abstract, but no less important, level.
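A minimal illustration of why harmonised microdata matter, with invented numbers rather than results from any real meta-analysis: effects can only be pooled once they are expressed on a common scale, and a simple inverse-variance average then replaces the crude “works / does not work” coding.

```python
# Invented example: three studies report the same treatment effect,
# but on different scales. Harmonise first, then pool.
studies = [
    {"effect": 0.30, "se": 0.10, "scale": "standardised"},
    {"effect": 3.00, "se": 1.00, "scale": "percentage_points"},
    {"effect": 0.10, "se": 0.05, "scale": "standardised"},
]


def harmonise(study):
    """Convert every effect to the standardised scale (toy conversion rule)."""
    if study["scale"] == "percentage_points":
        return study["effect"] / 10, study["se"] / 10  # assumed conversion factor
    return study["effect"], study["se"]


# Fixed-effect, inverse-variance pooling of the harmonised effects.
weights, weighted_effects = [], []
for study in studies:
    effect, se = harmonise(study)
    w = 1 / se**2
    weights.append(w)
    weighted_effects.append(w * effect)

pooled = sum(weighted_effects) / sum(weights)
print(round(pooled, 3))
```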
Let me give you an example: in agricultural research, there are thousands of studies on carbonised plant material that binds CO₂, can replace synthetic fertilisers and at the same time makes the soil more resilient. Each of these studies examines different conditions – climate, plant species, soil structure, production parameters. With a knowledge graph that captures all relevant parameters, this knowledge could be systematically linked, for example using a theoretical model of biochemistry. This would make it much easier to identify gaps in knowledge and actively steer the direction of research. However, such an easily accessible and queryable knowledge graph does not yet exist. AI could then recognise patterns or develop hypotheses, for example about which combinations or processes are particularly effective, thus generating new knowledge – a kind of meta-research on a large scale.
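A sketch of how such a graph could be queried for gaps, with invented parameter values and no claim about any existing database: the combinations that have already been studied are compared against the full grid of conditions of interest.

```python
from itertools import product

# Invented study records: which (climate, soil) combinations have been tested.
studied = {
    ("temperate", "clay"),
    ("temperate", "sandy"),
    ("tropical", "clay"),
}

climates = ["temperate", "tropical", "arid"]
soils = ["clay", "sandy"]

# Combinations of interest that no study covers yet: candidate research gaps.
gaps = [combo for combo in product(climates, soils) if combo not in studied]
print(gaps)  # ('tropical', 'sandy'), ('arid', 'clay'), ('arid', 'sandy')
```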
This is more difficult in the social sciences. There, knowledge is often based on narratives that are woven around statistical correlations. But these narratives condense a lot of often very important contextual information and thus generalise. As long as scientific careers and funding logic remain more strongly influenced by storytelling and the interaction of these stories with the reader or evaluator, this area will remain susceptible to political or institutional influence, which can sometimes be very subtle. AI can help to recognise patterns, but it does not replace critical reflection on power, motivation and responsibility in research itself.
If you could give our readers from the field of economic research three thoughts or ideas, something that researchers should consider in the context of open science and reformed science, what would they be?
TF: OK, I’ll keep it brief. First: reflect on your motivation. Why are you interested in this particular research question? What personal experience, attitude, life experience or social construct leads you to this topic? Second: live what you research. If you believe that your research can bring about social change, how can you contribute to this yourself, whether in your environment or in your working methods? Science should not only observe, but also exemplify. Third: build community. Seek out people who are interested in similar questions, and try to work with people who may have a different value structure, because this is how new forms of research can be developed together. Science thrives on exchange, not competition.
But we must also remain realistic: at the moment, the scientific system is producing more research than it can process. With the availability of AI, we are experiencing a kind of “nuclear explosion” of “knowledge production”. Suddenly, everyone can work with huge amounts of data. I see this as very problematic in many dimensions. In the current system, which is still strongly shaped by the existing incentive structure, this continues to lead to a race for attention and publications, often without deeper reflection. In the end, technology may well steer us onto a path where we approach everything with a little more humility. Institutionally, the next few years will determine whether we learn to manage research and data access in a targeted manner, i.e. to distribute resources, computing power and attention in a meaningful way. Only then can open science actually function efficiently and responsibly. A public data infrastructure plays a central role in this: it creates fair access, ensures quality and enables scientific authority based on transparency rather than volume.
The European Open Science Cloud (EOSC) aims to make scientific research data openly accessible across Europe, across disciplines and national borders. However, there is also controversy within the open science community as to whether complete openness is still appropriate in this day and age. The question arises: should research really remain completely open, or do we need protected, legitimised infrastructures to ensure the responsible use of knowledge?
TF: Openness is a fundamental principle of scientific work, but it needs clear framework conditions. We cannot simply “keep the system open” as long as the incentive structures remain unchanged. If attention and visibility are the most important currency – “Attention is all you need”, to borrow the title of a famous paper from AI research – then openness easily becomes a stage for self-promotion rather than a means of advancing knowledge. Geopolitics and the governance of the resources of our digital lives – data – also play a latent role in all of this. It often seems to me that some companies with large troves of data deliberately collaborate with well-known researchers or universities. This distorts the process of knowledge production and reinforces systemic inequalities. Sometimes it may simply be strategic marketing, or an attempt by companies to strengthen their corporate social responsibility profile.
It is important that skills, resources and responsibility come together in a meaningful way. Access to data, computing power and funding must be managed in a more targeted manner, not to limit knowledge, but to use it wisely. This also includes questioning the narcissism in the scientific system and embedding the individual more strongly in a collective responsibility. Openness therefore requires structures that promote cooperation, not competition. Only when science changes its incentives and establishes common rules for responsible action can an open research infrastructure such as the EOSC realise its full potential.
Thank you very much!
The interview was conducted on 20 October 2025 by Dr Doreen Siegfried.
This text was translated on 28 January 2026 using DeepL Pro.
About Prof. Thiemo Fetzer, PhD:
Thiemo Fetzer is Professor of Economics at the University of Warwick and the University of Bonn. He is also a visiting scholar at the Bank of England, a member of the Centre for Economic Policy Research (CEPR) and a fellow at the British National Institute of Economic and Social Research (NIESR). Thiemo Fetzer’s research covers a wide range of economic topics, from international trade, economic development and financial markets to spatial and political economics.
Contact: https://www.trfetzer.com/
LinkedIn: https://www.linkedin.com/in/thiemo-fetzer-46360a38/
Bluesky: https://bsky.app/profile/trfetzer.com
GitHub: https://github.com/trfetzer/
Mastodon: https://mastodon.social/@fetzert
