How business administration overcame the moralising trap of Open Science

Daniel Wentzel on his experiences with Open Science

Photo of Professor Dr Daniel Wentzel

Photo © Daniel Wentzel

The three key learnings:

  • The replication crisis has brought about lasting change in business research, not through moralising, but through binding standards. Pre-registration and data sharing are now simply a prerequisite in leading journals. Clear rules take the pressure off; they put an end to the endless moral debate and create reliability.
  • ‘Publish or Perish’ remains the fundamental structural problem. Open Science can create transparency, but it cannot alleviate the existential pressure on early-career researchers. As long as journals effectively only publish significant results and global competition for publication slots grows, research questions will become increasingly narrow and intellectually risk-averse.
  • AI is the next unresolved challenge for scientific practice. What transparency and reproducibility should look like in the age of language models – for instance, through prompt archiving as a new equivalent to data storage – remains entirely open. The questions have been asked; the answers are missing. But the urgency of asking them is undeniable.

What role does Open Science play in your work? And was there a moment that changed everything?

DW: Before I answer: what do you mean by Open Science? The term is broad.

I see it as an umbrella term, ranging from pre-registration and Registered Reports, through data sharing and Open Access, to communication with the non-academic public.

DW: The decisive moment in the fields of marketing, business administration and psychology was certainly the replication crisis ten or fifteen years ago. When it emerged that a great many results could not be reproduced, questions arose about questionable research practices, ranging right up to prominent cases of alleged data fabrication. At some point, it became clear: things cannot go on like this.

Were you already familiar with this topic from your studies?

DW: My studies were some time ago. That came much later. It was only with the replication crisis that reflection began. What are the practices we actually use, and are they effective – and, of course, ethically sound? One problem with this debate, however, was that it was highly charged morally. It wasn’t so much the pragmatic question ‘What can we do better?’ that was at the forefront, but rather the moralising. That made it harder to gain a wider audience.

Do you see a shift happening there? Or are two camps simply irreconcilably opposed?

DW: Nobody is interested in the moral and ethical debate anymore. We’ve been there. Now we need to look ahead. And much of it has simply become standard practice, at least in my discipline. For anyone wishing to publish in the top journals, pre-registration is no longer an option, but a prerequisite. Anyone without one needs a very good argument. The same applies to data sharing. Not at the initial submission, but by the second round at the latest, the message is very often: upload your data, make it available. Exceptions only apply to proprietary company data. Everything else – experiments, surveys, interviews – is not up for debate. I find this extremely helpful. Clear standards take the pressure off the individual. Because where there is flexibility and room for individual decision-making, the moralising quickly returns. But in my view, this chapter is now closed.

We know from psychology that the fewer decisions I have to make each day, the more mental energy I have left for what really matters. Wouldn’t it be a relief for researchers if open science practices were simply the norm and the question ‘Should I do this or not?’ no longer needed to be asked?

DW: Yes, absolutely. The standards are set, and nobody would do things today the way they did ten or twenty years ago. That is clear to everyone, and that is a good thing. What I am increasingly observing at conferences and in conversations with colleagues, however, is a kind of backlash. The question is becoming louder: does this really get us anywhere? This debate centres primarily on two points.

The first concerns what is known as the ‘file-drawer problem’. I carry out twenty studies, but publish only the one that yields a significant result. The other nineteen disappear into a drawer. This creates a pseudo-accuracy that does not actually exist. Added to this is a question of resources. Only very well-resourced universities can afford to carry out so many studies. This gives them a strategic advantage that has little to do with research quality. The rules of the game have changed, but whether the system has actually improved as a result is another matter entirely.

The second point concerns pre-registration itself. Do pre-registrations actually help us in the scientific discovery process? Research also thrives on serendipity – that is, on unexpected findings and ideas that only emerge during the process. Anyone who only pre-registers hypotheses they already know will hold water is avoiding the risky, the unexpected. The straitjacket intended to ensure quality can simultaneously stifle creativity and genuine discoveries.

Is that your personal opinion, or are you describing what is being discussed within the community?

DW: Both. There is lively debate. But I certainly share the scepticism. When I begin a research process, I often don’t know where it will lead. But if the system demands that I must know all that in advance, it becomes problematic. Pre-registration as process documentation – very good. Pre-registration as a predetermined conclusion – that is another matter.

In my view, pre-registration does not mean predicting the result, but rather making transparent how I approach a question, what my initial hypotheses are, which method I choose, and whether I am taking an exploratory or confirmatory approach. Isn’t that, at its core, simply sound methodological documentation?

DW: That’s how it should be. In practice, however, I often find that a confirmatory approach is assumed. Anyone who says, ‘I’m going to approach this exploratively and see what emerges’, is frequently told by top journals: ‘That’s not enough.’ And that then effectively means you have to set up studies all over again. You pre-register an exploratory study, find something, and then you’re told: ‘Now you know what you need to look for. So do it again, this time with a confirmatory pre-registration.’ These are technical excesses that have little to do with the core idea of pre-registration. Because the core idea is undoubtedly sound, not only in terms of the quality of results, but also as process documentation.

I notice this in my own work. It is extremely helpful to be able to trace what one actually intended to do two or three years ago and how one arrived at certain results. The same applies to sharing analysis scripts. That, too, has now become standard practice, and rightly so. Very few people are disciplined enough to document this carefully of their own accord. If the structure gently compels you to do so, that is a very positive development. As for the question of what impact these standards have on the research questions that are actually still being asked, I still see some teething problems there. But I am confident that this will resolve itself once the community has learnt to handle these tools with greater confidence. However, I believe this is an issue that is currently being discussed quite intensively.
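The point about sharing analysis scripts can be pictured with a minimal, hypothetical sketch: an analysis that fixes its random seed and records a provenance trail alongside its result, so that anyone rerunning the script later can trace what was done. The function, the bootstrap analysis and the field names are illustrative assumptions, not anything from the interview.

```python
import hashlib
import json
import random
import statistics
from datetime import datetime, timezone

def run_analysis(data, seed=42):
    """Run a simple illustrative bootstrap of the mean and return the
    result together with a provenance record, so the analysis can be
    traced and reproduced later -- the spirit of sharing analysis scripts."""
    rng = random.Random(seed)  # fixed seed -> identical results on re-run
    boot_means = [
        statistics.mean(rng.choices(data, k=len(data))) for _ in range(1000)
    ]
    provenance = {
        "script": "run_analysis",
        "seed": seed,
        # hash of the input data, so others can verify they used the same data
        "data_sha256": hashlib.sha256(json.dumps(data).encode()).hexdigest(),
        "run_at": datetime.now(timezone.utc).isoformat(),
    }
    return {
        "mean": statistics.mean(data),
        "boot_sd": statistics.stdev(boot_means),
        "provenance": provenance,
    }

result = run_analysis([2.1, 3.4, 2.9, 3.0, 2.7])
print(result["mean"], result["provenance"]["seed"])
```

The seed and the data hash are the two pieces that make the shared script verifiable rather than merely readable: the same inputs and seed yield the same numbers on anyone’s machine.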

You described the replication crisis as a turning point. Was there a specific moment, an initial practice, when you realised: ‘This is changing my work and bringing me something’?

DW: I wouldn’t say it brings concrete benefits as such. Much of it has simply become standard practice, and that’s a good thing. But there is one experience that really surprised me in a positive way. We once applied for a Registered Report together with some colleagues. The aim was to replicate a finding – one that fitted well within the context of the replication crisis. An exciting result, but how robust? That was open to question. So we submitted the proposal to a renowned journal, and the experience was extraordinary. What impressed me about it was this: normally, with a research project, you think tactically from the outset. What will the reviewers criticise? How do I position the manuscript to pre-empt criticism? That’s not a failure; it’s a normal part of the game. But it also means that the pursuit of knowledge doesn’t always come first.

With this Registered Report, it was different. For the first time in a long while, I asked myself: if I really want to answer this question, what would actually be the best study for it? Regardless of any tactical considerations. And reviewers and editors supported exactly that. The study was incredibly labour-intensive, but it was fun because we were guided by the pursuit of knowledge and not by the publication system. Yes, one could say: that’s how it should always be. But that’s just not the case, given the constraints under which we all work. Which makes it all the more remarkable when it does work that way.

Many begin their academic careers with genuine curiosity and a desire to explore questions with an open mind. At some point, the pressure to publish erodes this motivation. Could Open Science counteract this by offering a kind of liberation from the constraints of the system, guiding researchers back to their original interest in discovery?

DW: That would be a very high expectation of Open Science. And I believe it exceeds what these principles can achieve. The real pressure comes from the academic system itself, from this relentless ‘publish or perish’ culture, which isn’t getting better, but worse as the years go by. I often say this openly to my PhD students and postdocs. I have the luxury of thinking without being bound by outcomes. The worst that can happen to me is that I publish less for a few years. For them, there is much more at stake in terms of their livelihoods. They do not have that luxury. They must conform or give up on a career in science. One can rightly find that problematic. As an individual, you cannot change it.

However, there are discussions about publishing even non-significant results in prestigious journals to reduce publication bias and open up new avenues for researchers, including early-career researchers. Do you see any realistic chances of this happening?

DW: The idea is good – but it’s not gaining traction. No editor would openly say they only accept significant results. In practice, however, that is exactly what happens. The reason is structural. Global competition for publication space is growing – from China, and soon from India – whilst the available journal space remains constant. As a journal with a strong reputation, I can afford to always select what are deemed the best results. I simply have no need for non-significant findings.

We investigated this in a dedicated research project, a text analysis of thirty years of marketing research across the most important journals. The result was sobering. The questions are getting smaller and smaller. The big breakthroughs are disappearing; what remains are narrowly defined questions with clear-cut answers, methodologically sound but increasingly lacking in intellectual boldness. In a follow-up study involving around fifty editors and department heads, we contextualised this culturally: the community as a whole lacks the courage to really change course here. A concrete example: a renowned journal has introduced a sort of ‘Replication Corner’. You submit your design and research question, get the green light – and are then allowed to pre-register. Sounds good. But the journal reserves the right to make a final editorial decision at the end. Is the result exciting enough? In doing so, it reduces the whole idea to absurdity.

That sounds counterproductive.

DW: Exactly. It’s a signal that makes no sense. If, in the end, I still don’t know whether my work will be accepted, the fundamental problem remains unchanged. Open Science can improve many things – but I doubt it can untangle this structural knot.

In business studies, there has long been a question regarding the discipline’s own scientific legitimacy – whether the subject is taught at university because it conducts genuine research or because it delivers student numbers. Is this focus on ever smaller, clearly measurable phenomena ultimately an attempt to resolve the discipline’s identity crisis through methodological rigour?

DW: That is definitely the case, and it is well documented. About fifty years ago, a kind of scientific council in the US scrutinised business schools and delivered a damning verdict: no research takes place here; arguments are based on gut feeling. Things could not go on like this. That was the starting point for the ‘scientification’ of business administration – a step that can be pinpointed very precisely in historical terms.

In Germany, this wave arrived with a delay of perhaps twenty years. I completed my PhD exactly twenty years ago and was part of the first generation to be told: a 400-page monograph that your professor approves of is no longer enough. Research must stand up to verifiable scientific criteria. In the meantime, business administration in Germany is very well positioned in terms of research standards.

Another factor is that business administration – and marketing in particular – has become a melting pot for other disciplines. Psychologists, statisticians and econometricians are moving into business administration and bringing their academic background with them. This has significantly shaped the methodological culture of the discipline.

But therein lies a problem. Whilst this development has sharpened our methodological skills, it has simultaneously obscured our view of the big questions. How do I run a business? What does it take to be successful in the market? These are the questions that business administration is actually meant to address. And it is precisely these questions that we are finding increasingly difficult to answer, because we hardly ever address them anymore. In our research project, we have seen just how strong this trend is and how much it obscures our view of what really matters. But that is a major topic in its own right.

But I’m now interested in another perspective. How are publishers actually behaving? Are there cases where a prestigious journal insists that research data be deposited exclusively on its own platform?

DW: Publishers do not seem to have reached that stage in their thinking yet. At the moment, the general view is that it is sufficient as long as the data is available in a reasonably trustworthy repository. It would therefore be perfectly acceptable to deposit the data with BERD – or whatever the platform may be called in future – rather than with OSF. That does not currently pose a problem.

My penultimate topic would be science communication. Do you actively apply your research findings from marketing science in practice, that is, within the community of marketing practitioners?

DW: Absolutely. Through workshops and interactive forums with companies, but of course also very much via LinkedIn. The platform has now largely replaced Twitter – or X, as it’s now known – as the preferred channel for science communication. This is a trend that can be observed across the industry – with varying degrees of quality, but the trend is unmistakable.

Do you get any feedback? When you attend a practitioners’ day, lead a workshop or publish a post on LinkedIn, do practitioners get in touch with specific concerns or ideas for follow-up projects?

DW: Not so much via LinkedIn. That often serves more as a means of self-promotion than genuine exchange. The situation is different with presentations, forums and workshops. As researchers working empirically, we are, of course, dependent on access to corporate data, and this is almost exclusively gained through personal contacts. The logic behind it is familiar. You give a presentation, strike up a conversation with someone, build a relationship, and ultimately this leads to a collaboration. This doesn’t have to be financial in nature; often it’s about data access or implementing a field experiment.

So you would say that science communication acts as a bridge to practice?

DW: As a thoroughly robust bridge, not just a narrow footbridge. However, I would like to emphasise: this is not an achievement of the open science movement. Exchange with the business world has a long tradition in the German-speaking academic system. And that, incidentally, distinguishes us markedly from the US model. Unlike its American counterpart, the German chair system structurally has more points of contact with various stakeholders and thus often also better access to practical data.

Do you perhaps have any specific recommendations for researchers who are also active in marketing science or related business disciplines? Are there any formats or events that you would particularly recommend?

DW: One does not exclude the other. There is probably no universally applicable forum, but the Marketing Club, as an interest group with regional branches throughout Germany, as well as the German Marketing Association, offer numerous conferences and exchange formats with practitioners. My former rector put it aptly: “Every day you’re not in Aachen is a good day.” What he meant was: get out of the ivory tower and get involved in dialogue. I would wholeheartedly agree with that. Not least because it sharpens one’s own ideas. When I try to explain a thought to an audience outside my field and it doesn’t get through, it is rarely the audience’s fault. More often than not, it reveals that the idea itself has not yet been thought through precisely enough.

In this context, I am interested in the much-discussed theory-practice gap. I am myself part of the Leibniz Research Network for Evidence-Based Science Communication, which brings together practitioners with science communication researchers. It became clear right from the kick-off meeting just how difficult genuine collaboration is. Practitioners want answers to specific questions, as quickly as possible. Researchers, on the other hand, suggest that a topic may be too niche for a publication and that results can realistically only be expected in three or four years’ time. Added to this is the fact that a topic must be suitable for publication in a journal. How do you experience this tension in marketing research? And how do you deal with it?

DW: We are, of course, familiar with this fundamental problem ourselves. The needs of the business world and the interests of academia simply do not always align. And that will probably never be fully resolved, because academia should and must remain autonomous. In marketing, however, as a decidedly applied discipline, this tension is somewhat less pronounced than in other fields. I would, however, like to turn the question on its head. For me, close engagement with the business world is not a concession, but a strategic advantage. Those who maintain a dialogue with companies encounter questions that are relevant and grounded, and, in the best-case scenario, gain access to data and contacts that would otherwise remain out of reach. Incidentally, this applies not only to companies. Consumer protection organisations, NGOs and environmental groups can also be valuable partners. The stronger the data and the more convincing the research question, the more leeway there is in other areas too – for instance, when methodological details do not fully meet laboratory standards, but the relevance of the findings is evident.

Finally, a question regarding the VHB, where you were Head of Scientific Communication and Marketing. What role do topics such as open science, good scientific practice and transparency play in the association’s work – and how are they discussed there?

DW: For the sake of completeness, I should mention that I have since stepped down as chair of the commission. But these issues are certainly being actively discussed, particularly with regard to early-career researchers. The aim is to establish clear guidelines, as these issues are rarely addressed during undergraduate studies, and even in academic departments they are usually dealt with only implicitly – if at all. This is where I see the VHB playing a vital role: defining and communicating the rules of good scientific practice before young researchers fall into avoidable pitfalls.

A simple example: it doesn’t necessarily have to be about pre-registration. But take an ethics application for an experiment, even if the project is ethically unproblematic, such as when participants are simply asked to evaluate images of different car designs. Formally, this application is nevertheless required, if only to tick the relevant box. The next generation must be aware of such conventions. What are today’s standards of scientific practice? And why do they sometimes deviate from what one finds in older publications? Because standards are a moving target. In my view, the Association takes this task seriously – and that is a good thing.

How do you assess the development of Open Science – particularly in light of global challenges, the rapid advances in artificial intelligence and all the upheavals we are currently experiencing? Would you say that we now need to address this issue with renewed seriousness?

DW: Undoubtedly. I am convinced that Open Science will continue to grow in importance. Much of what we touched on at the start of our conversation – the structural friction points, the teething problems of a system that is still in its infancy – these are growing pains that will resolve themselves over time. The scientific system simply needs time to settle in, and I am confident that it will do so. What is hitting us with full force right now, however, is the AI wave. And that is the truly pressing question we face: how do we safeguard good scientific practice in the age of ChatGPT and its successors?

And how can this even be documented?

DW: Well, naturally, like the entire scientific community, we use AI and LLMs in our research. A PhD student recently asked me a question that made me sit up and take notice: “If I use AI in certain stages of my work, do I actually have to archive the prompts as well? Is prompt documentation, in a sense, the equivalent of data storage?” I didn’t have a ready answer. But it certainly can’t do any harm. When in doubt: record everything you have. That will probably be the next big wave we’re heading towards. To be honest, nobody knows where we’ll end up. But as for whether transparency and documentation are becoming more important in this context, I answer with a clear, triple-underlined yes. Science won’t be able to avoid this.
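The PhD student’s suggestion – prompt documentation as an equivalent to data storage – could look something like the following minimal sketch: a helper that appends every prompt, model name and response to a timestamped, hash-stamped archive file. The function name, the JSON Lines format and the fields are illustrative assumptions, not an established standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def archive_prompt(prompt, response, model, archive="prompt_log.jsonl"):
    """Append one prompt/response pair to a JSON Lines archive.
    Each entry carries a timestamp and a content hash, so the exchange
    can later be traced -- analogous to archiving research data."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt": prompt,
        "response": response,
        # the hash makes later tampering or transcription errors detectable
        "sha256": hashlib.sha256((prompt + response).encode()).hexdigest(),
    }
    with open(archive, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
    return entry

entry = archive_prompt(
    prompt="Summarise the coding scheme for study 2.",
    response="(model output)",
    model="example-llm-v1",
)
```

An append-only log like this mirrors the ‘record everything you have’ advice above: even if the model’s outputs turn out not to be reproducible, the record of what was asked, when, and of which model remains traceable.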

Reproducibility is, of course, the key word here. Whether a result is actually reproducible in the end, even with an identical prompt – that remains to be seen. But the direction is clear.

DW: Exactly. At the very least, traceability. Ensuring that it is transparent what has been done and how – that is the minimum we must agree on. It will be a long time before we have reliable answers. But the question is being asked now, and the answer is: yes, emphatically. It is high time we addressed this seriously.

Thank you very much!

The interview was conducted on 18 February 2026 by Dr Doreen Siegfried.
This text was translated in March 2026 using DeepL Pro.

About Prof. Dr Daniel Wentzel:

Daniel Wentzel is a professor of business administration specialising in marketing at RWTH Aachen University. He completed his degree in business administration, specialising in marketing, retail and business psychology, at the University of Cologne. He also completed a Master’s degree at the University of Auckland, New Zealand. Wentzel obtained his PhD in 2008 at the University of St. Gallen, where he also completed his habilitation in 2010. His research areas include consumer behaviour, the acceptance and adoption of innovations, product design, and service and brand management.

Contact: https://www.wiwi.rwth-aachen.de/cms/wirtschaftswissenschaften/Die-Fakultaet/Institute-und-Lehrstuehle/Professuren-WiWi/~bmyg/Wentzel-Daniel/?allou=1

LinkedIn: https://www.linkedin.com/in/danielwentzel/

ResearchGate: https://www.researchgate.net/profile/Daniel-Wentzel-4

ORCID ID: https://orcid.org/0000-0002-0170-4435

GitHub: https://github.com/dwentzel/



