“We’re creating too many dead ends”
Jan Landwehr on his experiences with open science

Photo: Uwe Dettmar
The three key learnings:
- Open Science is not a dogma, but a tool. The key added value lies not in rigid compliance with rules, but in transparency. Those who clearly distinguish between what is exploratory and what is confirmatory conduct better research, regardless of whether journals require it or not. Pre-registration helps above all to avoid empirical dead ends and to keep projects well documented over the long term.
- The publication system rewards the wrong things. The novelty bias of top journals creates a vicious circle: ever more minor effects are published, whilst replication studies find no outlet. At the same time, early-career researchers build upon findings that prove to be unstable. This is not an isolated problem, but a structural one.
- Business administration is in a transitional phase. (Social) psychology and business disciplines such as marketing operate according to different rules. Those who were socialised early on in the replication debate regard transparency as a matter of course. Those who have conducted research according to different rules for decades react with understandable scepticism. Open Science is therefore also a cultural issue – and a strategic opportunity for the discipline.
What role does Open Science play for you? And when did you first encounter the topic?
JL: For me, and presumably for many others, the 2015 Science paper ‘Estimating the reproducibility of psychological science’ was a defining moment. In my view, this study is often seen as the starting point of what was then termed the replication crisis. Initially, its impact was felt primarily within psychology, where it contributed significantly to a movement that placed greater emphasis on open science practices. That was quite a turning point for me personally. I am a psychologist by training, specialising in social psychology, and have been deeply fascinated by the discipline and its findings ever since my studies. Some of the studies we worked with during our degree suddenly no longer appeared to provide a reliable basis for statements about human thought, emotion and behaviour. It was unsettling for me to see that some of these findings could not be easily replicated or were at least less robust than assumed. This period also saw a number of cases of data manipulation by prominent researchers. This, too, shaped the debate and raised the question of how scientific practice can be organised in such a way that results become more reliable. In my case, this was the starting point for engaging more intensively with approaches summarised under the heading of Open Science.
Many psychologists report that the replication debate has shaken their understanding of established knowledge. Findings that were long considered established, or insights from experiments known beyond the discipline, are suddenly being called into question. Have you experienced something similar?
JL: Yes, I can certainly relate to that. This plays a particularly significant role in teaching. For instance, I actually go through my lecture slides every semester and ask myself, regarding many studies: Do I still consider this result plausible? Do I still believe in this finding? Is this a finding I still want to convey to students? Or would it be better to remove it? This also means that I occasionally remove studies that have been part of my courses for years. Looking back, I think: this study may have been in my slides for a decade, but I am no longer convinced that the reported effect is robust. In such cases, I remove it from the syllabus. Of course, there are also situations in which studies are officially retracted. In that case, however, it is clear that they should no longer be taught. In that respect, your description hits the nail on the head. When key findings are called into question, one’s own foundation of knowledge begins to waver. You lose the ground beneath your feet. At the same time, however, this also gives rise to a productive scepticism regarding which results are actually robust and contribute to an understanding of human behaviour, and are therefore worth passing on in teaching.
Let’s return to your own experiences. You described 2015 as a significant turning point. What happened next for you? How did your own research practice change?
JL: My career path is significant for understanding my research practice. My background is that I studied psychology and switched to economics after graduating. I obtained my PhD in marketing, later qualified as a professor, and now hold a professorship in marketing within an economics department. My teaching therefore takes place predominantly in economics, not in psychology. At the same time, I feel deeply connected to both disciplines. Roughly half of my publications are in the field of marketing, the other half in psychology. My participation in conferences is similarly divided. In a sense, I live an interdisciplinary life, because I actively conduct research and publish in both fields.
Psychology reacted relatively early to the replication debate and began to rethink its research practices. I have followed this development closely. This gave me the advantage of becoming familiar with many of the approaches discussed there at an early stage and applying them to the marketing context. In economics, I was therefore probably one of the pioneers who adopted such practices relatively early on.
In preparation for our conversation, I checked again: since around 2017 or 2018, we have been systematically publishing datasets and analysis scripts for our studies. During this time, we also began pre-registering. My first pre-registration dates back to 2018. This coincided with the wider adoption of relevant infrastructure, such as platforms like the Open Science Framework or AsPredicted.
When you consider that many projects take several years to reach publication, this work often traces back to studies that began shortly after 2015. In that respect, I would say that I picked up on these developments relatively quickly.
In concrete terms, this means for our research practice: where legally possible, we make both datasets and complete analysis scripts publicly available so that other researchers can fully replicate our analyses. We also pre-register confirmatory studies. This does not happen in every project, as exploratory research naturally remains possible and important. In my view, the crucial factor is making a transparent distinction between what is exploratory and what is confirmatory in nature. Since 2018, our confirmatory experiments have been subject to pre-registration.
How has the response been within your academic community? What has been your own experience since you started pre-registering or publishing datasets and analysis scripts?
JL: I actually find the response is split, depending on the discipline. In social psychology, it has now largely become the norm to pre-register studies and make data available. If you do not do this, you have to explicitly justify it in many journals. And if there are no convincing reasons for not doing so, this can, in extreme cases, lead to a paper not being accepted. In that respect, I would say that a relatively clear expectation has become established in social psychology. Many researchers see these practices as a sensible standard, and anyone who deviates from them must explain why.
In marketing, the picture is more mixed. There, too, there are numerous colleagues who support and actively implement open science practices. At the same time, I encounter the view – even more frequently than in psychology – that this primarily means additional effort. The community is more divided on this issue. Some see it as an important step for research practice and research quality, whilst others are rather cautious about it. These differences are also reflected in the publication process. Whilst in psychology it often becomes significantly more difficult to publish without appropriate transparency practices, the reaction in marketing varies. Sometimes reviewers highlight the availability of data and analysis scripts positively; sometimes it is not commented on at all.
Does pre-registration also have a practical benefit for you, for example in structuring your own projects?
JL: I consider pre-registration to be extremely valuable for my own research work. One benefit of pre-registration, for example, is that it forces me to engage much more thoroughly with the study design in advance. I am compelled to formulate precisely which hypotheses are to be tested, what data will be collected, and how the analysis will be carried out. I find this particularly helpful when working with early-career researchers. This process often reveals whether a design is actually viable. It is quite common to realise during the planning stage that, if we implement it in this way, we will end up with a data structure that is difficult to analyse meaningfully later on. Such problems can be easily rectified during the planning phase.
Another advantage lies in documentation. Research projects can span several years, and the people involved often change during this time. PhD students complete their degrees or move to a different institution. In such cases, pre-registration provides clear and traceable documentation. It acts as a reminder. What manipulations were carried out three or four years ago? What did the data look like? Which hypotheses were to be tested? This makes it considerably easier to revisit a project even after a long period of time. I take a similar view on the sharing of data and analysis scripts. When you know that these materials will be publicly accessible, you generally work in a more structured manner and document the individual steps more carefully. This increases the traceability of your own work.
Another point concerns the distinction between exploratory and confirmatory research. Precisely because projects often run over several years, pre-registration helps you to recall later which hypotheses were originally formulated and which analyses were actually planned. This provides clarity on how the results should be presented in the article. In my view, all these aspects contribute to improving one’s own research practice, quite regardless of whether reviewers or editors explicitly require or particularly emphasise such steps. That is why we also adhere to open science standards in marketing projects, even if this is not strictly expected in the respective publication context.
You mentioned that personnel configurations in projects sometimes change, for example when PhD students leave the university. Does pre-registration also facilitate the handover to new project participants in such cases?
JL: That is indeed a practical advantage. In business studies in particular, we work with a relatively large number of PhD students, partly because teaching commitments are extensive. At the same time, these early-career researchers have very good career options outside academia. Consequently, it is often only during the PhD phase that a decision is made as to whether someone will remain in research or move into the professional world. As a result, it is not uncommon for projects to still be incomplete after three or four years, whilst the person involved has already completed their thesis and left the university. In such situations, clear documentation is particularly helpful for handing over to new participants, such as other doctoral students or additional collaboration partners.
Has this also changed your supervision practice? Does collaboration with doctoral students become more structured or efficient, for instance?
JL: I see the greatest advantage not so much in a reduction in the workload involved in supervision, but rather in the fact that fewer empirical dead ends are created. Pre-registration forces you to think very carefully in advance about how an experiment should be designed, what data will be collected and how it can be analysed. Without this careful planning, it sometimes happens that you carry out an experiment, collect data, and only then realise that a crucial aspect of the design was not sufficiently thought through, meaning you essentially have to start all over again. My impression is that such situations become less common when projects are planned more systematically in advance. This has both time and material benefits. Particularly against the backdrop of dwindling resources at universities, it makes sense to prepare empirical studies as thoroughly as possible, rather than collecting data that cannot be used productively later on.
In what other areas have you experienced the benefits of open science practices?
JL: I would highlight two aspects. The first concerns the practical benefits of new methods. If you not only share data or materials but also make methodological tools accessible, this can be very useful for other researchers. In one of our projects, for example, a co-author developed and published the R package ‘imagefluency’. This is now frequently used by other groups. I see this, for instance, in my role as a reviewer, when manuscripts refer to this method. Then you realise that a resource has been created that is actually being used in research practice.
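A brief, hedged sketch of what using such a shared tool can look like: the snippet below is based on the ‘imagefluency’ package as documented on CRAN, but the image file is hypothetical and the exact function names and signatures should be checked against the current package documentation rather than taken from here.

```r
# Sketch of scoring a stimulus image with 'imagefluency' (CRAN);
# 'stimulus.jpg' is a hypothetical file name.
# install.packages("imagefluency")
library(imagefluency)

img <- img_read("stimulus.jpg")  # load the image

img_contrast(img)    # contrast score of the image
img_complexity(img)  # visual complexity score
img_symmetry(img)    # symmetry score(s)
```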
The second aspect concerns the benefits that arise when other researchers adopt open science practices. We conduct replication studies, for example. These are much easier to carry out if the original studies are well documented and, for instance, stimulus materials or datasets are available. The same applies to meta-analyses. We are currently working on a large-scale meta-analysis, and a familiar problem is becoming apparent. Many studies report their results only incompletely. In some cases, even basic statistical details required for a systematic evaluation are missing. For such projects, it would be very helpful if more studies made their data, materials and analytical information transparently accessible.
Another area is perhaps teaching. I occasionally ask students to reproduce analyses from published studies. If datasets and materials are available for an article, students can write their own analysis scripts and attempt to replicate the results. This is a very illustrative way of teaching empirical methods.
You mentioned that you conduct a relatively large number of replication studies. What is your motivation behind this? Is it primarily about learning methodology, or do you want to specifically test whether certain findings actually hold up?
JL: Often, the central question is indeed whether a particular phenomenon really exists or not, or under what conditions it occurs. This perspective has certainly also been shaped by the debate surrounding the replication crisis, which received a great deal of attention following the 2015 Science paper. There are studies showing that experienced researchers often develop a fairly good sense of which effects are likely to be replicable and which warrant scepticism. I, too, occasionally find myself reading results where my first impression is that the reported effect may not be particularly robust.
The reason behind my motivation is that non-replicable findings circulating in the scientific community can have problematic consequences. Many students and early-career researchers base their theses or dissertations on existing studies. If it later transpires that a key finding is not robust, an entire research programme can be thrown into disarray.
We have experienced such situations ourselves. In one project, we wanted to identify a new moderator based on established findings. The original studies had been published in renowned journals. However, when we attempted to replicate the experiments as closely as possible – that is, using the same stimulus material and similar samples – it turned out that half of the established phenomena we intended to use could not be replicated. This also rendered the search for moderating conditions moot, as the basic effect itself did not occur reliably.
If one understands science as a cumulative process, it is therefore important to clarify which phenomena are actually robust and on which findings further research can be meaningfully built. In this sense, replication studies have, in my view, a central function. Another question, however, is to what extent such work is recognised within the publication system and how journals deal with it.
What happens if you cannot replicate a study from a top-tier journal? Are there suitable venues for publication?
JL: The options are limited and have also changed over time. For instance, the top-tier journal *International Journal of Research in Marketing* once had a dedicated section for replication studies. This was later moved to the less prestigious *Journal of Marketing Behavior*, which has since been discontinued entirely. Currently, there is still a suitable option in *Marketing Letters*, where we have already been able to publish a replication study. This is a respected international journal, but not one of the core top journals in the field. My impression is that the top journals are only open to such work to a limited extent. Many editorial policies strongly emphasise that submissions should be innovative and novel. The finding that a published result cannot be replicated is often not regarded as a significant scientific contribution. I consider this to be problematic.
A colleague, for example, once told me about his research into whether certain linguistic patterns in abstracts are linked to replicability. His finding was that the term ‘counterintuitive’ is a relatively good predictor of a finding being difficult to replicate later on. Interestingly, this analysis could not be published in a leading journal because several editors indicated that they had no interest in it. This points to a structural problem. When journals primarily seek surprising and particularly innovative results, an incentive system emerges that favours precisely such findings. At the same time, however, this also increases the likelihood that some of these results will later prove to be less robust.
But wouldn’t it be particularly relevant – and also fascinating – if a widely cited finding were subsequently found to be irreproducible? Given that many studies build on foundational publications, such a discovery would surely be of considerable news value.
JL: In certain cases, that does indeed work. When it comes to very fundamental phenomena that have been frequently cited and play a central role in the field of research, there is certainly interest in corresponding replication studies. It becomes more difficult with individual, smaller findings that have been published but are not considered central to the field. One often receives feedback that available journal space is limited and that a replication study must be particularly well-justified to warrant this space.
If I understand you correctly, two developments are converging here. On the one hand, the effects being investigated are becoming increasingly specialised and granular because the publication system is heavily geared towards novelty. On the other hand, individual non-replicated findings are often deemed too insignificant to justify a publication of their own. Do these dynamics not reinforce one another? And what possibilities do you see for changing this?
JL: The point you raise touches on what I see as a central problem. In many fields, there is indeed an increasing focus on very specific effects. One reason for this is that many fundamental phenomena and theories have already been relatively well studied. A classic example from psychology is the theory of cognitive dissonance from the 1950s. This theory is very well supported empirically and explains many observations of human behaviour. When a discipline has such established theoretical foundations, it becomes more difficult to generate truly new and fundamental insights.
Under the conditions of a publication system that is heavily focused on novelty and innovation, this often leads to research having to delve ever deeper into detail. This results in experimental designs that investigate very specific effects under certain conditions, for example, only in combination with several moderating factors. Such studies may well be methodologically sound, but the effects are often very small and have only limited practical relevance.
At the same time, the scientific value of robustness tests or replication studies is often regarded as lower than the supposed novelty value of such detailed findings. I see a certain imbalance in this. From a broader perspective, it would be equally important to clarify which phenomena are actually robust and on which findings further research can be meaningfully built.
I therefore see it as a structural challenge of the current publication system that discovering a new, highly specific effect is often valued more highly than demonstrating the robustness of an existing finding.
How does your academic community react to your meta-studies? Do they lead to discussions, for example at conferences such as the VHB conference?
JL: I get the impression that these topics are now being discussed much more extensively, including at conferences. To me, this is a positive sign for business administration and marketing as a sub-discipline. There are differing positions within the community, but also many researchers who are deeply engaged with issues of research quality and methodological transparency. An important point here is that the open science debate has long been heavily influenced by experimental research. In business administration, however, many researchers work with observational data. These datasets have high scientific value, but present particular challenges for open science practices. Pre-registration is more difficult to implement in this context because analytical decisions can often only be made during the course of the data analysis. There are initial approaches to how these challenges might be addressed. Overall, however, the discussion in this area is still in its infancy.
If there are differing positions within the community – some actively promote open science, others are more cautious – are there places where these perspectives actually come together and are discussed?
JL: Naturally, exchanges generally take place at conferences. At the same time, there are situations where one realises that these discussions could be brought together more effectively. One example is a conference of the Association for Consumer Research in Paris in 2024. There were two parallel sessions there, one entitled ‘Meet the Editors’ and one on the topic of Open Science. Both ran at the same time. In my view, that was a missed opportunity. Had these formats been combined, two groups would have engaged in dialogue: on the one hand, researchers who are deeply engaged with issues of research quality and open science practices, and on the other, editors whose decisions have a significant influence on publication standards.
Are these differing positions primarily a generational issue? Or why is it that one part of the community actively supports Open Science, whilst others are more cautious?
JL: A generational aspect certainly plays a role. Many younger researchers have grown up with these debates and tend to view transparency practices as a natural part of scientific work. At the same time, I can also understand why researchers who have worked under different conditions for many years might initially react with caution. When the rules of the game change over the course of one’s career, a certain degree of scepticism is understandable. Added to this is the fact that open science practices can indeed entail additional effort. Pre-registration, documenting analyses or preparing datasets require time and care. At the same time, they make certain approaches that were once common – such as conducting many studies and subsequently selecting those results that prove statistically significant – more difficult. Today, the requirements in some fields are already significantly stricter. In psychology, for instance, journals scrutinise very closely whether certain transparency standards are being met. This can mean additional effort in the publication process, for example when new guidelines are introduced whilst a manuscript is already in preparation. I nevertheless consider this development to be sensible. But it is understandable that not all researchers adapt to new rules at the same pace.
I see some parallels between business administration and psychology. In psychology, the replication debate has at times even led to the discipline itself being called into question. At least within the scientific community. At the same time, there have long been discussions in business administration about the extent to which the subject is scientifically recognised. Time and again, the question is raised as to whether business administration has the same scientific status as other disciplines, or whether its presence at universities is primarily strategic, due to the high number of students. Against this backdrop, I wonder: could a focus on reproducibility and transparency – for instance, in the spirit of Open Science – not also help to strengthen business administration’s scientific standing within the academic ecosystem? Or are these merely external observations?
JL: I find this idea plausible. My perception is that business administration has, in any case, developed strongly in a behavioural science direction in recent years. We are seeing a trend towards behaviouralisation, and not just in marketing. There is now, for example, behavioural accounting and behavioural finance. Many of these fields employ methods that originally stem from psychology, such as behavioural experiments and the investigation of underlying psychological mechanisms. In this sense, a large part of business administration now operates within the broader field of social and behavioural sciences. And I have the impression that, in many sub-disciplines, academic standards have also evolved significantly over the last ten to fifteen years.
Against this backdrop, I agree with you. A stronger commitment to transparent and accountable research practices – as discussed within the framework of Open Science – could help to further strengthen the academic standing of business administration. It would make it clearer that the discipline systematically contributes to robust scientific findings and therefore has a firm place at universities.
You mentioned earlier some pioneers in marketing research. Who comes to mind in this context, particularly in the field of Open Science?
JL: One name that immediately springs to mind is Uri Simonsohn. Among other things, he developed the concept of Specification Curve Analysis, which I consider a very interesting approach, particularly when dealing with observational data. The basic idea is to systematically lay bare the analytical decisions researchers can make. For example: Are outliers excluded or not? Is the dependent variable transformed or not? Which control variables are taken into account? In practice, each of these decisions could legitimately go either way.
Specification Curve Analysis makes this decision space explicit. First, one defines all analytical variants that appear methodologically sound, and then carries out all the corresponding analyses. This quickly results in a very large number of possible evaluations, often several hundred or even a thousand combinations. The results are then evaluated to see how they vary across these different specifications. How often does a significant effect occur in the expected direction? In how many cases is there no effect, or even an opposite finding?
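To make this procedure concrete, here is a minimal sketch in R, the language already referenced above for shared analysis tools. It is purely illustrative and not the implementation Simonsohn and colleagues describe; dedicated tooling such as the R package ‘specr’ exists for real analyses. The simulated data, the three decision dimensions and all variable names are assumptions made up for this example.

```r
# Minimal specification-curve sketch on simulated data (all names illustrative).
set.seed(42)
n <- 500
d <- data.frame(x = rnorm(n), c1 = rnorm(n), c2 = rnorm(n))
d$y <- 0.15 * d$x + 0.30 * d$c1 + rnorm(n)  # small true effect of x on y

# Researcher degrees of freedom to cross with one another:
outlier_rules <- list(
  keep_all = function(df) df,
  trim_z   = function(df) df[abs(as.vector(scale(df$y))) < 2.5, ]
)
dv_transforms <- list(raw = identity, rank = rank)
control_sets  <- list(none = "", c1 = "+ c1", c2 = "+ c2", both = "+ c1 + c2")

# Enumerate every admissible specification and fit each model.
specs <- expand.grid(out = names(outlier_rules), dv = names(dv_transforms),
                     ctl = names(control_sets), stringsAsFactors = FALSE)
results <- do.call(rbind, lapply(seq_len(nrow(specs)), function(i) {
  s      <- specs[i, ]
  df     <- outlier_rules[[s$out]](d)
  df$y_t <- dv_transforms[[s$dv]](df$y)
  fit    <- lm(as.formula(paste("y_t ~ x", control_sets[[s$ctl]])), data = df)
  co     <- summary(fit)$coefficients["x", ]
  cbind(s, estimate = co[["Estimate"]], p = co[["Pr(>|t|)"]])
}))

# Sort by effect size and ask how robust the focal effect is across all
# 16 specifications: the sorted estimates form the 'specification curve'.
results <- results[order(results$estimate), ]
mean(results$p < .05 & results$estimate > 0)  # share of supportive specifications
```

Plotted, the sorted estimates with their significance markers form the curve that gives the method its name.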
I consider this approach to be very valuable because it provides transparency regarding the impact analytical decisions have on the results. Particularly with observational data, where there are many possible analytical approaches, this can make an important contribution to scientific reproducibility. Researchers can still argue which specification they consider particularly plausible. But at the same time, it becomes clear under which conditions a result is robust and under which it is not. Uri Simonsohn is also one of the authors of the Data Colada blog, together with Leif Nelson and Joe Simmons, where they regularly write about robustness, replicability and research integrity in scientific studies. The blog gained prominence, among other things, in connection with the Francesca Gino case. The team behind Data Colada had analysed and published irregularities in several of her datasets. Consequently, Gino was accused of data manipulation. She responded by filing a lawsuit against the blog’s authors.
In your view, what significance does the Gino case have for the open science debate?
JL: The case touches on a fundamental question. When someone takes legal action against the scrutiny of their own findings, it inevitably raises the question of how openly scientific debates can be conducted. The lawsuit was ultimately dismissed, but the discussion it sparked remains relevant.
What impresses me about Uri Simonsohn in this context is that he not only showed backbone, but also made a constructive methodological contribution with Specification Curve Analysis. That is what the open science debate is all about. It is not primarily about making life difficult for others, but about offering practical solutions for better research. For studies using observational data, for example, such an analysis could be included in the online appendix as a systematic robustness check. This provides transparency regarding how stable a result is across different analytical decisions, far more so than the selective robustness checks that are frequently seen at present.
What advice would you give to your PhD students or young researchers in general? In your view, what are the most important reasons for embracing open science practices?
JL: A common misconception is that open science means you are only allowed to conduct confirmatory research and must strictly adhere to what was pre-registered. That is not the case. The central idea is rather transparency – clearly distinguishing and openly documenting what is exploratory and what is confirmatory in nature. This also means that one is permitted to deviate from a pre-registration, provided that this deviation is documented in a comprehensible manner. In psychology, there are editors who explicitly commend studies that are reported transparently, even if they contain null effects. Transparency can therefore certainly be an argument in favour of accepting a paper, provided that reviewers and editors are open to this.
Another important point concerns the culture of error. Errors are inevitable in the research process, but they are fundamentally different from deliberate data fabrication. If open science practices reveal unintentional errors, this should not be regarded as a scandal, but as an opportunity for correction. Early-career researchers in tenure-track systems, in particular, are under immense pressure. Open science practices such as pre-prints and freely accessible data can help to identify errors at an early stage without this leading to a reputational problem.
Ultimately, for me, open science is a sign of collaboration. Science is a cumulative process. Every study provides a small building block that only gains significance when combined with others. Those who show how they have worked enable others to build on their work and strengthen the shared knowledge base.
Thank you very much!
The interview was conducted on 19 February 2026 by Dr Doreen Siegfried.
This text was translated on 18 March 2026 using DeepL Pro.
About Prof. Dr Jan Landwehr:
Jan R. Landwehr has been Professor of Business Administration with a specialisation in Marketing at Goethe University Frankfurt since 2012, where he heads the Chair of Market and Consumer Psychology. Since 2015, he has also been Academic Director of the Master’s programmes in Business Administration and Management Science. As a psychologist and economist, Prof. Landwehr pursues an interdisciplinary research approach and has published his research findings in leading international journals in the fields of marketing and social psychology. Professor Landwehr is also known as a dedicated lecturer. His courses at Goethe University have already been honoured with 21 Best Teaching Awards from the Department of Economics.
Contact: https://www.marketing.uni-frankfurt.de/professoren/landwehr/prof-dr-jan-landwehr.html
ORCID iD: 0000-0001-5433-8865
ResearchGate: https://www.researchgate.net/profile/Jan-Landwehr
