Qualitative research can be reproduced

Ingo Rohlfing talks about his experience with Open Science

Photo of a diary and various office supplies

Three key learnings:

  • Good documentation of your research relieves pressure and helps your future self retrace your own research. A research diary also makes it possible to credit the contributions of individual team members transparently.
  • It is quite possible to reproduce qualitative research.
  • The sooner students are introduced to good scientific practice, the better. Good practice is thus internalised and there’s no way back.

What has been your experience with transparency in research and reproducibility?

IR: It’s hard work because you have to document everything precisely. I have to write down what has been done, why it has been done, and, if you’re working in a team, who has done it. Ideally you also document the considerations that led you to decide against something. If I can reconstruct my steps later, it is much easier than starting from scratch again. But it’s also worthwhile for a very simple reason: I need to remember much less, and with the help of my research diary I can still retrace my own research years later and build on it. But it requires discipline.

Do you share this documentation with team members?

IR: Certainly. If you work in a team, the documentation is the institutional memory, so to speak, which everyone can share in. One benefit of this documentation is that you can make the individual roles transparent in the publication. The bigger the team, the more important this is, because not all team members are necessarily authors, too. It’s important to write down from the start of the project who coded the data. By means of the research diary I can make everyone’s contribution transparent.

Can qualitative studies be reproduced, and if yes, do you do it?

IR: Yes, that’s possible. We have a project right here that we’re pursuing with a small team. We take published qualitative papers whose grounding in the philosophy of science gives them a claim to reproducibility. We verify whether the cited sources are still accessible. Have the sources actually been cited correctly? Because sometimes mistakes creep in. That is a very simple, technical level where we don’t look at the content or the arguments; we simply check whether we can assemble the same source material to reproduce the findings. For a selection of articles we also want to look at the content to see whether we can come to the same conclusions with the same sources as the original authors. So, to return to your question: it is possible, but it depends on how the article is positioned with regard to the philosophy of science.
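The first, technical level described above — checking whether cited sources are still accessible — can be sketched as a small script. This is an illustrative sketch, not the project’s actual tooling; the User-Agent string and any URLs passed in are hypothetical:

```python
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

def check_sources(urls, timeout=10):
    """Check whether each cited source URL is still reachable.

    Returns a dict mapping URL -> human-readable status string.
    Uses HEAD requests so no document bodies are downloaded.
    """
    results = {}
    for url in urls:
        try:
            req = Request(url, method="HEAD",
                          headers={"User-Agent": "source-check/0.1"})
            with urlopen(req, timeout=timeout) as resp:
                results[url] = f"reachable ({resp.status})"
        except HTTPError as e:        # server answered, but with an error code
            results[url] = f"HTTP error {e.code}"
        except URLError as e:         # DNS failure, refused connection, etc.
            results[url] = f"unreachable ({e.reason})"
    return results
```

In a real audit one would also record the check date and an archived copy (e.g. via a web archive), since a source that is reachable today may disappear later.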

Transparency in research and reproducibility are key issues in your research and teaching. What are you interested in and what have you found out?

IR: What we actually do depends on whether it is a qualitative or a quantitative course. In a qualitative course, students are given an excerpt from an article and told to find the sources and check whether they come to the same conclusions as the author. One of the results is that 20 students do not agree with each other or with the author, although they all work from the same basis. The lesson for students is that conclusions can vary even when much effort has gone into documenting everything transparently. It’s like a jury in court where twelve people disagree although they have all heard and seen the same things; different experiences simply play a role in the interpretation. It works in a similar way in quantitative research. For example, students can try to find the datasets for papers and import them, which often brings problems with reproducibility to light. Students get a sense of how a seemingly simple task, like downloading and importing a dataset, can be a big obstacle.
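The dataset exercise — importing a paper’s data and checking that it matches what the paper documents — can be sketched as a minimal validation step. This is an illustrative sketch, not course material from the interview; the column names and row counts would in practice come from the paper’s codebook:

```python
import csv
import hashlib
from pathlib import Path

def check_dataset(path, expected_columns, expected_rows=None):
    """Basic reproducibility checks on a downloaded dataset:
    does the file exist, do the columns match the codebook,
    and does the row count match what the paper reports?

    Returns a list of problem descriptions, or a single "ok"
    entry carrying a checksum for comparing later re-downloads.
    """
    problems = []
    p = Path(path)
    if not p.exists():
        return [f"file not found: {path}"]
    with p.open(newline="") as f:
        reader = csv.reader(f)
        header = next(reader, [])
        rows = sum(1 for _ in reader)
    missing = [c for c in expected_columns if c not in header]
    if missing:
        problems.append(f"missing columns: {missing}")
    if expected_rows is not None and rows != expected_rows:
        problems.append(f"expected {expected_rows} rows, found {rows}")
    # record a checksum so later re-downloads can be compared byte-for-byte
    digest = hashlib.sha256(p.read_bytes()).hexdigest()
    return problems or [f"ok (sha256={digest[:12]})"]
```

Even a check this simple surfaces the obstacles mentioned above: missing files, renamed variables, or row counts that no longer match the published tables.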

When students come to different conclusions, do they reflect on it?

IR: Usually it’s not done systematically, for reasons of time. It would be a good project, of course. They should argue or disclose why they interpret this one way or the other. Then they understand at least on a basic level why there is often disagreement even if you have done everything in the same way.

Do you do your own research into research transparency and reproducibility?

IR: Yes. In a project that is just getting started, we are trying out different depths of analysis for a larger number of published qualitative articles. We want to see how much has been cited, how many different sources have been cited, and whether page numbers have been given. In another project, also in the starting phase, we want to apply Open Science techniques such as registered designs or crowd-sourced coding in a qualitative context. We want to see whether instruments that have so far been tested only in quantitative research can be transferred to qualitative research, and how far this can contribute to transparency and more robust findings.

Where did you get the idea and what exactly is the subject of your research?

IR: There has been relatively little methodological research into how to make empirical research as comprehensible and reproducible as possible, and whether it can reasonably be done at all. That could be a result in itself. The idea was to use the negotiations for a Jamaica coalition in 2017 as a use case, because from a theoretical point of view it was very surprising that the three parties didn’t come to an agreement. We’re trying to apply and evaluate different techniques in an empirical qualitative research process: how well did they work, what could have been done differently, what makes no sense, etc. We’re going to write a pre-analysis plan and register the design for the case study ourselves. We’re going to define which archives are to be consulted and with whom we want to conduct interviews, and we’re going to define the interview guidelines in advance, along with how the interviews are to be analysed and coded. Only after this will we start collecting the data. There are registered designs for case studies, but they are few. So we will be able to say: that’s how we did it. The collected materials can then be analysed in different ways, following techniques used in quantitative research.

Why have you chosen transparency in research as a research topic?

IR: I can no longer say exactly how I got into this. Probably it was via Twitter, where the discussion about certain research practices and transparency in psychology, kicked off by Daryl Bem’s paper, was taking place. I was interested, and I noticed that my own research was not as transparent as it could or should have been. Transparency has many benefits, for oneself as well as for allowing access to people outside the science system, for example politicians, journalists or laypeople.

What is the role of research integrity, p-hacking, etc. in methodological training? Are students or PhD candidates interested?

IR: They are certainly interested in it if you give them a closer understanding of it. Bachelor’s students need basic knowledge of statistics. The sooner you alert students to problems in the context of statistics and sensitise them to questionable practices, the better. You can build on this in master’s studies and have students write a pre-analysis plan, for example. And you definitely have to do this for PhD students, because they want to do actual research and publish. The sooner students are introduced to good scientific practice, the better. Good practice is thus internalised and you just do it.

You have signed the DORA Declaration (Declaration on Research Assessment). To what extent is evaluation of research output a topic for discussion in your discipline?

IR: It’s hard to tell to what extent it is discussed in the discipline. It still matters most in appointment or tenure track procedures. Articles will be read, but in the end it’s the name and reputation of the journal that counts. And those depend on the impact factor.

The debate around the replication crisis in psychology has sent shockwaves through Germany and spilled over into economic research. Do you talk to other researchers in your discipline about approaching the topic differently? That the discipline must become more robust?

IR: Not on a fixed institutional basis, but through research contacts and normal exchange. There are already many initiatives, in Berlin or Mannheim, that radiate outward. But to my knowledge there is no organisation, institution or movement yet which approaches the German Research Foundation, the Federal Ministry of Education and Research or the German Rectors’ Conference and says: something needs to be done at a wider level.

How do you see the future development? Will there be a big revolution in the next two years towards Open Science, reproducibility, robustness, transparency?

IR: Revolution – I don’t think so. It’ll be more of an evolution. In my view an important reason is that you do research in the way you have been socialised. So if you teach the new generations of scientists about Open Science and they simply practise it because they don’t know any other way, then it will slowly prevail. Very slowly, more in a timeframe of 10–15 years.

Thank you!

Dr Doreen Siegfried conducted the interview on 16 December 2022.

About Professor Ingo Rohlfing:

Ingo Rohlfing is professor for methods of empirical social research at Passau University. His primary research and teaching interests are social science methods with an emphasis on causal inference, qualitative and multimethod research, and research transparency and reproducibility. He is interested in political parties and party policy. His goal is to explain the ideological change of parties over time in a cross-sectional perspective.

Contact: https://ingorohlfing.wordpress.com

ORCID-ID: https://orcid.org/0000-0001-8715-4771

Twitter: https://twitter.com/ingorohlfing

ResearchGate: https://www.researchgate.net/profile/Ingo-Rohlfing

GitHub: https://github.com/ingorohlfing
