Open Science opens many doors
Professor Jürgen Huber talks about his experience with crowd analysis projects
Three key learnings:
- Replication studies and crowd analysis projects enable high visibility
- Open Science is enriched by international cooperation
- New approaches offer high-profile publishing opportunities for replications
Professor Huber, which Open Science practices are particularly relevant to you?
JH: Over the last few years, replication studies and now crowd analysis projects have become one of the most important pillars of my work. Everything that touches on Open Science has developed from cautious beginnings into my most productive field and has resulted in my most prestigious publications. It’s wrong to think that Open Science is a sideshow or a waste of one’s time. Early career researchers can very well make a name for themselves here and be very successful and very visible.
How did you find your way into this field?
JH: In 2016, a Swedish colleague asked my long-time co-author Michael Kirchler and myself if we wanted to join a study she was planning. The question was: how replicable are studies in experimental economics? She planned to survey the community beforehand by means of prediction markets. Two more colleagues from the USA and Singapore were involved besides us. Each team replicated five studies, twenty in all, and the replications were preceded by prediction markets to find out whether the community can predict what is replicable and what is not. The result showed that the community senses quite well where findings may have been manipulated a little. All in all, the study worked very well and was published excellently. That was my introduction to Open Science.
JH: Immediately afterwards I replicated more studies together with colleagues, and again we could place the paper very highly, but then we wanted to do something new. That’s when I came across crowd analysis: several research teams work with the same data and hypotheses, and at the end we assess whether they produce the same results. This approach says a lot about the robustness of the findings, but also about the range of possible analysis paths. Since I have long been interested in neuroscience, we also wanted to study how reliably the brain works. Again we called our Swedish research colleagues, who were very excited, and we also invited a neuroscience specialist from the USA to join us, who in turn knew a brain scientist in Israel who could carry out brain scans on our behalf at comparatively low cost. Just like that, we had a team on three continents at four universities.
How did the study proceed?
JH: We had 110 brains scanned – a very large number for such studies. Each test person spent two hours in the scanner and decided on 256 lotteries. Then we made the data available to the community. Anyone willing to evaluate our data – around a fortnight of work – became a co-author of the study. At the end we had a paper with 200 co-authors, published in a very high-ranking journal. We wanted to know how consistent the results are and how well the community can assess them. The first finding was a considerable overconfidence bias: the community estimated that 69 per cent of results would be significant, but the true number was a mere 26 per cent. The second fascinating finding was that no two research teams chose the same analysis path, so we had in fact 70 different analysis paths. We consider such crowd analysis projects so promising that we are currently running another project in the field of finance. We looked for research teams to analyse 652 million futures transactions at Deutsche Börse based on five hypotheses. I wasn’t very optimistic myself about such a challenge, but we now have more than 400 researchers working in 230 teams on this.
This sounds like an ongoing process …
JH: In fact, one project often arises from another, and many people like this. It also inspires our postdocs and PhD candidates. The community essentially welcomes it as well; we have never had negative responses. I get the feeling that in the field of experimental finance a lot is happening very quickly, partly because of our work. It’s wonderful to see that you are helping to move something along. We have also enlarged our network massively and are now in exchange with many scientists, including from other disciplines. Such replication studies require a lot of resources, and for this you need cooperation partners. It has opened many doors for us and has been an extremely positive and stimulating experience.
Have you ever had problems publishing replications?
JH: We haven’t had problems so far, partly because we were early. But you have to find a new twist if you want to publish with high impact. At first we only replicated twenty studies and combined that with prediction markets. That would not get us into a highly ranked journal today. Crowd analysis is an important further development. And it continues to evolve – our next project is crowdsourcing, where we simply state the research question and invite researchers all over the world to suggest designs. Then we’ll let the researchers vote on which ones are to be implemented. Simply replicating a study won’t get you very far in a high-ranking journal. But if you replicate several studies or use an additional analytic tool, your chances are much better.
About Professor Jürgen Huber
Jürgen Huber is a full professor of Finance and head of the Department of Banking and Finance at Innsbruck University. His research interests are Empirical Finance, Behavioural and Experimental Finance, Information Economics, Incentives on Financial Markets and the Behaviour of Financial Professionals. He is a founding member of the “Society for Experimental Finance”, the world’s largest association of researchers studying financial issues with experiments.