
The Finnish Reproducibility Network (FIRN): A national bottom-up approach to scientific integrity in a global context

Published on Jul 21, 2023


Across the sciences, the lack of reproducibility has raised concerns that shake disciplinary foundations. In this article, we suggest institutional solutions as one possible antidote to reproducibility issues, and explain Reproducibility Networks (RNs) as a case in point. In particular, we reflect on the establishment of the Finnish RN as part of a growing international network of RNs, and outline a bottom-up approach that aims to help overcome the reproducibility crisis by spreading awareness of ethical, practical, and other domains of relevant knowledge in the places where the future of science is being made: the pedagogical structures of research institutions and societies.

Keywords: Reproducibility; Replicability; Integrity; Rigor; FIRN


Reproducibility: An introduction

Many approaches to the definition of reproducibility have been published, some of which provide competing views of what it means to replicate, reproduce, and re-use results and data [1]. Along with other terms, such as repeatability, they have been used to describe the general concept of the possibility for scientists to corroborate the findings of another experiment. Some authors define reproducibility as the process of recreating the original study with the same samples, technology, and analysis, whereas others propose that this definition should apply to “replication” and reserve the term “reproducibility” for the use of new samples and approaches to validate research findings ([2, 3, 4]; see [1] for an overview).

In their 2020 report on the “Reproducibility of Scientific Results in the EU”, the European Commission defines reproducibility as a continuing process which involves reproduction, replication, and re-use, where the term “reproduction” describes the re-enactment of a study by a third party using the original set-up, data, and methodology of analysis [5]. The term “replication” is applied to a more general re-enactment of the results, using the same analytical method but on different datasets (e.g., for comparison). In this sense, when successful, reproducing a study with the same samples and analytical methods can increase trust in the original study, while replication with new samples and possibly different analytical methods can corroborate the scientific findings of the original study even further. In the EU report [5], the term “re-use” describes the possibility of applying the results beyond the original research context, both inside and outside the initial scientific discipline (e.g., for innovation, transfer, or transdisciplinary research). As the authors highlight, availability of data and transparency of methods are the foundation for all these processes.

While reproducing previous results is a core principle of the scientific method, the recent focus on reproducibility stems from considerations of irreplicable findings in many different fields: “Psychology’s Replication Crisis and Clinical Psychological Science” [6], “A Tragedy of the (Academic) Commons: Interpreting the Replication Crisis in Psychology as a Social Dilemma for Early-Career Researchers” [2, 7], and “A Reaction Norm Perspective on Reproducibility” [8]. Such publications exemplify the fact that a great deal of published scientific data, methods, and interpretations present issues that leave them essentially worthless, or at least pose a risk of failure for further research based on them (e.g., [4, 9, 10]). While academic scientists are committed to good-quality practices in general, they very often work without access to, or knowledge of, guidelines and standards that promote reproducibility. Unawareness of standard experimental, analytical, and reporting procedures and guidelines is one possible reason for the generation of irreproducible results and interpretations, often compounded by career pressure, lack of time and money to repeat a study, and the need to publish quickly [9, 11]. In the art of science, which we can define as the intellectual and practical process of systematically studying the physical, psychological, and social world through observation and experiment, a lot can go wrong. The unintentional introduction of systematic and observational errors, biases, and analytical and statistical faults [12], measurement issues [13], as well as questionable interpretation or non-transparent reporting, can distort findings. As it is an integral feature and purpose of science to increase knowledge about nature, people, and human society, it is essential for scientific data, results, and interpretations to be reliable.
Therefore, measures must be taken to meet the requirements of scientific rigor and integrity which, in many fields, is the ethical “strict application of the scientific method to ensure unbiased and well-controlled experimental design, methodology, analysis, interpretation and reporting of results.” Definitions of scientific rigor and integrity necessarily vary between fields, but when research is done in an open and transparent way (e.g., with pre-registration and sharing of data and materials), the findings can be scrutinized in detail. Such scrutiny can involve the reproduction of a study using the same datasets, samples, and analytical methods, and the replication of a study with the same analytical method but with different datasets or samples. Different studies on the same topic can also yield a level of reproducibility if they converge on the same conclusion. Teaching the methods of conducting and testing scientific processes is essential in all scientific disciplines.

The question remains: when is a study successfully reproduced? None of the definitions above attempts to provide criteria for what constitutes a successful reproduction. Certainly, the criteria need to be defined prior to the study, and they vary from one scientific discipline to another. Goodman et al. [14] break down reproducibility into (a) reproducibility of results (obtaining the same results from the conduct of an independent study whose procedures are matched as closely as possible to the original experiment, in other words “replicability”), (b) reproducibility of methods (the provision of enough detail about study procedures and data so that the same procedures could, in theory or in actuality, be exactly repeated), and (c) reproducibility of the interpretation of the results (“inferential reproducibility”), i.e., the drawing of qualitatively similar conclusions from either an independent replication of a study or a reanalysis of the original study.

In the absence of a consensus definition of reproducibility, we will use the European Commission’s terminology for the sake of communication and recommend inferential reproducibility as an important additional aspect.

Why is reproducibility important?

Confidence in research methodology is essential to the sustainability of science. Scientists need to know which data and methodology were used to generate results, and the public needs to be assured that funding is used for meaningful science that ultimately contributes to knowledge. Carelessness, misconduct, and lack of transparency, among other factors, undermine reproducibility and diminish the credibility of science - and in the most extreme cases, for example in healthcare, human lives can be compromised [15]. The body of literature on reproducibility has focused on preclinical studies and psychology [16], but the problem is widespread across scientific domains. Meta-analyses of published research have revealed serious issues in scientific rigor, transparency, completeness of reporting, and analytical and reporting bias across scientific disciplines. One well-known example of a cross-cutting survey study was conducted by Baker [11]. According to her findings, based on responses from 1,576 researchers, more than 70% of the respondents had tried and failed to reproduce someone else’s results, and more than half had failed to reproduce their own results [11].

In a recent effort, the reproducibility of published cancer studies was examined, and less than 50% of the eligible studies were successfully replicated [17]. Limited reproducibility of research findings has many negative consequences, including undermining public trust in science and slowing the progress of science in cases where new research is built on studies that lack a solid scientific foundation [18]. Time and money are wasted on generating non-reproducible science, or on attempts to replicate experiments that have flaws in data, methodology, and results. In 2015, Freedman et al. [9] estimated that in the USA alone, $28 billion is lost every year on preclinical studies that are not reproducible.

It is commonly acknowledged that implementing reproducible research practices (e.g., quality management systems or domain-specific best practices) requires additional research funding, but it is also acknowledged that the benefits of robust science outweigh the related costs [4]. It can be argued that even if problems in reproducibility are nothing new, they are exacerbated by the increasing multidisciplinarity and complexity of scientific research [19]. For instance, conceptual and practical challenges related to the reproducibility of qualitative studies remain debated [20, 21]. Therefore, it is more important than ever to develop robust practical strategies to advance the understanding of the importance of research reproducibility.

Figure 1

Stages of an experiment or study, with measures to observe in order to improve the chances of obtaining meaningful results.
Prior to an experiment or study, define its purpose, get informed on the topic including regulatory and ethical aspects, and seek advice. The experiment or study plan needs to be developed, e.g., as a work plan with budget, time allocation, and operators. Engagement of experts should be considered at every stage of the experiment or study; for example, statisticians should be consulted when in doubt about experimental design. As in all experimental and analytical steps that follow, standard operating procedures should be adhered to, and only calibrated instruments and qualified materials should be used. Such materials include cell lines, antibodies, and other materials used to conduct the experiment or study. Document each step completely, including any deviations from the original study plan. In every phase of an experiment or study, pay attention to possible sources of bias and get second opinions in case of doubt. Finally, report the complete study, and make your data findable, accessible, interoperable, and reusable, following the FAIR principles [26].

How can reproducibility be achieved?

The reasons behind the lack of reproducibility are myriad, but well established. Ioannidis [22] argued that the failure to replicate research findings is more likely to occur in domains that possess great flexibility of experimental design or a large number of tested relationships without preselection. Problems such as low statistical power can lower the reproducibility of a specific study [23]. Some questionable research practices are applied due to a lack of knowledge about good research practices, and can therefore be remedied by raising awareness. Other questionable research practices, such as selective reporting of results or, in the most extreme cases, their falsification, may occur despite knowledge of good research practice. The underlying reasons for scientific misconduct are complex and associated with diverse psychological and sociological factors [24]. Some of the factors singled out by Carafoli [25] involve the increase in the number of researchers, including in new geographical areas, the emergence of “predatory journals”, deliberate misuse of statistics, and the “publish or perish” culture prevalent in contemporary science. Measures to ensure or improve the validity and reproducibility of experimental results require a great deal of scientific rigor and awareness of where an experiment can go wrong and where bias may be introduced. Some general concepts are mentioned in Figure 1. Ultimately, access to data and methods, as well as transparent reporting, are among the most essential prerequisites for evaluating a study and reproducing or replicating its findings. Researchers must exercise scientific rigor in their daily practice. This includes research data management practices capable of making data, code, and other information findable, accessible, interoperable, and reusable, the four pillars of the FAIR principles [26].
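
The effect of low statistical power on reproducibility [22, 23] can be made concrete with a small simulation. The following sketch is illustrative only and not from any cited study; the true effect size (d = 0.3) and sample sizes (15 vs. 100 per group) are our own assumed values. It shows that under low power, few studies reach significance, and those that do systematically overestimate the effect, so exact replications tend to fail.

```python
# Illustrative simulation (assumed parameters, not from the cited literature):
# many two-group studies with a modest true effect, analyzed with a normal
# approximation to the two-sample t-test.
import math
import random
import statistics

def simulate_replication(n_per_group, true_d=0.3, n_studies=2000, seed=1):
    """Return (share of 'significant' studies, mean effect size among them)."""
    rng = random.Random(seed)
    sig_effects = []
    for _ in range(n_studies):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(true_d, 1.0) for _ in range(n_per_group)]
        pooled_sd = math.sqrt((statistics.variance(a) + statistics.variance(b)) / 2)
        d_hat = (statistics.mean(b) - statistics.mean(a)) / pooled_sd
        # Normal approximation: z = d_hat * sqrt(n/2); |z| > 1.96 ~ p < .05
        if abs(d_hat * math.sqrt(n_per_group / 2)) > 1.96:
            sig_effects.append(d_hat)
    power = len(sig_effects) / n_studies
    mean_sig_d = statistics.mean(sig_effects) if sig_effects else float("nan")
    return power, mean_sig_d

low_power, low_d = simulate_replication(n_per_group=15)
high_power, high_d = simulate_replication(n_per_group=100)
print(f"n=15:  power ~ {low_power:.2f}, mean significant d ~ {low_d:.2f}")
print(f"n=100: power ~ {high_power:.2f}, mean significant d ~ {high_d:.2f}")
```

Under these assumptions, the underpowered design detects the effect in only a small fraction of studies, and its published (significant) effect sizes are inflated well above the true d = 0.3, one mechanism by which replication attempts calibrated to published effects fail.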
Implementation of the ALCOA+ principle, which has been used since the early 1990s to describe attributable, legible, contemporaneous, original, and accurate data recording, in addition to data being complete, consistent, enduring, and available, can be a step towards good data management practice [27]. Although it is impossible, or at least very challenging, to make all types of data openly available, knowledge and practice regarding the sharing of qualitative, sensitive, and other rarely published datasets is also increasing [28, 29]. These principles, along with adherence to good scientific practice, are the foundations of scientific rigor and are enforced in quality systems such as EQIPD [30]. They generally pave the way to testable science without the pitfalls of blindly trusting results. The principle of transparent reporting, which is admittedly wider than reproducibility, is particularly well suited to depict intuitively the practices that make research results benefit a larger audience than those who produced them. Transparency of the research management process contributes to understanding current practices and methods in general. In the best case, transparent reporting may also help document, report, and then publish null results - an elusive but desirable asset of the research process. There is great epistemic value in such results, both for guidance and for proofing future studies. Transparency is especially needed for research conducted across different sites and institutions, which relies heavily on materials, expensive machinery, and complex protocols to generate data, and is exposed to unforeseen circumstances and high staff turnover, among other things.
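
As a minimal sketch of how the ALCOA+ principles above might look in practice, consider an append-only record keeper: every entry is attributable and timestamped, and corrections add new entries rather than overwriting, so the original record is preserved. The class and field names below are our own illustrative assumptions, not part of ALCOA+, EQIPD, or any cited quality system.

```python
# Illustrative sketch only: append-only data recording loosely inspired by
# ALCOA+ (attributable, legible, contemporaneous, original, accurate).
# All names are hypothetical, invented for this example.
import json
from datetime import datetime, timezone

class LabLog:
    """Append-only log: corrections reference the entry they amend instead of
    overwriting it, so the original record remains inspectable."""

    def __init__(self):
        self._entries = []

    def record(self, author, measurement, value, corrects=None):
        entry = {
            "id": len(self._entries),
            "author": author,                                    # attributable
            "timestamp": datetime.now(timezone.utc).isoformat(), # contemporaneous
            "measurement": measurement,
            "value": value,
            "corrects": corrects,  # id of the entry this one amends, if any
        }
        self._entries.append(entry)
        return entry["id"]

    def current_value(self, measurement):
        """Latest non-superseded value for a measurement."""
        superseded = {e["corrects"] for e in self._entries if e["corrects"] is not None}
        for e in reversed(self._entries):
            if e["measurement"] == measurement and e["id"] not in superseded:
                return e["value"]
        return None

    def export(self):
        return json.dumps(self._entries, indent=2)  # legible, available

log = LabLog()
first = log.record("researcher_1", "body_weight_g", 21.4)
log.record("researcher_1", "body_weight_g", 24.1, corrects=first)  # typo fixed
print(log.current_value("body_weight_g"))  # corrected value; original kept
```

The design choice worth noting is the `corrects` pointer: an audit trail emerges for free, because deleting or silently editing past observations is simply not an operation the structure offers.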
In general, transparency increases trust, also when dealing with results that are not de facto reproducible due to the nature of the research (e.g., original artifacts in archaeology, ethnography, or astronomical events), or in cases where open access to data is limited due to its sensitivity, as with genome sequences. While not always directly contributing to reproducibility, considering good practices before the study can help one build a reproducible study. The concept of “preproducibility” has been introduced to describe a set of measures that aim to ensure accountability and quality at the earliest possible stage [31]. This ‘pre’ phase, which includes documenting the scientific process at the earliest stage of research before results are generated, is crucial for the success of any policy action to increase reproducibility. Pre-registration, data management plans (DMPs), journal and funder guidelines, dedicated grant support, as well as investment in human resources such as reproducibility experts, statistics training, and expert evaluators will all be beneficial. More generally speaking, a key to reproducibility is “research integrity”. The term includes adherence to the concepts of open science, sustainable ethical responsibility, fairness, data protection, and competence [32]. As we discuss in the following section, one aim of the Reproducibility Networks is to establish a culture of discussion between researchers, institutions, funders, journals, and societies in which the issues of publication pressure, funding of reproducibility studies, and improvement of scientific integrity can be addressed.

Figure 2

Organizational structure of the Finnish RN.
“Reproducibility Initiatives” can be discussion groups or journal clubs that debate reproducibility, represented in the RN by “Initiative Speakers”. They can be part of a “Research Institution”, such as a university, faculty, or similar body. Together with the Advisory Board and Stakeholders, the Steering Group networks with the individual members and member groups.

Why is a bottom-up approach needed?

Discussing reproducibility, the reasons for its failure, and possible measures for improvement will increase overall awareness of the issues, but concrete actions are needed to improve the knowledge and application of best-practice and scientific integrity standards. Stakeholders in research, such as research institutions, employers, and funders, as well as publishers, private industry, and sponsors, have an important role in changing such practices. Some funders, like the Wellcome Trust, have paved the way on research integrity by starting to require a change in both attitudes and practices. Initiatives like the Transparency and Openness Promotion (TOP) Guidelines set standards for journals and funders to better assess the quality and transparency of research [33]. However, if we want to increase scientific integrity, we need to begin by raising awareness where science is practiced: at universities and other research organizations. Organizations such as the Reproducibility Networks can help promote these activities among the scientific communities. Currently, perhaps the most important areas to develop and focus on are the courses, resources, and supervision provided for students and early-career researchers. If research institutions provide the essential tools for reproducible research from the beginning of careers, supporting the right kinds of practices from the start, the next generations of scientists will not need to waste their productive years unlearning poor habits. In practice, institutions can actively work toward this goal by integrating meta-science, reproducibility, and research integrity topics into their existing teaching routines, as well as by developing new content that explicitly addresses such themes. By including reproducibility in project assessments and student evaluations, good practices can be encouraged indirectly.

Rather than merely implementing policy changes, we hope that the discussion and subsequent incorporation of scientific principles into the daily routine will improve overall awareness of the threats to reproducibility, and thereby to the validity of scientific work.

As Macleod points out, “If the quality of every scientist’s work could be made just a little better, then the aggregate impact on research integrity would be enormous.” [32]. There are many incentives and traditions counteracting the current movement towards more open and transparent scientific practice. However, changing the way science is conducted and reported starts from within the research community - which is why we need reproducibility networks.

What are the Reproducibility Networks?

The concept of reproducibility networks (RNs) has its roots in the UK, where the first network was established in 2019 in response to the belief that collective action is required to tackle the low success rate of replication studies across scientific fields – generally referred to as the “reproducibility crisis”. Indeed, during the last decade the issues of poor reproducibility, undermining trust in research, have fueled intense debates in the scientific community and the general public. In 2015, a symposium on “Reproducibility and reliability of biomedical research: improving research practice” was held in the UK, and in October 2016, the InterAcademy Partnership for Health published a statement, “A call for action to improve the reproducibility of biomedical research”, which paved the way to the birth of the first national reproducibility network. Soon after, similar peer-led, grassroots networks were established in Switzerland and Germany in 2020, and since then the international network of reproducibility networks has been growing and expanding (reviewed in [34]). Complementing the work of other bottom-up approaches, such as FORRT, a Framework for Open and Reproducible Research Training, which advocates for the principles of open education to promote reproducible research and the democratization of scientific educational resources, or ReproducibiliTea, a grassroots journal club initiative of early-career researchers, Reproducibility Networks provide a structure which can support the dissemination of information on scientific best practices across disciplines and engage members in discussions on reproducibility issues within the country nodes or internationally. After several briefings with the UK, Swiss, and German networks, the Finnish Reproducibility Network was founded in June 2021 by researchers at the University of Helsinki and Aalto University. The cooperation with the growing number of other national networks in Europe and Australia is highly encouraging and fruitful.
The national networks held their first meeting on September 30, 2021, in Switzerland and agreed on common principles and a strategy for further development (including joint grant applications). The Finnish Reproducibility Network encourages researchers of all disciplines, funders, and research organizations to consider joining the growing network in Finland and Europe to further strengthen and improve scientific rigor and reproducibility in the scientific community. In this article, we motivate the need for such regional actions and explain what these actions may involve. The overarching goal of reproducibility networks (RNs) is to help improve research quality by promoting the concepts of scientific integrity and open science for better reproducibility, through facilitating active discussions on the topic across scientific disciplines and stakeholders such as journals, funders, and industry. RNs have been designed to serve as a community effort to promote transparent and trustworthy research practice in the academic system. In particular, reproducibility networks want to encourage early-career scientists to discuss and act on scientific integrity, and to help them connect with others sharing the same interest. Through the exchange of ideas and initiatives, RNs represent their members to funders and other stakeholder organizations, demonstrating awareness and consideration of the ethical and practical standards of research across all scientific disciplines (Figure 2). We seek to understand the factors that contribute to poor research quality, reproducibility, and replicability, as well as to discuss approaches to counter these issues and improve the trustworthiness and rigor of research. These problems affect all disciplines, so we aim for broad interdisciplinary representation. We believe that continuing awareness and discussion of these topics represent an opportunity to improve our research by reforming culture and practice.
The network of national RNs also recognizes the need for flexibility to accommodate different countries, institutions, and disciplines under the umbrella of the Reproducibility Network. At the end of 2021, the network of national networks agreed on a common statement outlining the overall aim of the activity as follows: “... to grow the family of Reproducibility Networks – both within and across countries – in order to more effectively coordinate efforts to evaluate and improve the research ecosystem, supported by researchers themselves across a range of disciplines”. The establishment of Reproducibility Networks across the globe is an encouraging development towards the education of scientists in rigor and integrity, factors that are crucial for meaningful science.




Author Contributions

  • Writing - original draft: All authors

  • Writing - review & editing: All authors

  • Figure 1: Andreas Scherer

Conflict of Interest Declaration

The authors declare no conflict of interest.


Funding

VV is supported by the Jane and Aatos Erkko Foundation. VMK received funding from the Academy of Finland (312397) and the European Research Council (ERC) under the European Union’s Horizon Europe research and innovation programme (101042052).

Prior Presentation

Preprint: Voikar, V., Casarotto, P., Glerean, E., Laakso, K., Saurio, K. J., Karhulahti, V., & Scherer, A. (2023, January 11). The Finnish Reproducibility Network (FIRN): A national bottom-up approach to scientific integrity in a global context.

Editorial Notes


  • Received: 2022-05-05

  • Revisions Requested: 2022-10-03

  • Revisions Received: 2022-12-03

  • Accepted: 2022-12-19

  • Published: 2023-06-22

Editorial Checks

  • Plagiarism: Editorial review of the iThenticate reports found no evidence of plagiarism.

  • References: A citation manager did not identify any references in the RetractionWatch database.

  • As a Professional Perspectives article, this paper received editorial review and comments from a single reviewer only. Peer review was managed by Episteme Health Inc in 2022 prior to transfer of the journal to the Center of Trial and Error.

Copyright and License

Copyright Vootele Voikar, Plinio Casarotto, Enrico Glerean, Kati Laakso, Kaisa Saurio, Veli-Matti Karhulahti, Andreas Scherer. Except where otherwise noted, the content of this article is licensed under a Creative Commons Attribution 4.0 International License. You are free to reuse or adapt this article for any purpose, provided appropriate acknowledgement is given. For additional permissions, please contact the corresponding author.


1. Barba LA. (2018). Terminologies for reproducible research.

2. Bollen K, Cacioppo JT, Kaplan RM, Krosnick JA, Olds JL. (2015). Reproducibility, replicability, and generalization in the social, behavioral, and economic sciences.

3. Collins FS, Tabak LA. (2014). Policy: NIH plans to enhance reproducibility.

4. Nosek BA, Errington TM. (2020). What is replication?

5. European Commission, Directorate-General for Research and Innovation, Baker L, Cristea I, Errington T, Jaśko K, Lusoli W, MacCallum C, Parry V, Pérignon C, Šimko T, Winchester C. (2020). Reproducibility of scientific results in the EU: Scoping report.

6. Tackett JL, Brandes CM, King KM, Markon KE. (2019). Psychology’s replication crisis and clinical psychological science.

7. Everett J, Earp B. (2015). A tragedy of the (academic) commons: Interpreting the replication crisis in psychology as a social dilemma for early-career researchers.

8. Voelkl B, Würbel H. (2021). A reaction norm perspective on reproducibility.

9. Freedman LP, Cockburn IM, Simcoe TS. (2015). The economics of reproducibility in preclinical research.

10. Begley CG, Ellis LM. (2012). Raise standards for preclinical cancer research.

11. Baker M. (2016). 1,500 scientists lift the lid on reproducibility.

12. Ulrich R, Miller J. (2020). Questionable research practices may have little effect on replicability.

13. Flake JK, Fried EI. (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them.

14. Goodman SN, Fanelli D, Ioannidis JPA. (2016). What does research reproducibility mean?

15. Gibelman M, Gelman SR. (2001). Learning from the mistakes of others.

16. Harris R. (2017). Reproducibility issues.

17. Mullard A. (2021). Half of top cancer studies fail high-profile reproducibility effort.

18. Pusztai L, Hatzis C, Andre F. (2013). Reproducibility of research and preclinical validation: Problems and solutions.

19. Baer DR, Gilmore IS. (2018). Responding to the growing issue of research reproducibility.

20. Karhulahti V-M. (2022). Reasons for qualitative psychologists to share human data.

21. Peels R, Bouter L. (2018). The possibility and desirability of replication in the humanities.

22. Ioannidis JPA. (2005). Why most published research findings are false.

23. Button KS, Ioannidis JPA, Mokrysz C, Nosek BA, Flint J, Robinson ESJ, Munafò MR. (2013). Power failure: Why small sample size undermines the reliability of neuroscience.

24. Fanelli D, Costas R, Ioannidis JPA. (2017). Meta-assessment of bias in science.

25. Carafoli E. (2015). Scientific misconduct: The dark side of science.

26. Wilkinson MD, Dumontier M, Aalbersberg IjJ, Appleton G, Axton M, Baak A, Blomberg N, Boiten J-W, Silva Santos LB, Bourne PE, Bouwman J, Brookes AJ, Clark T, Crosas M, Dillo I, Dumon O, Edmunds S, Evelo CT, Finkers R, Gonzalez-Beltran A, Gray AJG, Groth P, Goble C, Grethe JS, Heringa J, Hoen PAC, Hooft R, Kuhn T, Kok R, Kok J, Lusher SJ, Martone ME, Mons A, Packer AL, Persson B, Rocca-Serra P, Roos M, Schaik R, Sansone S-A, Schultes E, Sengstag T, Slater T, Strawn G, Swertz MA, Thompson M, Lei J, Mulligen E, Velterop J, Waagmeester A, Wittenburg P, Wolstencroft K, Zhao J, Mons B. (2016). The FAIR guiding principles for scientific data management and stewardship.

27. Woollen SW. (2010). Data quality and the origin of ALCOA.

28. DuBois JM, Strait M, Walsh H. (2018). Is it time to share qualitative research data?

29. Chauvette A, Schick-Makaroff K, Molzahn AE. (2019). Open data in qualitative research.

30. Bespalov A, Bernard R, Gilis A, Gerlach B, Guillén J, Castagné V, Lefevre IA, Ducrey F, Monk L, Bongiovanni S, Altevogt B, Arroyo-Araujo M, Bikovski L, Bruin N, Castaños-Vélez E, Dityatev A, Emmerich CH, Fares R, Ferland-Beckham C, Froger-Colléaux C, Gailus-Durner V, Hölter SM, Hofmann MCJ, Kabitzke P, Kas MJH, Kurreck C, Moser P, Pietraszek M, Popik P, Potschka H, Prado Montes de Oca E, Restivo L, Riedel G, Ritskes-Hoitinga M, Samardzic J, Schunn M, Stöger C, Voikar V, Vollert J, Wever KE, Wuyts K, MacLeod MR, Dirnagl U, Steckler T. (2021). Introduction to the EQIPD quality system.

31. Stark PB. (2018). Before reproducibility must come preproducibility.

32. Macleod M. (2021). Want research integrity? Stop the blame game.

33. Nosek BA, Alter G, Banks GC, Borsboom D, Bowman SD, Breckler SJ, Buck S, Chambers CD, Chin G, Christensen G, Contestabile M, Dafoe A, Eich E, Freese J, Glennerster R, Goroff D, Green DP, Hesse B, Humphreys M, Ishiyama J, Karlan D, Kraut A, Lupia A, Mabry P, Madon T, Malhotra N, Mayo-Wilson E, McNutt M, Miguel E, Paluck EL, Simonsohn U, Soderberg C, Spellman BA, Turitto J, VandenBos G, Vazire S, Wagenmakers EJ, Wilson R, Yarkoni T. (2015). Promoting an open research culture.

34. UK Reproducibility Network Steering Committee. (2021). From grassroots to global: A blueprint for building a reproducibility network.
