How can researchers and publishers ensure all contributors are credited for their work?
Research is increasingly a complex, collaborative process involving teams of people with specialized skills, such as statistical analysis, data modelling, and project management. But how can we ensure that all contributions are recognized and visible when the traditional authorship model for scholarly outputs is often not fit for purpose?
There is growing interest among researchers, funding agencies, academic institutions, editors, and publishers in increasing both the transparency and accessibility of research contributions.
This blog post looks at how the Contributor Roles Taxonomy (CRediT) improves transparency to ensure a range of contributors are recognized in scholarly outputs, with guest author Kathrine Jensen.
Kathrine Jensen is an anthropologist working in higher education, currently Strategic Projects Officer at the University of Sheffield focusing on research culture. With experience in qualitative research, project management, scholarly communication, and evaluation, Kathrine specializes in planning, evaluating and evidencing research impact.
Connect with Kathrine on Twitter: @kshjensen
Using CRediT to recognize a range of contributors in a consistent and structured way
CRediT (Contributor Roles Taxonomy) is a high-level taxonomy that can be used to represent the typical roles of contributors and describe each person's specific contribution to a scholarly output. CRediT proposes a standardized approach with 14 contributor roles, and therefore an approach that lends itself to quantification. The system aims to capture the range and nature of contributions to published scholarly output in a transparent, consistent, and structured format. So where ORCID allows authors to be consistently identified across scholarly outputs, CRediT adds more detailed descriptive metadata through the roles. According to the recently approved standard published by the American National Standards Institute, the roles are defined as:
Conceptualization – Ideas; formulation or evolution of overarching research goals and aims.
Data curation – Management activities to annotate (produce metadata), scrub data and maintain research data (including software code, where it is necessary to interpret the data) for initial use and later re-use.
Formal analysis – Application of statistical, mathematical, computational, or other formal techniques to analyze or synthesize study data.
Funding acquisition – Acquisition of the financial support for the project leading to this publication.
Investigation – Conducting a research and investigation process, specifically performing the experiments, or data/evidence collection.
Methodology – Development or design of methodology; creation of models.
Project administration – Management and coordination responsibility for the research activity planning and execution.
Resources – Provision of study materials, reagents, materials, patients, laboratory samples, animals, instrumentation, computing resources, or other analysis tools.
Software – Programming, software development; designing computer programs; implementation of the computer code and supporting algorithms; testing of existing code components.
Supervision – Oversight and leadership responsibility for the research activity planning and execution, including mentorship external to the core team.
Validation – Verification, whether as a part of the activity or separate, of the overall replication/reproducibility of results/experiments and other research outputs.
Visualization – Preparation, creation, and/or presentation of the published work, specifically visualization/data presentation.
Writing (original draft) – Preparation, creation and/or presentation of the published work, specifically writing the initial draft (including substantive translation).
Writing (review & editing) – Preparation, creation, and/or presentation of the published work by those from the original research group, specifically critical review, commentary or revision – including pre- or post-publication stages.
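Because the 14 roles form a fixed, controlled vocabulary, they are straightforward to handle programmatically. As a minimal illustrative sketch (not an official CRediT tool; the role set is from the taxonomy above, but the function and data shapes are hypothetical), the snippet below validates roles against the taxonomy and builds the kind of human-readable contribution statement many journals print:

```python
# Illustrative sketch only: the 14 CRediT roles as a controlled vocabulary,
# plus a hypothetical helper that builds a journal-style contribution
# statement and rejects any role outside the taxonomy.

CREDIT_ROLES = {
    "conceptualization", "data curation", "formal analysis",
    "funding acquisition", "investigation", "methodology",
    "project administration", "resources", "software", "supervision",
    "validation", "visualization", "writing - original draft",
    "writing - review & editing",
}

def contribution_statement(contributors: dict[str, list[str]]) -> str:
    """Build a human-readable statement, validating each role."""
    parts = []
    for name, roles in contributors.items():
        for role in roles:
            if role.lower() not in CREDIT_ROLES:
                raise ValueError(f"{role!r} is not one of the 14 CRediT roles")
        parts.append(f"{name}: {', '.join(roles)}")
    return ". ".join(parts) + "."

statement = contribution_statement({
    "A. Author": ["Conceptualization", "Methodology", "Writing - original draft"],
    "B. Author": ["Data curation", "Formal analysis", "Writing - review & editing"],
})
print(statement)
```

In production systems the same structured data is typically carried as machine-readable metadata (for example in a journal's XML), which is what makes CRediT contributions both displayable and queryable.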
How was CRediT developed and who was involved?
In 2012, the Wellcome Trust and Harvard University co-hosted a workshop bringing together members of the academic, publishing, and funder communities interested in exploring alternative contributorship and attribution models (Hames, 2012). Development work followed, and the Contributor Roles Taxonomy (CRediT) was launched in 2014 (Allen et al., 2014). The National Information Standards Organization (NISO) formed a working group to formalize the taxonomy in 2020, and the standard was approved by the American National Standards Institute on January 14, 2022.
Who is using CRediT?
According to NISO, CRediT is already used by an impressive range of scholarly publishers representing thousands of journals. Adopters include BMJ, Elsevier, Oxford University Press, Springer, the University of Glasgow, and Gates Open Research. For example, CRediT is integrated into Editorial Manager, the submission software from Aries Systems used by Elsevier, Wiley, Taylor & Francis, Wolters Kluwer, and Springer Nature.
In an ORCID blog post about CRediT, Veronique Kiermer of PLOS (a nonprofit, open access publisher) describes CRediT as:
“essential to promote recognition for individual researchers in team science” and goes on to say that in adopting the taxonomy at PLOS, “...it was really important to display the contributions of each author, in a human- and machine-readable way, in all publications...”
There has been some pushback on CRediT: critics have highlighted the limitations of the roles (Matarese and Shashok, 2019), and in a post on the LSE Impact Blog, Elizabeth Gadd urges caution about the potential for CRediT to become yet another research evaluation metric, one that may disadvantage some contributors and that is based on data not necessarily open to scrutiny (Gadd, 2020).
Standardizing contributions - check which publishers are involved
Making sure that everyone involved in research is recognized for their contributions and that their contribution is visible is of key importance for researchers and publishers, funders, academic institutions, and others. CRediT offers a way to standardize a range of contributions that can support more transparent reporting of scholarly outputs, for example, when there are multiple authors, improve the ability to track the outputs and contributions of grant recipients, and reduce the potential for author disputes.
Check out the list of adopters and see if your publisher is part of the process. Academics and researchers can also find out more about how to implement CRediT.
Allen, L., Scott, J., Brand, A. et al. Publishing: Credit where credit is due. Nature 508, 312–313 (2014). https://doi.org/10.1038/508312a
Gadd, E. (20 Jan, 2020) CRediT Check – Should we welcome tools to differentiate the contributions made to academic papers? LSE Impact Blog.
Hames, I. (2012) Report on the International Workshop on Contributorship and Scholarly Attribution. Harvard University and Wellcome Trust.
National Information Standards Organization. (14 Jan, 2022) CRediT, Contributor Roles Taxonomy (ANSI/NISO Z39.104-2022). Baltimore, Maryland, U.S.A.: NISO. ISSN: 1041-5653
Demain, P. (22 April, 2021) Giving CRediT where CRediT is Due. ORCID blog.
Matarese, V.; Shashok, K. Transparent Attribution of Contributions to Research: Aligning Guidelines to Real-Life Practices. Publications 2019, 7, 24. https://doi.org/10.3390/publications7020024
Aries Systems. Editorial Manager CRediT integration FAQ. ariessys.com