
A ten-year drive to credit authors for their work — and why there’s still more to do


A decade ago, we and others launched a tool for clarifying the roles of each author of a research paper. The Contributor Role Taxonomy (CRediT) includes 14 types of contribution, from conceptualization to software and data curation. It was designed to prevent questionable authorship practices and make it easier for researchers to demonstrate the diversity of their contributions to science, among other benefits.


This year, taking stock, we’ve shown that adoption has risen steadily (see ‘More CRediT is being given’)1. By 2024, CRediT information was included in nearly 850,000 publications (encompassing articles, preprints and conference papers) — around 22% of the 3.7 million publications recorded last year in Digital Science’s Dimensions, a database of scholarly publications.

This level of uptake is remarkable, given that there have been no coordinated efforts or mandates from publishers and funders. But the issues that the taxonomy was conceived to tackle remain rampant in the research literature. Here we call for CRediT to become the norm, to support researchers and research integrity across the whole academic landscape.

CRediT is still needed

Despite widening use of CRediT, authorship conventions in scholarly publishing remain opaque and confusing, and differ by discipline. They typically provide little or no information about who contributed what in a study (see ‘The parts we played’). A name’s position in a list of authors is an unreliable indicator of the significance of that person’s work or the time they spent, particularly when the author list is long or alphabetical, a practice that is common in economics2 and in large collaborations, and that also occurs in biomedical research3.


Questionable practices, such as including honorary authors, who are named but have not contributed, and excluding ‘ghost authors’, who have contributed but are not named, also remain prevalent — perhaps occurring in as much as one-fifth of biomedical papers4.

Meanwhile, the volume of misconduct allegations and retractions in research is skyrocketing. When results are questioned after publication, transparency as to who did what helps investigators, supports accountability and can help to foster a responsible authorship culture more generally5.

CRediT data can also be used to inform policy interventions that help to drive innovation, equity and impact in science. Data on author contributions have been used to study gender and the division of labour in research6,7, as well as variations in the distribution of roles across disciplines8, for example.
