Long author-lists on research papers are threatening the academic work system
Now that academic papers are written by thousands (yes, thousands) of contributors, it's getting hard to tell workers from shirkers. Ernesto Priego reports on 'hyperauthorship'
This month, a scientific paper by teams working at the Large Hadron Collider at CERN set the record for the number of authors on a paper: more than 5,000 contributors. In the same week, a genomics paper had more than 1,000 authors. The trend of increasingly long author-lists on research papers is clearly getting out of hand. But in addition to being impractical, it is also threatening the entire system by which academic work is rewarded.
Scientific publications have traditionally been the pinnacle of success in academia. Arguably, they are the main vehicle for academics to communicate their research to each other and, ideally, to the wider world. Hiring decisions and academic career progression are also still judged largely on a researcher's publication record.
However, research papers are increasingly collaborative these days, and a large number of authors can boost a paper's reach, readership and, eventually, its citations. Many therefore worry that long author lists are a strategy to "game" the impact of individual papers, or to rapidly inflate each author's publication list.
This makes it harder for universities and funding agencies to assess researchers on the basis of those records. In addition, if the same assessment rules are applied across fields, they can put disciplines where single authors or smaller teams are still the norm at a disadvantage. For this reason, we need to fundamentally rethink the concept of authorship, especially when it comes to large-scale collaborations.
The shift towards multiple authors – in biomedicine as well as in high-energy physics – has been going on for some time and is now dubbed "hyperauthorship". But information scientist Blaise Cronin, who coined the term, argues that attitudes to the trend vary across fields.
For example, publishing in high-energy physics is mostly conducted by large teams spanning several institutions and even countries. It often makes sense to have a large number of authors, and researchers are generally comfortable with it. In biomedicine, however, there is more concern about the possibility of fraud – especially the addition of people as authors who have done no work on the project – as well as about data integrity and quality control. And both fields struggle with how best to apportion credit when co-authors are counted not in dozens but in hundreds or thousands. Meanwhile, even in the humanities, an increasing reliance on data is leading to more collaboration and less work by lone scholars.
Even allowing that, in some fields, papers with thousands of authors have been the norm for some time, it seems essential to change the way authorship is attributed. Listing students and other collaborators in the acknowledgements rather than in the author list is one alternative. But to truly leave behind the classical ideal of the lone scholar, authors in very large collaborations, as well as scholarly publishers, could consider crediting the collective project's name instead.
What is at stake is not merely a question of academic ego, but the system that rewards academics for their work. Indeed, for such changes to work, the whole system of scholarly communication, dissemination and reward needs to be radically overhauled. Funding bodies and universities cannot keep relying on publication lists and, in particular, citation counts as the main measures of academic success. Collaboration also needs to be more actively rewarded in its own right.
Hyperauthorship has transformed – and eroded – the idea that authorship carries a unique value, and it can no longer be taken to mean what it once did. There are no easy solutions, but embracing difference, rather than uniqueness, would be a start.
A version of this article first appeared on theconversation.com