ChatGPT cannot be credited as an author, says world’s largest academic publisher

by Janice Allen

Springer Nature, the world’s largest academic publisher, has clarified its policy on the use of AI writing tools in scientific papers. The company announced this week that software like ChatGPT cannot be credited as an author in papers published in its thousands of journals. However, Springer says it has no problem with scientists using AI to help write research or generate ideas, as long as this contribution is properly disclosed by the authors.

“We felt compelled to clarify our position: for our authors, for our editors, and for ourselves,” Magdalena Skipper, editor-in-chief of Springer Nature’s flagship publication, Nature, tells The Verge. “This new generation of LLM tools – including ChatGPT – has really exploded into the community, which is rightly excited and playing with them, but [also] using them in ways that go beyond how they can genuinely be used at present.”

ChatGPT and earlier large language models (LLMs) have already been named as authors in a small number of published papers, preprints, and scientific articles. However, the nature and degree of these tools’ contribution varies from case to case.

In one opinion article published in the journal Oncoscience, ChatGPT was used to argue for taking a particular drug in the context of Pascal’s wager, with the AI-generated text clearly labeled as such. But in a preprint examining the bot’s ability to pass the United States Medical Licensing Exam (USMLE), the only acknowledgment of the bot’s contribution is a sentence stating that the program “contributed to the writing of several parts of this manuscript.”

Crediting ChatGPT as an author is “absurd” and “deeply stupid,” some researchers say

In the latter preprint, there are no further details about how or where ChatGPT was used to generate text. (The Verge contacted the authors but did not hear back in time for publication.) However, the CEO of the company that funded the research, healthcare startup Ansible Health, argued that the bot made a significant contribution. “The reason why we listed [ChatGPT] as an author was because we believe it actually contributed intellectually to the content of the paper and not just as a subject for its evaluation,” Ansible Health CEO Jack Po told Futurism.

Reactions in the scientific community to papers crediting ChatGPT as an author have been overwhelmingly negative, with social media users calling the decision in the USMLE case “absurd,” “stupid,” and “deeply stupid.”

The argument against granting AI authorship is that software simply cannot fulfill the required duties, as Skipper and Springer Nature explain. “When we think about authorship of scientific papers, of research papers, we don’t just think about writing them,” says Skipper. “There are responsibilities that extend beyond publication, and certainly at the moment these AI tools are not capable of assuming those responsibilities.”

Software cannot be meaningfully accountable for a publication, it cannot claim intellectual property rights for its work, and it cannot correspond with other scientists and with the press to explain and answer questions about its work.

If there is broad consensus on the question of crediting AI as an author, though, there is less clarity about the use of AI tools to write a paper, even with proper attribution. This is partly due to well-documented problems with these tools’ output. AI writing software can amplify social biases like sexism and racism and has a tendency to produce “plausible nonsense” – incorrect information presented as fact. (See, for example, CNET’s recent use of AI tools to write articles. The publication later found errors in more than half of the pieces it published.)

It’s because of issues like these that some organizations have banned ChatGPT, including schools, colleges, and sites that depend on sharing reliable information, such as the coding Q&A site Stack Overflow. Earlier this month, a leading academic conference on machine learning banned the use of all AI tools to write papers, though it did say authors could use such software to “polish” and “edit” their work. Exactly where to draw the line between writing and editing is tricky, but for Springer Nature, this use case is also acceptable.

“Our policy is quite clear on this: we don’t prohibit their use as a tool in writing a paper,” Skipper tells The Verge. “What’s fundamental is that there is clarity about how a paper is put together and what [software] has been used. We need transparency, as that lies at the very heart of how science should be conducted and communicated.”

This is especially important given the wide range of ways AI can be used. AI tools can not only generate and paraphrase text, but also iterate on the design of experiments or serve as a sounding board for ideas, like a machine lab partner. AI-powered software like Semantic Scholar can be used to search for research papers and summarize their content, and Skipper notes another possibility: using AI writing tools to help researchers for whom English is not their first language. “It can be a leveling tool from that perspective,” she says.

Skipper says that banning AI tools from scientific work would be ineffective. “I think we can safely say that outright bans of anything don’t work,” she says. Instead, she argues, the scientific community, including researchers, publishers, and conference organizers, needs to come together to craft new disclosure standards and safety guardrails.

“It is our duty as a community to focus on the positive uses and potential, then regulate and curb the potential abuse,” says Skipper. “I’m optimistic we can do it.”

