Some scientists can’t stop using AI to write research papers

Analysis of scientific articles suggests that generative AI may be increasingly involved in writing scientific literature. Two recent academic papers highlight a growing trend of AI usage in research paper composition. One study, authored by Andrew Gray from University College London and published in March, estimates that about one percent of papers published in 2023 were partially written by AI. Another paper from a Stanford University team, published in April, suggests a wider range of AI involvement, between 6.3 and 17.5 percent, depending on the subject matter.

Both studies examined word choices commonly favored by large language models (LLMs), such as “intricate,” “pivotal,” and “meticulously.” By tracking how often these words appear in the scientific literature and comparing their prevalence with that of control terms less favored by AI, the studies detected a growing footprint of LLM-assisted writing in academic publishing.
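To make the approach concrete, here is a minimal Python sketch of this kind of word-frequency comparison. The word lists, the per-10,000-word normalization, and the toy abstracts are illustrative assumptions, not the studies' actual vocabularies or methodology.

```python
from collections import Counter
import re

# Illustrative marker words the studies associate with LLM output,
# plus neutral control words whose usage should stay roughly flat.
AI_MARKERS = {"intricate", "pivotal", "meticulously", "meticulous", "commendable"}
CONTROL_WORDS = {"red", "conclusion", "after"}

def word_rates(text: str) -> dict:
    """Return per-10,000-word usage rates for marker and control vocabularies."""
    tokens = re.findall(r"[a-z]+", text.lower())
    total = len(tokens) or 1
    counts = Counter(tokens)
    return {
        "marker_rate": 10_000 * sum(counts[w] for w in AI_MARKERS) / total,
        "control_rate": 10_000 * sum(counts[w] for w in CONTROL_WORDS) / total,
    }

# Toy comparison: a rising marker rate alongside a flat control rate is the
# kind of corpus-level signal both papers look for (real analyses run over
# millions of abstracts, not two sentences).
abstract_2019 = "After the experiment we summarize the conclusion in red ink."
abstract_2023 = "This meticulously designed study offers a pivotal, intricate analysis."
print(word_rates(abstract_2019))
print(word_rates(abstract_2023))
```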

Gray’s research tracked control words like “red,” “conclusion,” and “after” from 2019 to 2023 as a baseline against which to measure shifts in certain adjectives and adverbs. In particular, words like “meticulous,” “commendable,” and “intricate” rose sharply in prevalence after 2022, a pattern consistent with a surge in AI-generated content.

Similarly, the Stanford paper identified notable spikes in the use of terms like “realm,” “showcasing,” “intricate,” and “pivotal” in 2023 compared to previous years. The frequency of these words surged by 80 to almost 160 percent, suggesting a significant uptick in AI-assisted writing.
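For context on how such a surge is quantified, the figure is simply the relative change in a word's usage rate between years. The rates in the snippet below are hypothetical numbers chosen for illustration, not values from the Stanford paper.

```python
def percent_change(rate_before: float, rate_after: float) -> float:
    """Relative change in a usage rate, expressed as a percentage."""
    return 100 * (rate_after - rate_before) / rate_before

# Hypothetical per-million-word rates for a marker term in 2022 vs. 2023:
# a jump from 50 to 125 occurrences per million words is a 150 percent increase.
print(percent_change(50.0, 125.0))  # -> 150.0
```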

The studies also explored variations in AI adoption across scientific disciplines. Fields such as computer science and electrical engineering showed higher rates of AI-preferred language than mathematics, physics, and papers in journals such as Nature.

However, concerns have been raised about the ethical implications of AI-generated content in scientific literature. While AI has been used to assist in research processes, employing it to write abstracts and sections of papers raises questions about scientific integrity. Some publishers consider using LLMs for writing papers to be scientific misconduct due to the risk of producing inaccurate text, including fabricated quotations and citations.

Researchers emphasize the importance of transparency and integrity in disclosing the use of AI-generated text in scientific publications. They warn that the widespread adoption of generative AI in academic writing could pose risks to the security and independence of scientific practice.
