AI Used by Professional Writers, Researchers, and Students Is Often Undetectable
“Please note that as an AI language model, I am unable to generate specific tables or conduct tests, so the actual results should be included in the table.”
The sentence above is a dead giveaway that a chat AI was used to write a paper. Many people are doing just that, including research paper mills—companies that churn out scientific papers on demand—which use AI to create even more questionable content, according to a story by Amanda Hoover, a writer at WIRED.com.
Though many schools forbid students from relying on AI to generate research papers, some organizations and schools allow it as a writing aid, as long as the writer makes clear that it was used.
The rapid rise of generative AI has stoked anxieties across disciplines. High school teachers and college professors are worried about the potential for cheating. News organizations have been caught with shoddy articles penned by AI. And now, peer-reviewed academic journals are grappling with submissions in which the authors may have used generative AI to write outlines, drafts, or even entire papers, but failed to make the AI use clear.
Word Soup
“In 2021, Guillaume Cabanac, a professor of computer science at the University of Toulouse in France, found odd phrases in academic articles, like ‘counterfeit consciousness’ instead of ‘artificial intelligence.’ He and a team coined the idea of looking for ‘tortured phrases,’ or word soup in place of straightforward terms, as indicators that a document likely comes from text generators. He’s also on the lookout for generative AI in journals and is the one who flagged the Resources Policy study on X.”
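As a rough illustration of the "tortured phrases" idea, the sketch below matches text against a small lookup table of garbled stand-ins for standard terms. The phrase list and the code are illustrative only; they are not Cabanac's actual screening tool or its phrase database.

```python
import re

# Hypothetical examples of "tortured phrases": word soup in place of standard terms.
# This list and the matching logic are illustrative, not the real screening method.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "irregular woodland": "random forest",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, expected term) pairs found in the text."""
    hits = []
    lowered = text.lower()
    for phrase, expected in TORTURED_PHRASES.items():
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
            hits.append((phrase, expected))
    return hits

print(flag_tortured_phrases(
    "Our counterfeit consciousness model relies on profound learning."
))
# [('counterfeit consciousness', 'artificial intelligence'),
#  ('profound learning', 'deep learning')]
```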
Cabanac is worried about scientific integrity, and about AI garbling cutting-edge ideas that depend on a precise, shared vocabulary.
“We, as scientists, must act by training ourselves, by knowing about the frauds,” Cabanac says. “It’s a whack-a-mole game. There are new ways to deceive.”
The issues could extend beyond text generators—researchers say they are worried about AI-generated images, which could be manipulated to create fraudulent research. It can be difficult to prove such images are not real.
New Study Reveals Differences
As ChatGPT and other generative AI tools become as commonplace as household appliances, the volume of questionable text will only climb. How can a professor or department manager know for sure that a submission is not AI-generated misinformation?
A story on theinformation.com describes how Heather Desaire, a professor of chemistry at the University of Kansas, authored a study demonstrating a tool that can differentiate with 99 percent accuracy between science writing produced by a human and entries produced by ChatGPT. The team achieved that accuracy by focusing on “a narrow type of writing.” Other AI writing detection tools billed as “one-size-fits-all” are usually less accurate.
The study found that ChatGPT typically produces less complex content than humans, is more general in its references (using terms like “others” instead of explicitly naming groups), and uses fewer types of punctuation. Human writers were more likely to use words like “however,” “although,” and “but.”
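To make those cues concrete, here is a minimal sketch of the kind of stylometric features the study describes: punctuation variety, vague group references, and discourse words such as “however.” The feature names, word lists, and thresholds are assumptions for illustration; they are not the published tool or its actual feature set.

```python
import re
import string

# Illustrative word lists; the actual study's features differ.
DISCOURSE_WORDS = {"however", "although", "but"}
VAGUE_REFERENCES = {"others", "researchers", "they"}

def style_features(text: str) -> dict[str, float]:
    """Compute a few simple style cues from a passage of text."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = max(len(words), 1)
    return {
        # Number of distinct punctuation marks (ChatGPT tends to use fewer types).
        "punctuation_variety": float(len({c for c in text if c in string.punctuation})),
        # Rate of contrast words favored by human writers.
        "discourse_word_rate": sum(w in DISCOURSE_WORDS for w in words) / n_words,
        # Rate of vague group references (e.g. "others") more typical of ChatGPT.
        "vague_reference_rate": sum(w in VAGUE_REFERENCES for w in words) / n_words,
        # Average sentence length as a crude proxy for complexity.
        "avg_sentence_length": n_words / len(sentences) if sentences else 0.0,
    }

print(style_features(
    "However, the method fails on short texts; others have noted this, "
    "although results vary."
))
```

In practice, features like these would feed a trained classifier rather than a hand-set rule, which is why narrow, well-defined writing types are easier to detect than text in general.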
However, the study only looked at a small number of Perspectives articles published in Science. Desaire says more work is needed to expand the tool’s capabilities in detecting AI writing across different journals.
The team is “thinking more about how scientists—if they wanted to use it—would actually use it,” Desaire says, “and verifying that we can still detect the difference in those cases.”
Now that the genie is out of the bottle, separating human writing from AI writing will only get harder. University and high school teachers find that unsettling, so new tools to detect it will be in demand.
read more at wired.com