ChatGPT Can Generate Research Papers, but Sources May Not Be Reliable
ChatGPT may seem like a panacea for stressed high school and college students hoping to let the AI tool write their papers for them, but several professors interviewed by npr.org said that's not a good idea.
While many are singing ChatGPT's praises for delivering information quickly and cogently, it is not well suited to producing research papers, according to Teresa Kubacka, a data scientist based in Zurich, Switzerland. Kubacka tested the AI app herself by asking it about a made-up scientific phenomenon.
“I deliberately asked it about something that I thought that I know doesn’t exist so that they can judge whether it actually also has the notion of what exists and what doesn’t exist,” she told writer Emma Bowman of NPR.
The app's answer sounded plausible but was necessarily wrong, since the phenomenon in question does not exist. Worse, when Kubacka examined the source material ChatGPT cited, it too was fabricated. "There were names of well-known physics experts listed – the titles of the publications they supposedly authored, however, were non-existent," she said.
Kubacka warned that scientific papers would not be reliable if generated by the app.
"This is where it becomes kind of dangerous," Kubacka said. "The moment that you cannot trust the references, it also kind of erodes the trust in citing science whatsoever."
AI experts call this confident generation of inaccurate information "hallucination." Other professors and researchers have also found problems with relying on ChatGPT for facts.
“There are still many cases where you ask it a question and it’ll give you a very impressive-sounding answer that’s just dead wrong,” said Oren Etzioni, the founding CEO of the Allen Institute for AI, who ran the research nonprofit until recently. “And, of course, that’s a problem if you don’t carefully verify or corroborate its facts.”
According to a story on theverge.com, ChatGPT can be easily tricked into providing erroneous information:
“…the software also fails in a manner similar to other AI chatbots, with the bot often confidently presenting false or invented information as fact. As some AI researchers explain it, this is because such chatbots are essentially “stochastic parrots” — that is, their knowledge is derived only from statistical regularities in their training data, rather than any human-like understanding of the world as a complex and abstract system.”
OpenAI, the company behind ChatGPT, has warned users that the app is still in an early phase and that the free preview "may occasionally generate incorrect or misleading information," as well as harmful instructions or biased content.
read more at npr.org