Article Describes the Tech Fighting Back Against Chatbots, Text Generators
Just when we thought we had found the answer to unlimited creative content using chatbots or AI-assisted text generators, a writer named Melissa Marrone lets all the air out of that balloon.
Writing for the site fastcompany.com, Marrone brings up three different detection tools that are available now or will be soon. Teachers, publishers, and even HR departments will be pleased to hear about these programs. Here is how she begins:
“We asked ChatGPT to ‘write an article about tools to detect if the text is written by ChatGPT’ and fed the results to the tools below. All of them determined—or suspected—that the text had probably been written by an AI. Then we fed the text of this article to these same tools—and, thankfully, they confirmed our humanity.”
GPT-2 Output Detector
OpenAI has been wowing the internet with its efforts to replicate human intelligence and artistic ability since way back in 2015. But this past November, the company finally went mega-viral with the release of the AI text generator ChatGPT. Users of the beta tool posted examples of AI-generated text responses to prompts that looked so legit, it struck fear in the hearts of teachers and even made Google worry that the tool would kill its search business.
Marrone goes on to ask: who better to build a chatbot detector than the engineers who built the chatbot in the first place?
The online demo of the GPT-2 output detector model lets you paste the text into a box and immediately see the likelihood that the text was written by AI. According to research from OpenAI, the tool has a relatively high detection rate, but “needs to be paired with metadata-based approaches, human judgment, and public education to be more effective.”
GLTR (Giant Language Model Test Room)
This next detector comes from the minds at MIT.
When OpenAI released GPT-2 in 2019, the folks from the MIT-IBM Watson AI Lab and the Harvard Natural Language Processing Group joined forces to create an algorithm that attempts to detect if the text was written by a bot.
Computer-generated text might look like it was written by a human, but a human writer is more likely to select unpredictable words. Using an "it takes one to know one" approach, GLTR checks how predictable each word is: if the algorithm can consistently guess the next word in a sentence, it assumes that sentence was written by a bot.
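The "predict the next word" idea can be sketched in a few lines. GLTR itself scores each word against the real GPT-2 model; the toy bigram model, tiny corpus, and `word_ranks` helper below are purely illustrative stand-ins for that machinery, not GLTR's actual code.

```python
from collections import Counter, defaultdict

# Toy stand-in for GPT-2: a bigram model built from a tiny corpus.
# GLTR uses the real GPT-2 model; everything here is illustrative only.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def word_ranks(text):
    """For each word, rank it among the model's predictions for that
    position (rank 0 = the model's top guess). Consistently low ranks
    suggest highly predictable, bot-like text."""
    words = text.split()
    ranks = []
    for prev, actual in zip(words, words[1:]):
        predictions = [w for w, _ in bigrams[prev].most_common()]
        ranks.append(predictions.index(actual)
                     if actual in predictions else len(predictions))
    return ranks

# A sentence the model has seen is predicted at or near rank 0 throughout.
print(word_ranks("the cat sat on the mat ."))  # → [0, 0, 0, 0, 1, 0]
```

GLTR's demo visualizes exactly this kind of rank information as color-coded highlights: a wall of "top guess" words is a hint the text came from a model much like the one doing the checking.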
Last week Seeflection.com mentioned the inventor of this final program that Marrone includes in her article.
Edward Tian was busy on New Year’s Eve creating GPTZero, an app that can help determine whether a text was written by a human or a bot. As a 22-year-old senior at Princeton, Tian understands how college professors might have a vested interest in detecting “AIgiarism” (plagiarism, but with the help of AI).
Tian says his tool measures how random a text is (“perplexity”) and how much that randomness varies from sentence to sentence (“burstiness”) to calculate the probability that the text was written by ChatGPT. Since tweeting about GPTZero on January 2, Tian says he’s already been approached by VCs wanting to invest and will be developing updated versions soon.
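The two metrics Tian describes can be sketched as follows. GPTZero's internals aren't public, so the toy bigram model, the add-one smoothing, and the standard-deviation definition of burstiness below are all assumptions made for illustration; the real tool scores text against an actual language model.

```python
import math
from collections import Counter, defaultdict

# Toy bigram model as a stand-in for a real language model.
corpus = ("the cat sat on the mat . the dog sat on the rug .").split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def perplexity(sentence):
    """Average 'surprise' of the model over a sentence (lower = more
    predictable). Add-one smoothing keeps unseen pairs from blowing up."""
    words = sentence.split()
    vocab = len(set(corpus)) + 1
    log_loss = 0.0
    for prev, nxt in zip(words, words[1:]):
        total = sum(counts[prev].values())
        p = (counts[prev][nxt] + 1) / (total + vocab)
        log_loss += -math.log(p)
    n = max(len(words) - 1, 1)
    return math.exp(log_loss / n)

def burstiness(sentences):
    """Spread of perplexity across sentences. Human writing tends to mix
    predictable and surprising sentences, so higher spread is a (weak)
    signal of human authorship; uniformly low spread looks machine-like."""
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5
```

Under this sketch, a sentence the model has seen before scores a lower perplexity than a novel one, and a text whose sentences all score alike has near-zero burstiness.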
Up next in the chatbot versus detector saga? Watermarks.
read more at fastcompany.com