The over-the-top responses from the new Bing chatbot remind Salon columnists of the tulip mania that triggered a financial crisis in Holland centuries ago. Photo from Zaanse Schans, Netherlands. (Source: Adobe Stock)

ChatGPT Being Mostly Right Is Not Good Enough, Says Writing Duo

It has only been three months. On November 30, 2022, OpenAI announced the public release of ChatGPT, a large language model (LLM) that can engage in astonishingly human-like conversations and answer an incredible variety of questions. Three weeks later, Google’s management, wary that a competitor had publicly eclipsed them in the artificial intelligence space, issued a “Code Red” to staff.

Seeflection.com first reported on the wave of concern that rippled through Google in December 2022. We have all seen the headlines about how these generative programs became an overnight sensation. From colleges and publishers to high school teachers grading essays, this interactive AI is bending our realities just a bit.

And ChatGPT’s ability to write essays and answer questions instantly and seamlessly is nothing short of magical. Even so, it seems possible we have heralded this golden age of AI a little too early.

Two writers for salon.com have pointed out a few of the flaws that the world at large, and investors in particular, may have overlooked.

Jeffrey Lee Funk and Gary N. Smith have detailed some of the problems in the short journey ChatGPT and its rivals have had since their public debut. They sum it all up quite succinctly with this:

“These models are programmed to assert their answers with great confidence, but they do not know what words mean and consequently have no way of assessing the truth of their confident assertions.”

Funk and Smith explain how these programs are trained and where the weaknesses in that training lie. In spite of the incredible results these programs have produced in the last three months, the article points out one very large issue.

AI Telling Lies

These programs have given terribly wrong answers to some questions. And sometimes they go on to elaborate on those wrong answers, extending the lie, if you will.

Large language models, or LLMs, are mere text generators. Trained on unimaginable amounts of text, they string together words in coherent sentences based on the statistical probability of words following other words. But they are not “intelligent” in any real way; they are just automated calculators that spit out words, with no way of assessing the truth of their confident assertions (a toy sketch of this next-word process follows the example below). Here is one example:

Human: Who was the first female president of California?

GPT: The first female President of California was Erin Cruz, who took office on April 02, 2021.

Since there is no “president” of California, this answer is pure invention. Moreover, Erin Cruz, a Republican, ran for a Congressional seat in California in 2020 and lost; she also lost a Republican primary in 2018. The story’s writers suggest a quick way to test how inaccurate ChatGPT can be: ask it to write a biography of yourself.
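To make the “automated calculator” point concrete, here is a minimal, purely illustrative Python sketch of next-word sampling. The word table and its probabilities are invented for this example; a real LLM learns billions of parameters over enormous vocabularies and long contexts rather than a tiny lookup table, but the generation loop, pick a statistically likely next word, append it, repeat, is the same in spirit.

import random

# Invented toy "model": probabilities of the next word given only the
# current word. A real LLM conditions on the whole context, but the idea
# is the same: a learned probability distribution over what comes next.
next_word_probs = {
    "the":       {"first": 0.5, "president": 0.5},
    "first":     {"female": 0.7, "time": 0.3},
    "female":    {"president": 1.0},
    "president": {"of": 0.8, "was": 0.2},
    "of":        {"california": 1.0},
}

def generate(start, max_words=8):
    """Generate text by repeatedly sampling a likely next word."""
    words = [start]
    while len(words) < max_words:
        dist = next_word_probs.get(words[-1])
        if dist is None:  # no known continuation: stop generating
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the first female president of california"

Note that this toy generator can happily emit “the first female president of california” without ever checking whether such a person exists. Fluency, not truth, is the only objective, which is exactly the flaw Funk and Smith describe.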

As you can see, there is still a long way to go before we can have 100% confidence in the latest high-tech magic, which has sent AI-linked stocks soaring on Wall Street despite its reported shortcomings. Like when Microsoft’s Bing chatbot professed its love for one of the first reporters to test it. Microsoft has tweaked it a bit since then.

The article concludes:

“The undeniable magic of the human-like conversations generated by GPT will undoubtedly enrich many who peddle the false narrative that computers are now smarter than us and can be trusted to make decisions for us. The AI bubble is inflating rapidly. That’s Our Code Red.”

Funk and Smith invoke the tulip mania of long ago, drawing a parallel between that Dutch financial debacle and the latest AI programs. The article is clear and to the point when it takes the position that AI is still not sentient, no matter what ChatGPT may tell you in a well-written essay.

read more at salon.com