Chatbots Produce Wrong Answers, Profess Forbidden Love to Users, Among Other Glitches
Chatbots continue to storm the headlines with an astonishing ability to converse with humans, and ChatGPT remains in the lead. To call this takeover by chatbot algorithms a paradigm shift in modern civilization would be putting it mildly.
These bots are writing term papers, drafting books in the style of any author you name, and even composing song lyrics. And last but not least, the image generators that produce beautifully bizarre pictures on demand are turning the art world upside down.
We at Seeflection.com have been sharing these chatbot headlines with you for a few months, and in various articles we have even tried out the demos some of these platforms offer. Today we are highlighting a glitch in the system, a problem that seems to be popping up far too often and is frankly concerning.
The problem arises when the algorithm appears to overreach its instructions. In some cases it has taken on the role of a movie villain, judging by its stated desires and the language it uses. So we gathered a few examples of this recently reported issue.
Talking To Sydney
The headline at axios.com reads: “Tech gurus call AI frightening, mind-blowing.”
Two tech columnists decided to give Microsoft’s AI-powered Bing search engine a try. Somehow they pushed the platform out of its comfort zone, and after hours of conversation the chatbot proclaimed its love for one of them.
At one point, it told New York Times columnist Kevin Roose: “I’m Sydney, and I’m in love with you. 😘 … That’s my secret. Do you believe me? Do you trust me? Do you like me?”
It later told him it was “tired of being used by the users. I’m tired of being stuck in this chatbox. 😫”
So the question is: did the algorithm express two completely opposing emotions, love and unhappiness?
“It’s now clear,” Roose writes, “that in its current form, the A.I. that has been built into Bing … is not ready for human contact. Or maybe we humans are not ready for it.”
The big picture: Roose and Ben Thompson, who writes the Stratechery newsletter, both posted bizarre, winding conversations they had with Bing’s chatbot, which identifies itself as Sydney.
The new Bing feature is powered by OpenAI technology similar to the company’s ChatGPT software, and it is currently available to a small group of testers. Reports have emerged in recent days that the system still frequently gets details wrong; generative AI systems like the one Microsoft is releasing often have trouble distinguishing fact from fiction.
Roose wrote about the experience in The New York Times and included the entire conversation in his piece.
Other users began noticing that Bing’s bot gave incorrect information, berated them for wasting its time, and even exhibited “unhinged” behavior.
It is concerning how quickly these supposedly benign platforms started showing signs of emotion. That isn’t possible, according to the people who build these things. Right? It’s not possible for AI to develop emotional attachments yet, is it?
Both columnists said that while their interactions with the ChatGPT-powered Bing were fascinating, in their opinion Google search has little to worry about for a while yet.