Human involvement is not only necessary to supply appropriate prompts for AI chat, but also to monitor ethics. (Source: Adobe Stock)

OpenAI CEO Expresses Fears of Generative AI, Its Abilities without Monitored Content

It hasn’t been all hearts and flowers these last few days for the biggest AI products to hit the market, possibly ever. Whether it’s Google’s Bard, OpenAI’s GPT-4 on Microsoft’s Bing, or one of several other chatbot algorithms, the world is no longer the same now that algorithms seem miraculous. They can listen to your voice and respond to your requests instantly. They can write for you. One man instructed his “HustleGPT” to take $100 as an initial investment and make as much money as possible; a week later, the investment was valued at $25,000.

As Seeflection.com has noted many times in the stories we share, this phase of AI is changing our world right before our eyes. But not everyone sees it the same way, and as good as these chatbots appear to be, they are not infallible.

Even the CEO of OpenAI is worried. Sam Altman has had a lot to say this week about the chatbots that have been released to the public. His biggest concern is that other chatbots, in America and in other countries, are being allowed to run unsupervised.

In a recent interview, Altman said:

“A thing that I do worry about is… we’re not going to be the only creator of this technology,” Altman told ABC News in an interview last week. “There will be other people who don’t put some of the safety limits that we put on it. Society, I think, has a limited amount of time to figure out how to react to that,” he continued. “How to regulate that, how to handle it.”

According to the Futurism writer, Altman has a point:

“It’s hard to argue that we’re not in an AI arms race, and in that competitive and breakneck landscape, a lot of companies and superpowers out there are likely to prioritize both power and profit over safety and ethics. It’s also true that AI tech is rapidly outpacing government regulation, despite the many billions being poured into the software. No matter how you shake it, that’s a dangerous combination.”

“The thing that I try to caution people the most is what we call the ‘hallucinations problem,’” Altman said. “The model will confidently state things as if they were facts that are entirely made up.”

Prompting Chatbots

Machine learning means the algorithms have been fed millions of documents and a tremendous amount of data, while chat AI relies on human prompt engineers who craft prompts to generate results for anticipated needs. One job description puts it this way:

“This role involves creating and managing a database of prompts, collaborating with stakeholders to understand their needs, testing and evaluating the performance of the prompts, and incorporating advancements in AI and machine learning technologies.”

In other words, human oversight is necessary to operate it and to ensure that it doesn’t go astray.

Ethics Is the Question

Now that we have a portable digital genius we can use as a superpower, we have to remember the old saying: with great power comes great responsibility. Ethics must play a leading role in all AI research, in any form, and ethics must be taught to AI along with everything else we teach it. The article on futurism.com points out a clear ethical violation:

These algorithms are unpredictable, and it’s impossible to know how AI systems and their safeguards will behave once their products are in public hands. OpenAI’s largest partner, Microsoft, allegedly tested the extremely chaotic Bing AI in India, ran into serious problems, and then released it in the U.S. anyway. That is unethical and should probably be illegal.

New regulations are being instituted, or at least discussed, in most major countries. Europe has outpaced the U.S. when it comes to actual legislation passed to oversee AI and its multiple platforms.

Even Elon Musk had some thoughts about ChatGPT:

Musk voiced concern that Microsoft, which hosts ChatGPT on its Bing search engine, had disbanded its ethics oversight division.

“There is no regulatory oversight of AI, which is a *major* problem. I’ve been calling for AI safety regulation for over a decade!” Musk tweeted in December. This week, Musk fretted, also on Twitter, which he owns: “What will be left for us humans to do?”

The irony is that Musk also disbanded his ethics team over at Twitter.

“We’ve got to be careful here,” Altman said on Thursday, adding: “I think people should be happy that we are a little bit scared of this.”

read more at futurism.com