Sam Altman of OpenAI says the company has paused the development of GPT-5 in order to create stronger guardrails for the safety of users and non-users alike. (Source: Adobe Stock)

OpenAI Pauses Development of GPT-5 after Concerns over ChatGPT

If the latest story from techcrunch.com is true, then Sam Altman is a man of his word. In the article, Altman, chief executive officer at OpenAI, says he has had his company pause development of GPT-5.

You may recall that after ChatGPT became public and extremely popular almost overnight, it also raised red flags everywhere. The fear stems from the fact that AI is getting so good it is scaring the people at the top of the ladder at tech companies.

“We have a lot of work to do before we start that model,” Altman said at a conference hosted by the India-based newspaper Economic Times. “We’re working on the new ideas that we think we need for it, but we are certainly not close to it to start.”

In late March, more than 1,100 people signed an open letter to call on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” Those who signed included Elon Musk and Steve Wozniak.

Weeks later, Altman said that the letter was “missing most technical nuance about where we need the pause,” but asserted that OpenAI had not started training GPT-5 — and didn’t plan to do so for “some time.”

Are Regulations the Answer?

According to Altman, more regulations on smaller AI developers are not needed at this time, even though he appeared before the U.S. Congress seeking exactly that kind of oversight.

Earlier in the interview, Altman also said that OpenAI was against regulating smaller AI startups.

“The only regulation we have called for is on ourselves and people bigger,” Altman said.

But on a recent trip to India, Altman urged lawmakers to think seriously about the potential abuse and other downsides of AI proliferation so that guardrails can be put in place to minimize unintended harm. It would be hard to find anyone who doesn't believe the government needs to take a serious look at how fast AI has developed and to regulate it.

read more at techcrunch.com