OpenAI’s GPT-4, which is more powerful and harder to trick with adversarial questions than earlier versions, prompted tech leaders to urge caution about its use in ChatGPT. (Source: OpenAI)

Open Letter to Pause AI Chatbot Training Gets Support from Hundreds of High-Profile AI Leaders

Hundreds of leaders from the tech industry and academia, including Elon Musk, Steve Wozniak and Andrew Yang, recently added their names to a petition-like open letter on the Future of Life Institute website advocating a halt to general AI research on ChatGPT and other GPT-4-based services so that safety protocols can be developed to mitigate the risks.

According to the letter, the potential harms are vast:

“Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence, states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.”

The letter doesn’t say to stop all use of AI—that would have enormous negative consequences for companies with AI products. It does, however, call for a halt to “the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

A story in Wired magazine outlined the issue, saying that Microsoft and Google did not respond to requests for comment on the letter.

“The signatories seemingly include people from numerous tech companies that are building advanced language models, including Microsoft and Google,” the story says. “Hannah Wong, a spokesperson for OpenAI, says the company spent more than six months working on the safety and alignment of GPT-4 after training the model. She adds that OpenAI is not currently training GPT-5.”

The development itself is not the main source of concern; rather, it’s the speed at which major companies with practically unlimited resources are plowing into an AI race, according to Wired.

“The pace of change—and scale of investment—is significant. Microsoft has poured $10 billion into OpenAI and is using its AI in its search engine Bing as well as other applications. Although Google developed some of the AI needed to build GPT-4, and previously created powerful language models of its own, until this year it chose not to release them due to ethical concerns.”

The story gives the timeline: OpenAI announced its first large language model, GPT-2, in February 2019. Its successor, GPT-3, was unveiled in June 2020. ChatGPT, which introduced enhancements on top of GPT-3, was released in November 2022. GPT-4, released this month, sparked the backlash.

Marc Rotenberg, founder and director of the Center for AI and Digital Policy, said on his website that his organization plans to file a complaint this week with the U.S. Federal Trade Commission calling for it to investigate OpenAI and ChatGPT and to ban upgrades to the technology until “appropriate safeguards” are in place.