Artificial neural networks, pioneered by Geoffrey Hinton, are the basis of generative AI systems such as ChatGPT and GPT-4. Hinton thinks advances are happening too fast for regulations to keep up. (Source: Adobe Stock)

Geoffrey Hinton Throws in the Towel on AI, Warning of Future Dangers Without Controls

One of the most influential pioneers of AI, a man who built the foundation of the AI systems in use today, says he can no longer continue working in the field until limits are placed on products built with generative AI, according to a story by Cade Metz of The New York Times. Metz interviewed Dr. Geoffrey Hinton in Toronto, where he lives.

Hinton, who shared the 2018 Turing Award with longtime collaborators Yoshua Bengio and Yann LeCun, announced yesterday that he had quit Google, saying that a part of him now regrets his life's work, according to the article.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough.

Hinton warns that ChatGPT and other generative AI products are already being used to spread misinformation, will eliminate many jobs, and will help bad actors do bad things. In 2012, Hinton and two of his graduate students built a neural network that became the basis of today's AI systems; Google later acquired the company the three formed for $44 million.

His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

Google, OpenAI, Microsoft and other companies are forging ahead with generative AI technology, in spite of petitions by two different tech organizations seeking a six-month halt to development until the threats can be identified and prevented. Hinton worries, for instance, about the Defense Department creating killer robots and inventing other dangerous uses for the technology. He warned that AI systems can learn "unexpected behavior" from the vast amounts of data used to train them.

read more at nytimes.com