‘AI Researchers: Algorithm Says Existential Catastrophe Is Not Just Possible, But Likely’
Is it already too late? According to some researchers at the University of Oxford, it may be — "too late," that is, to address the existential threat that AI poses to mankind and our continued dominance of planet Earth.
Researchers at Google Deepmind and the University of Oxford have concluded that it’s now “likely” that superintelligent AI will spell the end of humanity — a grim scenario that more and more researchers are starting to predict.
In a recent paper published in the journal AI Magazine, the team — comprised of DeepMind senior scientist Marcus Hutter and Oxford researchers Michael Cohen and Michael Osborne — argues that machines will eventually become incentivized to break the rules their creators set to compete for limited resources or energy.
Remember this Elon Musk quote?
“If AI has a goal and humanity just happens to be in the way, it will destroy humanity.” (Feb. 29, 2020)
Or maybe this one.
“Robots will be able to do everything better than us,” Musk said during his speech. “I have exposure to the most cutting edge AI, and I think people should be really concerned by it.” (Aug. 24, 2021)
In their paper, the researchers argue that humanity could face its doom in the form of super-advanced “misaligned agents” that perceive humankind as standing in the way of a reward.
“One good way for an agent to maintain long-term control of its reward is to eliminate potential threats, and use all available energy to secure its computer,” the paper reads.
“Losing this game would be fatal,” the researchers wrote.
This paper is prompting some AI researchers to caution that we need to take our time and be extra careful about what we ask of AI — because AI will “likely” produce exactly what we ask for.
read more at futurism.com