Skynet logo from the Terminator movie series.

New Theories on Super-Smart AI Pose a Question: Will We Write Our Own Ending?

In our quest to build the fastest, smartest AI possible, researchers have asked: what happens when we build a machine that is faster and smarter than the people who build it or try to control it? Two recent articles cover this conundrum. The first, from sciencealert.com, was written by David Neild.

The idea of artificial intelligence overthrowing humankind has been talked about for many decades, and scientists have just delivered their verdict on whether we’d be able to control a high-level computer super-intelligence. The answer? Almost definitely not.

The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyze. But if we’re unable to comprehend it, it’s impossible to create such a simulation.

Rules such as ‘cause no harm to humans’ can’t be set if we don’t understand the kinds of scenarios an AI is going to come up with, suggest the authors of the new paper. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits.

“A super-intelligence poses a fundamentally different problem than those typically studied under the banner of ‘robot ethics’,” write the researchers. “This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable.”

Part of the team’s reasoning comes from the halting problem put forward by Alan Turing in 1936. The problem centers on knowing whether or not a computer program will reach a conclusion and answer (so it halts), or simply loop forever trying to find one.

As Turing proved through some smart math, while we can know whether certain specific programs will halt, it’s logically impossible to find a method that lets us know this for every potential program that could ever be written. That brings us back to AI, which in a super-intelligent state could feasibly hold every possible computer program in its memory at once.
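
Turing’s proof works by diagonalization: assume a perfect halting checker exists, then build a program that does the opposite of whatever the checker predicts about it. Below is a minimal Python sketch of that contradiction; the names (halts, paradox) are hypothetical stand-ins of ours, not anything from Turing or the articles.

```python
# Sketch of Turing's diagonal argument (hypothetical names throughout).
# Suppose, for contradiction, that a perfect halting checker existed:

def halts(program, program_input):
    """Hypothetical oracle: True if program(program_input) eventually
    stops, False if it would run forever. Turing showed no general
    implementation of this function can exist."""
    ...

def paradox(program):
    # Do the opposite of whatever the oracle predicts a program
    # does when fed its own source.
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    else:
        return        # predicted to loop, so halt immediately

# paradox(paradox) halts exactly when halts(paradox, paradox) says it
# doesn't, so no correct halts() can exist for all programs.
```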

Any program written to stop AI from harming humans and destroying the world, for example, may reach a conclusion (and halt) or not – it’s mathematically impossible for us to be absolutely sure either way, which means it’s not containable.

“In effect, this makes the containment algorithm unusable,” says computer scientist Iyad Rahwan of the Max Planck Institute for Human Development in Germany.

Based on these calculations, the containment problem is incomputable: no single algorithm can determine whether an AI would harm the world. Furthermore, the researchers demonstrate that we may not even know when super-intelligent machines have arrived, because deciding whether a machine exhibits intelligence superior to humans falls in the same realm as the containment problem.
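
The reduction behind that claim has the same flavor as Turing’s: if a universal “harm checker” existed, it could be turned into a halting checker, which Turing ruled out. Here is a hedged Python sketch of that idea; every name (is_safe, make_wrapped, cause_harm) is a hypothetical stand-in of ours, not code from the study.

```python
# Hypothetical sketch of the containment-to-halting reduction.

def is_safe(program, program_input):
    """Hypothetical containment oracle: True only if running
    program(program_input) can never lead to harm."""
    ...  # assumed to exist, for the sake of contradiction

def cause_harm():
    """Stand-in for any action the oracle is supposed to forbid."""
    raise RuntimeError("harm")

def make_wrapped(program, program_input):
    """Build a new program that runs `program` on `program_input` to
    completion and only then does something harmful."""
    def wrapped(_ignored):
        program(program_input)   # harmful step is reached iff this halts
        cause_harm()
    return wrapped

# is_safe(make_wrapped(p, x), None) is False exactly when p(x) halts,
# so the containment oracle would decide the halting problem. Since
# Turing proved that impossible, no such oracle can exist either.
```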

Using theoretical calculations, an international team of researchers, including scientists from the Center for Humans and Machines at the Max Planck Institute for Human Development, shows that it would not be possible to control a super-intelligent AI. The study was published in the Journal of Artificial Intelligence Research.

“Suppose someone were to program an AI system with intelligence superior to that of humans, so it could learn independently,” the designdevelopmenttoday.com story asks. “Connected to the Internet, the AI may have access to all the data of humanity. It could replace all existing programs and take control of all machines online worldwide. Would this produce a utopia or a dystopia? Would the AI cure cancer, bring about world peace, and prevent a climate disaster? Or would it destroy humanity and take over the Earth?”

We have linked both articles for you below.

read more at sciencealert.com

read more at designdevelopmenttoday.com