Scientists Seek Expanded Definition of AI Singularity Beyond the Turing Test
The evolution of computers, algorithms, and now generative AI has led many of us to believe that one day a computer or robot will become conscious. Sooner or later, a real Rosie the Robot, or a movie android like the one in "Ex Machina," will become a reality: a living, thinking, fully aware being that is more than plot fodder for movies.
The question isn't when this event will take place; the question is how we will gauge whether the invention is really alive, according to a story on nature.com. Last year, for instance, Ilya Sutskever, chief scientist at OpenAI, the company behind the chatbot ChatGPT, tweeted that some of the most cutting-edge AI networks might be "slightly conscious."
Beyond Turing
For the last several decades, scientists have used the Turing Test as a method of inquiry for determining whether a computer is capable of thinking like a human being. The test is named after Alan Turing, the English computer scientist, cryptanalyst, mathematician, and theoretical biologist who devised it.
The test is now widely considered outdated and unable to accurately judge whether a machine's thinking is "the human equivalent."
Over the decades, many more programs have been claimed to pass the Turing Test. Most recently, Google's AI LaMDA passed the test and even, controversially, convinced a Google engineer that it was "sentient."
This article from nature.com reveals a new approach to answering the same question Alan Turing was asking.
To answer this, a group of 19 neuroscientists, philosophers, and computer scientists have come up with a checklist of criteria that, if met, would indicate that a system has a high chance of being conscious. They published their provisional guide earlier this week in the arXiv preprint repository, ahead of peer review. The authors undertook the effort because:
“It seemed like there was a real dearth of detailed, empirically grounded, thoughtful discussion of AI consciousness,” says co-author Robert Long, a philosopher at the Center for AI Safety, a research non-profit organization in San Francisco, California.
The report is available on arXiv. The six theories are Recurrent Processing Theory, Global Workspace Theory, Higher-Order Theories, Attention Schema Theory, Predictive Processing, and Midbrain Theory.
Even Tech Giants Don’t Know
“Nature reached out to two of the major technology firms involved in advancing AI — Microsoft and Google. A spokesperson for Microsoft said that the company’s development of AI is centred on assisting human productivity in a responsible way, rather than replicating human intelligence. What’s clear since the introduction of GPT-4 — the most advanced version of ChatGPT released publicly — ‘is that new methodologies are required to assess the capabilities of these AI models as we explore how to achieve the full potential of AI to benefit society as a whole,’ the spokesperson said. Google did not respond.”
No Consensus
Those who are not trained in or deeply involved with AI would most likely be convinced that these computers are alive and fully aware. The best generative AI, like Midjourney, or chat AI, like the latest GPT-4, is so advanced that a layperson could easily be convinced it is acting like a human.
Many neuroscience-based theories describe the biological basis of consciousness. But there is no consensus on which is the “right” one. To create their framework, the authors therefore used a range of these theories. The idea is that if an AI system functions in a way that matches aspects of many of these theories, then there is a greater likelihood that it is conscious.
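To make the logic of that framework concrete, here is a minimal, purely illustrative sketch in Python. The theory names are taken from the list above, but the indicator properties, weights, and example assessment are invented placeholders for illustration only, not the authors' actual criteria.

```python
# Illustrative sketch of the "checklist" idea: tally how many theory-derived
# indicator properties a system appears to exhibit. Theory names follow the
# article; the indicator strings below are hypothetical placeholders.
THEORY_INDICATORS = {
    "Recurrent Processing Theory": ["recurrent feedback loops"],
    "Global Workspace Theory": ["global broadcast of information",
                                "selective attention bottleneck"],
    "Higher-Order Theories": ["representations of its own internal states"],
    "Attention Schema Theory": ["model of its own attention"],
    "Predictive Processing": ["prediction-error minimisation"],
    "Midbrain Theory": ["integrated agent-level control"],
}

def score_system(observed: set[str]) -> tuple[int, int]:
    """Return (indicators satisfied, total indicators) for a system."""
    all_indicators = [i for inds in THEORY_INDICATORS.values() for i in inds]
    satisfied = sum(1 for i in all_indicators if i in observed)
    return satisfied, len(all_indicators)

# Example: a hypothetical system that shows two of the indicator properties.
hits, total = score_system({"recurrent feedback loops",
                            "prediction-error minimisation"})
print(f"{hits}/{total} indicator properties satisfied")
```

The point the sketch tries to capture is that the framework's verdict is graded rather than binary: the more indicators drawn from different theories a system satisfies, the higher the estimated likelihood that it is conscious.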
Read more at nature.com