Analyst Explains How ‘Spotter’ AI Could ID Problems within AI Systems
Elon Musk concluded long ago that the power of AI is frightening, and he has said so for years. The Tesla CEO has warned people about the dangers of AI-powered robots, even predicting "scary outcomes" like those in "The Terminator."
The number one question posed since the beginning of algorithmic computing has been: can we control this thing? Most researchers and scientists say of course we can. But often the only way to understand what an algorithm is doing is to use another algorithm. You can see the questions beginning to pile up when talking about the effects and possibilities that AI presents, including whether there can be an ethical AI, or an unethical one.
We found a long, detail-filled article on forbes.com that considers whether AI can be good or bad. It was written by Dr. Lance B. Eliot, a Stanford Fellow and world-renowned expert on AI whose AI columns have amassed more than 5.6 million views.
Eliot uses the old Mad Magazine cartoon Spy vs. Spy to break down how and why it is important, but very difficult, to find bad AI. In Spy vs. Spy, the cartoon characters devise various dirty tricks and traps for each other. Sometimes the tricks succeed in defeating the rival spy, and sometimes they backfire and take out the attacker.
Eliot also reviews how autonomous vehicles and self-driving algorithms work, and the problems of having AI drive people around.
Examples of Potential Autonomous Vehicle Problems
Eliot begins with a caveat: AI is not yet sentient. He then expands on ethical AI and how to fight against a bad algorithm. All told, he asserts that it makes sense to use automation against automation, or in this case, AI against AI.
Remarkably, we are finding ourselves entering a real-world version of Spy vs. Spy. An AI system is created to do some useful activity. Another AI system is crafted to watch over that AI. When the overseeing AI detects potentially unethical behavior, the matter can be dealt with quickly. The AI doing the double-checking might halt the offending AI, or it might alert humans that the offending AI is acting up and let the humans decide what to do about it.
We could even devise the overseeing AI to try to force the offending AI to revert to ethical behavior. Thus, rather than merely stopping the offending AI outright, the overseeing AI tries to serve as an ethical coach that steers the AI For Bad into the realm of AI For Good. Be aware that this can be a tricky proposition. The attempt to steer the offending AI could go poorly, leaving the adverse AI to continue unabated. Worse still, the steering might inadvertently push the AI into even more unethical territory. As Eliot writes:
“There is no free lunch when it comes to using AI to double-check other AI.”
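To make the double-checking idea concrete, here is a minimal sketch of the control flow described above: one AI does the useful work, a second AI reviews each decision and either allows it, alerts humans, or halts the first AI. The class names, thresholds, and risk scores are hypothetical illustrations only; they are not from Eliot's article or any real autonomous-vehicle system.

```python
# Hypothetical sketch of the "AI double-checking AI" pattern.
# PrimaryAI and OverseerAI are stand-ins; the risk score is a
# placeholder for whatever ethics/safety check a real system would use.
from dataclasses import dataclass


@dataclass
class Decision:
    action: str
    risk_score: float  # 0.0 (benign) .. 1.0 (clearly unethical)


class PrimaryAI:
    """The AI doing the useful work, e.g. a driving decision."""

    def decide(self, observation: str) -> Decision:
        return Decision(action=f"respond to {observation}", risk_score=0.7)


class OverseerAI:
    """The double-checker: allows, alerts humans, or halts the primary AI."""

    def __init__(self, alert_at: float = 0.6, halt_at: float = 0.9):
        self.alert_at = alert_at
        self.halt_at = halt_at

    def review(self, decision: Decision) -> str:
        if decision.risk_score >= self.halt_at:
            return "halt"   # stop the offending AI outright
        if decision.risk_score >= self.alert_at:
            return "alert"  # raise it to humans and let them decide
        return "allow"


primary, overseer = PrimaryAI(), OverseerAI()
decision = primary.decide("merge into heavy traffic")
print(overseer.review(decision), "->", decision.action)
```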
As a result of the possible errors a double-checking AI might make, Eliot suggests using a "spotter AI." Such a system would only spot the unethical antics of other AI and raise a red flag. Like a guard dog, "the spotter AI is supposed to start barking loudly and incessantly" to forewarn about ethical breaches. Someone or something else then takes action. Eliot explains:
“I’m guessing that you are potentially convinced that having spotter AI or something akin to it is a wise move and should be straightforward to construct. There is unfortunately a twist that makes this harder than it might seem at first glance. Let’s unpack the twist.”
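By contrast, a spotter AI never intervenes; it only barks. The sketch below, again a hypothetical illustration rather than Eliot's design, strips the overseer down to logging and flagging and leaves any halt or correction to a separate human or system.

```python
# Hypothetical "spotter AI" sketch: it only watches and barks
# (logs a warning); halting or correcting is left to someone else.
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("spotter")


class SpotterAI:
    """Guard-dog monitor: raises red flags but never intervenes."""

    def __init__(self, flag_at: float = 0.6):
        self.flag_at = flag_at
        self.flags = []  # record of everything the spotter barked about

    def observe(self, action: str, risk_score: float) -> None:
        if risk_score >= self.flag_at:
            message = f"possible ethical breach: {action} (risk {risk_score:.2f})"
            self.flags.append(message)
            log.warning(message)  # keep barking; humans decide what to do


spotter = SpotterAI()
spotter.observe("swerve onto the shoulder", risk_score=0.75)
spotter.observe("maintain lane", risk_score=0.10)
print(f"{len(spotter.flags)} flag(s) raised for human review")
```

The deliberate limitation is the point: a flag-only spotter cannot make the steering problem worse, but it also fixes nothing on its own, which is why Eliot says there is a twist.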
For Eliot’s extensive and ongoing coverage of Ethical AI, see his analysis at this Forbes link on the problems with ethical AI and this one on potential feuding between humans and AI.
Eliot raises and addresses several of his own concerns about AI oversight. We highly recommend this piece to enlighten anyone interested in the problems AI presents and the miracles it has delivered.
read more at forbes.com