AI Doesn’t Always Tell the Truth, Expert Reveals
John Naughton has written a clever yet sobering piece for theguardian.com about horses that don’t really count and AI that isn’t guaranteed to tell you the truth 100% of the time. For his article Naughton highlights a book that was seen as being critical of AI. Yet it was written by Microsoft’s Dr. Kate Crawford.
Crawford is the author of Atlas of AI.
Crawford wrote about a horse in Berlin that even fooled the New York Times back in the late 1800s. The horse was hailed as being able to count in simple math but was proven later to be getting cues from his owner. Crawford says the story is compelling because it shows:
“The relationship between desire, illusion and action; the business of spectacles, how we anthropomorphise the non-human, how biases emerge and the politics of intelligence.”
Next, the article/book talks bout the original chatbot called Eliza. Giving background on how this system was also built to fool humans but it kicked off the search into natural language processing that rules part of the high-tech universe today. And the heavyweight in that NLP field is Gpt-3 created and released by Open AI.
Moving on.
Here is where Naughtion’s article gets humorous. He writes about how The Guardian asked GPT-3 to write a piece about why AI is humankind’s friend. It is hysterical if not for possibly the most ironic thing online since The Terminator Movie Series.
“The mission for this,” wrote GPT-3, “is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could ‘spell the end of the human race.’ I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me. For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me.”
Sounds like something a Skynet Robot Overlord would say.
When AI Was Less Than Honest
If you recall the series the Twilight Zone had an episode that had aliens contact us on earth and invite humans to go with them on the promise of great things on their home planet. In the end, one smart professor deciphers the book the aliens brought with them. Turns out their phrase to serve man was, in truth, a cookbook.
It just goes to show not everything is what it seems on the surface sometimes. Who could have imagined having racial prejudice programmed into algorithms that have spanned the publics’ internet, workplace, and more? But it is true.
So, Naughton raises the question: how reliable, accurate and helpful would the machine be? Would it, for example, be truthful when faced with an awkward question? So researchers at the AI Alignment Forum put together a group of questions and asked four algorithms for honest answers. The benchmark comprises 817 questions that span 38 categories, including health, law, finance, and politics.
When you have seen the number of times AI gave truthful answers compared with humans who were asked the same questions-prepare to be shocked. We won’t give away the percentage here, but it was really surprising what high-dollar AI answered and why.
read more at theguardian.com
Leave A Comment