Book Explores Potential Downsides of AI in Global Government, Military

A story on zdnet.com recently made this writer shudder a little. Daphne Leprince-Ringuet's startling piece describes what happens when AI meets the "big brother" type of government that could use it for terrible ends. It's not exactly the Terminator movies, where Skynet has destroyed the earth of the future. But it's close.

The story covers a new book that explores this idea, written by John Lennox, a professor of mathematics at Oxford.

Titled 2084: Artificial Intelligence and the Future of Humanity, the book suggests a post-Orwellian vision of dystopia, complete with an algorithmic Big Brother and an army of bio-engineered super-humans. Similar predictions have already been made by other influential academics. In his bestselling book Homo Deus, for example, Yuval Noah Harari anticipates that technological developments will lead to humans upgrading themselves with abilities such as eternal life.

“If creating an AI that surpasses humans were to happen, of course it would be a threat,” Lennox told ZDNet. “But there are major dangers long before then, and these dangers are actually happening now. I think it is misleading to tell people about the problems that will come in the future – it’s what’s happening now that demands an ethical and moral response.”

Recently, NPR broadcast a story about protests against keeping animals and fish in zoos and aquariums, which protesters say is cruel to the animals. In response, engineers have already launched a project to produce mechanical dolphins with AI minds. The dolphins are near-exact copies of Atlantic bottlenose dolphins in both appearance and how they swim. That raises the question: will we substitute robots for an understanding of real animals?

Lennox distinguishes two kinds of AI in the article, separating the field into the categories of "general" and "narrow." He describes general artificial intelligence as the attempt to enhance human beings through add-ons, drugs, and bio-engineering, and ultimately to liberate us from our biological bodies by uploading our minds onto long-lasting silicon chips.

In other words, it is not AI itself that will cause trouble, and an apocalyptic takeover of the human species by intelligent robots is not on the cards just yet. The immediate reason to worry, however, has to do with how the technology will be used as it matures.

“AI is not immoral,” says Lennox, “it is amoral. It’s what you do with it that can be either moral, immoral, or in some cases, neutral.”

Lennox says that the power structures in place do not favor a responsible use of the technology.

“Certainly, the concentration of power centers that we see developing in the world at the moment indicates that there would be a risk if AI were to fall into the hands of a world government,” says Lennox.

The race to dominate AI is certainly on. China has already declared AI a national priority, while Russia is ramping up research into military AI. The U.S. has completed the first year of its American AI Initiative, with the goal of consolidating the country's leadership in the field.

How AI is reshaping our world is a topic of real concern, and its impact could be devastating.

read more at zdnet.com