
Is There a Bigger Secret at Google?

“It feels like a situation where we don’t have all the facts.”

The statement above comes from an opinion piece by Tristan Greene that appeared on thenextweb.com.

Greene is a former member of the U.S. armed forces and has some background with the parties involved in the recent uproar at Google over its highly controversial Project Maven. He describes the project’s issues as follows:

Project Maven’s purpose has been a bit obscured in the wave of coverage it’s received. It isn’t a project dedicated to sorting through drone footage; that was just its first mission. Originally called the Algorithmic Warfare Cross-Functional Team, it isn’t a one-shot deal relying on Google’s help: the Mountain View company is part of early tests to determine how feasible it is for the government to adapt private-sector AI for military purposes. Google’s involvement in Project Maven, which it claims is little more than using TensorFlow to build AI to sort through some old declassified drone footage, remains an enigma. Its defense, that the project is limited to “non-offensive” uses, smacks of the useless “guns don’t kill people” argument. Except the particular worry in this situation is that the military will develop AI that can kill people without human guidance; the concern is precisely that AI does and will kill people. Whether it uses guns, bombs, lasers, or robot kung-fu doesn’t really matter.

That’s worth pausing to consider.
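For readers wondering what “using TensorFlow to sort through footage” could even mean in practice, here is a minimal, purely hypothetical sketch: an off-the-shelf pretrained image classifier labeling still frames pulled from a video. The model choice, file paths, and labels below are illustrative assumptions; nothing here reflects Project Maven’s actual code or data.

```python
# Hypothetical illustration only: a generic pretrained classifier labeling
# still frames, to show the simplest possible version of "sorting footage"
# with TensorFlow. This is NOT Project Maven code.
import glob

import numpy as np
import tensorflow as tf

# A stock ImageNet model stands in for whatever custom model might be used.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

def label_frame(path: str):
    """Return (label, confidence) for a single extracted video frame."""
    img = tf.keras.utils.load_img(path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)[np.newaxis, ...]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
    preds = model.predict(x, verbose=0)
    _, label, score = tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=1)[0][0]
    return label, float(score)

# "frames/*.jpg" is a placeholder for frames already extracted from footage.
for frame in sorted(glob.glob("frames/*.jpg")):
    label, score = label_frame(frame)
    print(f"{frame}: {label} ({score:.2f})")
```

Even in this toy form, Greene’s point is visible: the same few lines that tag frames for a human analyst could, with a different model and a different downstream system, feed something far less benign.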

Google, a company whose motto used to be “Don’t Be Evil,” has lately had its ethics questioned over developing AI for the Pentagon. If you’re among the many people who don’t understand why the Mountain View company would risk such damage to its reputation, you’re not alone. It’s not the money: according to a report from Gizmodo, Google is earning only around $9 million from the project. That may sound like a lot, but let’s not forget that Google is worth nearly a trillion dollars. It can afford to skip a project that doesn’t suit its ethical makeup.

Greene goes on to point out that thousands of Google employees do not agree with the project and report being very uncomfortable with the whole idea.

Building AI isn’t the same thing as making a knife. Both can be used for good or bad, but a knife can’t be programmed to kill specific types of people; it does nothing unless a human wields it. There should be an ethical responsibility on the part of the U.S. government and private-sector companies to regulate the development and use of AI, especially when it comes to warfare.

“Hi Kids! Look! It’s the Algorithmic Warfare Fun Bunch! These are definitely NOT killer robots.” Seriously, this is the actual logo for the project. In Latin: “Our job is to help.” Photo Credit: The Department of Defense

Check out Greene’s opinion piece for some fine thinking on the subject.

read more at thenextweb.com.