Nature’s archive contains content of all kinds published since the journal’s launch in 1869. This month, Nature’s editors took on the problems posed by AI.

The New AI Systems Are Already Causing Societal Harms

The current state of computing has far surpassed the past in both breadth and speed. With AI, a student can ask for a text reviewing a chapter of a history assignment and have it ready in 10 seconds. These abilities feel comfortable to one part of society but incredibly frightening to others, who know from history how new inventions can go awry.

The editors at nature.com have given thought to our recent surge in interacting with AI. They have written an opinion piece arguing that we need to lighten up on the doomsday rhetoric while still formulating ways to keep the technology under control.

The Letter

Launched in November 2022, ChatGPT attracted a massive worldwide user base, reaching 100 million users in less than six months. And people were asking it to do many things that were, frankly, ethically and morally questionable: some had the AI write term papers for college students, while others turned in work reports written by the program.

That surge got the attention of high-tech heavy hitters, who composed an open letter to the AI community:

“In March, an open letter signed by Elon Musk and other technologists warned that giant AI systems pose profound risks to humanity. Weeks later, Geoffrey Hinton, a pioneer in developing AI tools, quit his research role at Google, warning of the grave risks posed by the technology. More than 500 business and science leaders, including representatives of OpenAI and Google DeepMind, have put their names to a 23-word statement saying that addressing the risk of human extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

On June 7, the UK government invoked AI’s potential existential danger when announcing it would host the first global AI safety summit this fall.

Level Playing Field

According to nature.com, many AI researchers and ethicists are frustrated by the “doomsday” talk around AI.

“It is problematic in at least two ways. First, the specter of AI as an all-powerful machine fuels competition between nations to develop AI so that they can benefit from and control it.

This works to the advantage of tech firms: it encourages investment and weakens arguments for regulating the industry. An actual arms race to produce next-generation AI-powered military technology is already underway, increasing the risk of catastrophic conflict — doomsday, perhaps, but not of the sort much discussed in the dominant ‘AI threatens human extinction’ narrative.”

The second problem is that this framing allows company executives and technologists to dominate the conversation about AI risks and regulation, while other communities are left out.

And it did feel like the letter’s signatories were saying that only they understood this AI, and that if the rest of us would just stop for six months, that would be great. For them.

Letters written by tech-industry leaders are “essentially drawing boundaries around who counts as an expert in this conversation,” says Amba Kak, director of the AI Now Institute in New York City, which focuses on the social consequences of AI.

Industry Standards Lacking

As the U.S. Congress holds committee hearings with Sam Altman and others, lawmakers are weighing regulations on how to protect the public from these powerful technologies.

“Earlier this month, [in June] the European Parliament approved the AI Act, which would regulate AI applications in the European Union according to their potential risk — banning police use of live facial-recognition technology in public spaces, for example. There are further hurdles for the bill to clear before it becomes law in EU member states and there are questions about the lack of detail on how it will be enforced, but it could help to set global standards on AI systems.”

What the Nature writers are saying is that we need to work in harmony with this new AI. That it is powerful is well understood. Governments and the corporations that oversee AI usage must create checks and balances and ensure it is used ethically.

The bad guys will get to use the same AI as the good guys, but the editorial’s authors believe that, in the end, the good will far outweigh the bad.

read more at nature.com