ChatGPT remains a controversial technology because of AI’s potential for abuse. Some say GPT-4 and other advanced AI systems should be paused for six months while new rules are created. The image shows an electronic screen introducing ChatGPT in China. (Source: Adobe Stock)

Researcher Seeks Middle Ground as AI Community Remains Divided over General AI

In a story for wired.com this week, Dr. Sasha Luccioni, a Researcher and Climate Lead at Hugging Face, discusses her take on advanced AI systems like ChatGPT and GPT-4 and what exactly they mean for us as humans.

Last week Seeflection.com carried a story about the AI experts calling for a six-month moratorium on AI chatbot development. Heavy hitters in tech, like Elon Musk, signed the letter, but on the other side of the issue, Bill Gates disagrees with it.

Luccioni, who studies the ethical and societal impacts of AI models and datasets, is also a Director of Women in Machine Learning (WiML), a founding member of Climate Change AI (CCAI), and Chair of the NeurIPS Code of Ethics committee. The first sentence of the last paragraph in Luccioni’s story might actually be the middle ground both sides are looking for:

“The recent open letter presents as a fact that superhuman AI is a done deal. But in reality, current AI systems are simply stochastic parrots built using data from underpaid workers and wrapped in elaborate engineering that provides the semblance of intelligence.”

In other words, these chatbots simply look for word patterns they have been trained on. They aren’t creating; they are imitating, at least for now.
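To make the “stochastic parrot” point concrete, here is a minimal, hypothetical sketch of pattern-based text generation: a toy bigram model in Python. It is a drastic simplification of how systems like GPT-4 actually work (they use large neural networks, not simple word counts), but it illustrates the underlying idea of producing text purely by replaying word patterns from training data.

```python
# Toy illustration of pattern-based generation: a bigram model that can
# only reproduce word sequences it has already seen in its training text.
# This is a simplification for illustration, not how GPT-4 is built.
import random
from collections import defaultdict

training_text = "the cat sat on the mat the dog sat on the rug"

# Record which words follow which in the training data.
next_words = defaultdict(list)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

# Generate text by repeatedly sampling a plausible next word.
word = "the"
output = [word]
for _ in range(8):
    candidates = next_words.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the rug the dog sat"
```

The output can look fluent, yet every transition comes straight from the training text, which is the sense in which such systems imitate rather than create.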

Luccioni says the call for a pause in AI is simply not feasible, and she explains why:

“While the training of large language models by for-profit companies gets most of the attention, it is far from the only type of AI work taking place. In fact, AI research and practice are happening in companies, academia, and Kaggle competitions worldwide on a multitude of topics ranging from efficiency to safety. This means that there is no magic button that anyone can press that would halt ‘dangerous’ AI research while allowing only the ‘safe’ kind. And the risks of AI which are named in the letter are all hypothetical, based on a longtermist mindset that tends to overlook real problems like algorithmic discrimination and predictive policing, which are harming individuals now, in favor of potential existential risks to humanity.”

And the upside of this particular kind of AI is also part of the downside: these systems are controlled by the major corporations that built them.

Luccioni calls for transparency and legislative oversight of AI. Instead of focusing on the ways AI may fail in the future, she argues, we should focus on clearly defining what constitutes AI success in the present.

“This path is eminently clear: Instead of halting research, we need to improve transparency and accountability while developing guidelines around the deployment of AI systems. Policy, research, and user-led initiatives along these lines have existed for decades in different sectors, and there are already concrete proposals to work with to address the present risks of AI.”

This well-reasoned article explores the impact of AI, how personal data is being used, and AI’s potential to erase livelihoods.

read more at wired.com