Toews Says It’s Time for Agency to Regulate Powerful AI Tech

Citing historical examples of new regulatory agencies created to monitor meat production (FDA), the stock market (SEC) and pollution (EPA), AI columnist Rob Toews issued a call for an agency to monitor AI in the United States in a column on June 28.

The column follows several reports of problems with facial recognition and other privacy-sensitive technologies with broad implications that have yet to be thoroughly vetted by any arm of the U.S. government.

“Across the AI community, there is growing consensus that regulatory action of some sort is essential as AI’s impact spreads. From deepfakes to facial recognition, from autonomous vehicles to algorithmic bias, AI presents a large and growing number of issues that the private sector alone cannot resolve.”

An unregulated Uber self-driving test car struck and killed a pedestrian in Tempe, Arizona, in 2018, a death that could have been prevented. States have taken piecemeal action in the wake of that event, but the federal government has no ongoing effort to ensure that test cars are safe on the road.

But Toews points out that even tech company leaders realize that if the federal government fails to monitor the industry, the fallout will reflect badly on all of them when something goes wrong. And something is bound to go wrong, because companies are constantly pushing the boundaries of ethics and safety.

In the words of Alphabet CEO Sundar Pichai: “There is no question in my mind that artificial intelligence needs to be regulated. It is too important not to.”

Toews explains that government agencies supply the research, analysts and expertise needed to oversee complex industries, citing pharmaceuticals as a prime example of one that requires constant oversight to protect the public from drugs that have not been properly reviewed. AI, too, requires experts with the specialized insight to weigh new technology and decide whether it is ready for prime time.

Toews, however, cites experts who say regulation should be narrow and tailored to specific technologies rather than applied broadly.

Stanford University’s One Hundred Year Study on AI made this point well: “Attempts to regulate ‘AI’ in general would be misguided, since there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains. Instead, policymakers should recognize that to varying degrees and over time, various industries will need distinct, appropriate, regulations that touch on software built using AI or incorporating AI in some way.”

The agency would work with existing ones where responsibilities overlap, Toews writes.

For instance, in crafting policies about the admissible uses of machine learning algorithms in criminal sentencing and parole decisions, the agency would collaborate closely with the Department of Justice, lending its subject matter expertise to ensure that the regulations are realistically designed.

The agency would need to monitor other problematic tech, too, he notes:

“There are numerous additional areas in which smart, well-designed AI policy is already needed: autonomous weapons, facial recognition, social media content curation, and adversarial attacks on neural networks, to name just a few.”

Toews does not mention that the EU is already tackling several of these issues with its privacy rules under the General Data Protection Regulation (GDPR), or that it is working with 25 countries to offer a “trustworthy AI” designation and to monitor facial recognition and AI development. While American tech companies bristle at knuckling under to such regulations in exchange for market access, in the long run they may be forced to do the same in the United States, once its leaders decide to stop letting companies use citizens as guinea pigs for new, and sometimes dangerous, technologies.

Congress is already considering legislation to regulate facial recognition, which has been misused in law enforcement, but that is a piecemeal measure that leaves many other looming issues in the tech arena unaddressed.