The European Union is updating consumer protections regarding AI, while the U.S. only offers guidelines.

27-Member EU Leads Charge in AI Regulation with More Consumer Oversight

When it comes to protecting consumers from abuse by AI, there is quite a difference between how the European Union and the United States approach the problem on each side of the Atlantic Ocean. The EU has had specific protections in place for years and is now updating them for consumers. Natasha Lomas has written an informative piece at techcrunch.com laying out what the EU has in mind for the near future, as its member countries work together to protect their citizens from malicious AI.

A recently presented European Union plan to update long-standing product liability rules for the digital age, including addressing the rising use of AI and automation, took some flak from the European consumer organization BEUC, which framed the update as something of a downgrade, arguing that EU consumers will be left less well protected from harms caused by AI services than by other types of products.

For a flavor of the sorts of AI-driven harms and risks that may be fueling demands for robust liability protections, only last month the U.K.'s data protection watchdog issued a blanket warning over pseudoscientific AI systems that claim to perform "emotional analyses," urging that such tech not be used for anything other than pure entertainment. On the public sector side, back in 2020, a Dutch court found that an algorithmic welfare risk assessment for social security claimants breached human rights law.

In recent years, the UN has also warned about the human rights risks of automating public service delivery. And for years, U.S. courts' use of black-box AI systems to make sentencing decisions has amounted to a tech-enabled injustice, opaquely baking bias and discrimination into the process. Yes, in some jurisdictions, AI is being used to review cases and then recommend prison or probation sentences for human defendants.

The idea of using AI in the justice arena is to save time and clear some of the courts' overloaded dockets. The question, however, is who decided that an algorithm should assess defendants and then hand down the justice it deems appropriate. Frightening indeed, on so many levels.

EU Not Shy About AI Protections

Not long ago, we published a story about the AI Act, which the EU eventually voted into place. The plan includes prohibitions on a small number of use cases considered too dangerous to people's safety or to EU citizens' fundamental rights, such as a China-style social credit scoring system or AI-enabled behavior manipulation techniques that can cause physical or psychological harm. There are also restrictions on law enforcement's use of biometric surveillance in public places, though with very wide-ranging exemptions.

The protections the EU is concerned with go beyond sentences handed down by AI in criminal cases. They cover a person's credit application being rejected out of hand by an algorithm, and how that person can seek redress for the setback. They cover privacy that goes unprotected and personal information that gets sold. Europe has the same types of problems we see in America every day.

The overarching goal for EU lawmakers is to foster public trust in how AI is implemented to help boost the uptake of the technology. Senior Commission officials talk about wanting to develop an “excellence ecosystem” that’s aligned with European values.

U.S. Fails to Protect Consumers

So far this year, the Biden administration has released only recommendations for regulating AI: a blueprint for the future, but not a law or even an enforceable regulation. Just guidelines.

The Blueprint for an AI Bill of Rights lays out five core protections to which everyone in America should be entitled:

Safe and Effective Systems: You should be protected from unsafe or ineffective systems.

Algorithmic Discrimination Protections: You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.

Data Privacy: You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.

Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.

Human Alternatives, Consideration, and Fallback: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

As we said, there are big differences between the two sides of the pond when it comes to keeping a regulatory eye on AI. Right now, the EU is leaps and bounds ahead of the U.S. in protecting consumers and in making tech companies aware of what they can be held accountable for.

read more at techcrunch.com