European Board to Lead the Way on AI Ethics Regulations
Last year the European Union implemented the world’s most stringent internet privacy regulation, the GDPR (General Data Protection Regulation), to codify how companies use, manage, and secure data. This year, the EU is tackling AI development.
A story on TheVerge.com outlined the requirements the EU wants to set for AI development, drafted by 52 experts in the tech field. Seven rules are central to the AI guidelines, which will be put to the test this summer with the involvement of several large technology companies:
- Human agency and oversight — AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes.
- Technical robustness and safety — AI should be secure and accurate. It shouldn’t be easily compromised by external attacks (such as adversarial examples), and it should be reasonably reliable.
- Privacy and data governance — Personal data collected by AI systems should be secure and private. It shouldn’t be accessible to just anyone, and it shouldn’t be easily stolen.
- Transparency — Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make.
- Diversity, non-discrimination, and fairness — Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines.
- Environmental and societal well-being — AI systems should be sustainable (i.e., they should be ecologically responsible) and “enhance positive social change.”
- Accountability — AI systems should be auditable and covered by existing protections for corporate whistleblowers. Negative impacts of systems should be acknowledged and reported in advance.
According to a CNN story, European regulators have learned from social media companies’ ills and abuses that they need to be proactive in preventing AI from committing ethical violations.
On Monday, April 8, Great Britain proposed new rules that would make internet companies legally responsible for removing harmful content from their platforms.
“It’s like putting the foundations in before you build a house…now is the time to do it,” Liam Benham, the vice president for regulatory affairs in Europe at IBM, which was involved in drafting the AI guidelines, told CNN.
A VentureBeat.com story explains that the “European AI Alliance” will recruit companies to participate in the pilot program, in which they will provide feedback on how the rules work in practice and how the guidelines might need to be altered.
Mariya Gabriel, Europe’s top official on the digital economy, told CNN that companies using AI systems should be transparent with the public.
“People need to be informed when they are in contact with an algorithm and not another human being,” said Gabriel. “Any decision made by an algorithm must be verifiable and explained.”
According to a story on TechRadar.com, the EU hopes that by setting the rules for AI companies to follow, it will give European companies a global competitive advantage in “trustworthiness” as it works toward building an AI alliance throughout Europe.