Global Group Advocates AI Protections for Future

The privacy group Electronic Privacy Information Center (EPIC) has called on the U.S. government, through the National Science Foundation, to adopt a range of guidelines to protect the public from AI. The twelve universal guidelines, revealed at a meeting in Brussels last week, are meant to “inform and improve the design and use of AI” by maximizing the technology’s benefits while reducing its risks.

For years, AI has served as a general term for machine-based decision making, but as the technology improves and spreads, its impact on human lives has grown, from decisions about credit and employment all the way to criminal sentencing.

The 12 guidelines are designed to be built into AI systems to ensure the protection of human rights. They include a right to know the factors, logic, and techniques used to reach a decision; a fairness obligation that bars discriminatory decision making; and an obligation to secure systems against cybersecurity threats. The principles also include a prohibition on unitary scoring, which would prevent governments from using AI to score their citizens and residents, an apparent reference to China’s controversial social credit system.

“By investing in AI systems that strive to meet the [universal] principles, NSF can promote the development of systems that are accurate, transparent, and accountable from the outset,” wrote Marc Rotenberg, EPIC’s president and executive director. “Ethically developed, implemented, and maintained AI systems can and should cost more than systems that are not, and therefore merit investment and research.”

The guidelines are as follows:

  • Right to Transparency. All individuals have the right to know the basis of an AI decision that concerns them. This includes access to the factors, the logic, and techniques that produced the outcome.
  • Right to Human Determination. All individuals have the right to a final determination made by a person.
  • Identification Obligation. The institution responsible for an AI system must be made known to the public.
  • Fairness Obligation. Institutions must ensure that AI systems do not reflect unfair bias or make impermissible discriminatory decisions.
  • Assessment and Accountability Obligation. An AI system should be deployed only after an adequate evaluation of its purpose and objectives, its benefits, as well as its risks. Institutions must be responsible for decisions made by an AI system.
  • Accuracy, Reliability, and Validity Obligations. Institutions must ensure the accuracy, reliability, and validity of decisions.
  • Data Quality Obligation. Institutions must establish data provenance, and assure quality and relevance for the data input into algorithms.
  • Public Safety Obligation. Institutions must assess the public safety risks that arise from the deployment of AI systems that direct or control physical devices, and implement safety controls.
  • Cybersecurity Obligation. Institutions must secure AI systems against cybersecurity threats.
  • Prohibition on Secret Profiling. No institution shall establish or maintain a secret profiling system.
  • Prohibition on Unitary Scoring. No national government shall establish or maintain a general-purpose score on its citizens or residents.
  • Termination Obligation. An institution that has established an AI system has an affirmative obligation to terminate the system if human control of the system is no longer possible.

As writer Zack Whittaker details, more than 200 experts and 50 organizations have signed on to the guidelines, including the American Association for the Advancement of Science and the Government Accountability Project.