Watchdog Groups Identify Problems with Algorithms Used to Govern the Public
The story we found at wired.com caught our attention because it lays out how algorithms are being used to help run some American and European cities. What stood out in this Khari Johnson article is not just that AI is running departments in some cities, but that what the algorithms are actually doing concerns many people.
The Electronic Privacy Information Center (EPIC), a nonprofit AI watchdog, is raising the alarm about how little the public knows about the ways these algorithms affect their lives. In a recent report, EPIC details a 14-month investigation into Washington, D.C.’s use of algorithms, finding they were used across 20 agencies, with more than a third deployed in policing or criminal justice.
Yes, the algorithms are dispensing justice in some areas. And it isn’t always done well.
EPIC reported several cases where the algorithms got it wrong. Government agencies often turn to automation in hopes of adding efficiency or objectivity to bureaucratic processes, but it’s often difficult for citizens to know these systems are at work, and some have been found to discriminate and lead to decisions that ruin human lives. In Michigan, an unemployment-fraud detection algorithm with a 93 percent error rate generated 40,000 false fraud allegations. A 2020 analysis by Stanford University and New York University found that nearly half of federal agencies were using some form of automated decision-making system.
EPIC dug deep into one city’s use of algorithms to give a sense of the many ways they can influence citizens’ lives and encourage people in other places to undertake similar exercises. Ben Winters, who leads the nonprofit’s work on AI and human rights, says Washington was chosen in part because roughly half the city’s residents identify as Black.
“More often than not, automated decisionmaking systems have disproportionate impacts on Black communities,” Winters says.
Last month, lawmakers in Pennsylvania, where a screening algorithm had accused low-income parents of neglect, proposed an algorithm registry law. And since the Biden administration has only promoted ‘guidelines’ for the use of AI, federal oversight may still be a long way off.
Keeping Quiet about AI Usage
For the most part, EPIC found agencies were less than forthcoming about their use of algorithms. They were unwilling to share information about their systems, citing trade secrecy and confidentiality, which made it nearly impossible to identify every algorithm used in DC. Earlier this year, a Yale Law School project made a similar attempt to count algorithms used by state agencies in Connecticut but was also hampered by claims of trade secrecy.
EPIC says governments can help citizens understand their use of algorithms by requiring disclosure anytime a system makes an important decision about a person’s life. The Wired article goes deeper on this subject, looking at problems that came up in Europe as cities there tried to get a better understanding of how many algorithms were in use and where.
Roughly two years ago the cities of Amsterdam and Helsinki announced plans to make comprehensive lists of their municipal algorithms, as well as the data sets used to train them and the city employees responsible. The idea was to help citizens seek redress from a human if they felt a system had problems.
But to date, Helsinki’s AI register largely serves as marketing for a set of city services chatbots. The Amsterdam Algorithm Register currently lists only six systems, including detecting illegal vacation rentals, automated parking control, and an algorithm used for reporting issues to the city. Together the two cities list a total of 10 automated decision-making systems, even though a document released by Amsterdam and Helsinki officials says they jointly had more than 30 AI projects underway in late 2020.
The idea of using algorithms to help run our municipalities is sound, and the practice will only grow year by year around the world. However, the public must be informed about the use of these AI-driven systems, because they impact people directly.
read more at wired.com