AI holds inordinate power over people’s lives when it is not regulated. (Source: Adobe Stock)

Report: The Formidable Power of Algorithms Needs Regulation Like New York’s

Most of us are at least somewhat aware of how much algorithms touch, if not control, our lives. Day-to-day routines in much of the world are increasingly guided by programs that make more and more decisions for us. Khari Johnson stated the concept well in the opening paragraph of a piece written for WIRED and republished at arstechnica.com.

The WIRED story explains the misuse of AI on many levels, and Johnson reports on steps being taken to correct some of the overreaches of biased algorithms.

“Algorithms play a growing role in our lives, even as their flaws are becoming more apparent: a Michigan man wrongly accused of fraud had to file for bankruptcy; automated screening tools disproportionately harm people of color who want to buy a home or rent an apartment; Black Facebook users were subjected to more abuse than white users. Other automated systems have improperly rated teachers, graded students, and flagged people with dark skin more often for cheating on tests.”

Those listed above are just a few of the harms attributed to AI. There have been some attempts at oversight, an AI Bill of Rights if you will, but the problem of reining in algorithms is both nationwide and worldwide, and proportionately very little has been done by way of legislation.

Some municipalities in California limit the use of facial recognition in certain instances, and in areas around Boston, some law enforcement departments are permitted only limited use of AI. Now New York City has limited the use of AI in how a person is hired or evaluated for a job.

European Union lawmakers are considering legislation requiring inspection of AI deemed high-risk and creating a public registry of high-risk systems. Countries including China, Canada, Germany, and the UK have also taken some steps to regulate AI in recent years.

Slanted Algorithms Impact Varied Areas

Algorithms are used to decide sentences in court cases. They decide whether you get a loan for a new truck, and they can even decide the type of health care you receive or how your operation is carried out. But many of these algorithms tilt in your favor if you have white skin, a better job, or the right education in your background, and some prefer men over women in certain scenarios.
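Audits of hiring algorithms like those New York now restricts often start by comparing selection rates across demographic groups. A minimal sketch of such a check, using the "four-fifths rule" heuristic (a group whose selection rate falls below 80% of the highest group's rate is flagged for possible adverse impact), might look like the following; the group names and numbers are hypothetical, not drawn from any real audit:

```python
# Sketch of a disparate-impact check on a hiring tool's decisions,
# using the four-fifths rule heuristic. All data here is hypothetical.

def selection_rates(outcomes):
    """outcomes: {group: (selected, screened)} -> {group: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def impact_ratios(outcomes, threshold=0.8):
    """Return {group: (ratio vs. best group, passes four-fifths rule)}."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best, r / best >= threshold) for g, r in rates.items()}

# Hypothetical audit data: (candidates selected, candidates screened)
audit = {"group_a": (48, 100), "group_b": (30, 100)}
for group, (ratio, passes) in impact_ratios(audit).items():
    print(f"{group}: impact ratio {ratio:.2f} {'OK' if passes else 'FLAGGED'}")
```

A real bias audit involves far more than this one ratio (confidence intervals, intersectional groups, missing demographic data), but the impact-ratio comparison is the usual starting point.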

A revamped version of the Algorithmic Accountability Act, first introduced in 2019, is now being discussed in Congress. According to a draft version of the legislation reviewed by WIRED, the bill would require businesses that use automated decision-making systems in areas such as health care, housing, employment, or education to carry out impact assessments and regularly report results to the FTC. A spokesperson for Senator Ron Wyden (D-Ore.), a cosponsor of the bill, says it calls on the FTC to create a public repository of automated decision-making systems and aims to establish an assessment process to enable future regulation by Congress or agencies like the FTC. The draft asks the FTC to decide what should be included in impact assessments and summary reports.

Johnson’s piece goes into detail about some of the movements under way across industries and governments alike, a movement that will hopefully make algorithms friendlier overall and fairer in every case they are asked to scrutinize.

The many reports this year on AI and its oversight include those from Cornell University and Microsoft. The researchers behind them also recommend better oversight across the board while it is still possible.

Julia Stoyanovich, an associate professor at New York University who served on the New York City Automated Decision Systems Task Force, says:

“I really believe that this cannot be a space where all the decisions and fixing comes from a handful of expert entities,” she says. “There needs to be a public movement here. Unless the public applies pressure, we won’t be able to regulate this in any way that’s meaningful, and business interests will always prevail.”

If you see something in your algorithms, say something. It will help the next person, and it will improve the range and use of AI.

read more at arstechnica.com