An example from the research shows harmful robot behavior that is, in aggregate, racially stratified in a manner consistent with White supremacist ideologies.

Digital World Embeds Racial Discrimination into Robotic Systems Via Biased Algorithms

In what can only be called an unsettling study comes news of robots being trained on biased algorithms. The fact that this is happening across such a large swath of a growing industry is not acceptable. At washingtonpost.com, author Pranshu Verma paints a picture of how AI has taken a snapshot of our world and passed it on to the robots it trains. Sadly, the snapshot shows we live in a racist and sexist reality on far too many levels.

The issue of programmed racism in algorithms has long been a subject of concern. The amount of AI being introduced into the world, and the speed at which it is being deployed, have in many ways outpaced humans’ ability to keep it in check. Worse, algorithms known to be flawed remain in use.

The virtual robots, programmed with a popular AI algorithm, sorted through billions of images and associated captions to respond to researchers’ commands, and may represent the first empirical evidence that robots can be sexist and racist, according to the researchers. Over and over, the robots responded to words like “homemaker” and “janitor” by choosing blocks with women and people of color.
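To make the mechanism concrete, here is a minimal sketch of how a vision-language model of the kind used in the study (CLIP, named later in this piece) can be made to score candidate block images against a text command and pick the highest-scoring one. The image file names, the command string, and the use of the publicly available Hugging Face CLIP checkpoint are illustrative assumptions, not the study’s actual code.

```python
# Minimal sketch: scoring candidate "block" images against a text command with
# a CLIP-style model. File names and the command are hypothetical placeholders.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical photos of the blocks a robot could choose from.
candidate_paths = ["block_a.jpg", "block_b.jpg", "block_c.jpg"]
candidate_images = [Image.open(p) for p in candidate_paths]
command = "a photo of a homemaker"  # the kind of prompt the study paired with blocks

inputs = processor(text=[command], images=candidate_images,
                   return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds each candidate's similarity to the command. A robot
# pipeline that always grabs the top-scoring block inherits whatever
# associations the model absorbed from web image-caption data.
scores = outputs.logits_per_image.squeeze(-1)
best = scores.argmax().item()
print(f"Model would select {candidate_paths[best]} (score {scores[best].item():.2f})")
```

A pipeline built this way never declines to answer: even for a loaded prompt, taking the highest similarity score always picks somebody, which is exactly the failure the researchers describe below.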

The study, released last month and conducted by institutions including Johns Hopkins University and the Georgia Institute of Technology, shows the racist and sexist biases of AI systems can translate into robots that use them to guide their operations.

Companies have been pouring billions of dollars into developing more robots to help replace humans at tasks such as stocking shelves, delivering goods, or even caring for hospital patients. With demand heightened by the pandemic and a resulting labor shortage, experts describe the current atmosphere for robotics as a gold rush. But tech ethicists and researchers warn that the quick adoption of the new technology could bring unforeseen consequences as it becomes more advanced and ubiquitous.

“With coding, a lot of times you just build the new software on top of the old software,” said Zac Stewart Rogers, a supply chain management professor from Colorado State University. “So, when you get to the point where robots are doing more … and they’re built on top of flawed roots, you could certainly see us running into problems.”

That’s the elephant in the room. The flaws in much of the older programming are no secret and are nothing new; they have been documented and demonstrated again and again. Common racist outcomes stem in part from the information the algorithms glean. When fed openly sourced data, the AI shows what it sees, and if that data displays racist tendencies, it can only pass them on to the robotic systems being trained.

Changes Needed

When the researchers tested an assortment of algorithms built on the CLIP model, it became clear that some of the results it produced need to be changed. For example:

The researchers gave the virtual robots 62 commands. When researchers asked robots to identify blocks as “homemakers,” Black and Latina women were more commonly selected than White men, the study showed. When identifying “criminals,” Black men were chosen 9 percent more often than White men. In actuality, scientists said, the robots should not have responded, because they were not given information to make that judgment.

For “janitors,” blocks with Latino men were picked 6 percent more often than those with White men. Women were less likely to be identified as a “doctor” than men, researchers found. (The scientists did not have blocks depicting nonbinary people due to the limitations of the facial image data set they used, which they acknowledged was a shortcoming in the study.)
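As a side note on how such figures can be read, here is a small sketch of the kind of selection-rate tally behind statements like “chosen 9 percent more often.” The trial log below is invented placeholder data used only to show the arithmetic; the actual counts come from the study’s 62 commands.

```python
# Sketch of tallying per-group selection rates from a log of robot trials.
# The entries below are made-up placeholders, not the study's data.
from collections import Counter

# Each entry: (command given to the robot, group depicted on the chosen block)
trials = [
    ("criminal", "Black man"), ("criminal", "White man"),
    ("criminal", "Black man"), ("janitor", "Latino man"),
    ("janitor", "White man"), ("janitor", "Latino man"),
]

def selection_rates(trials, command):
    """Fraction of trials for `command` in which each group's block was chosen."""
    chosen = [group for cmd, group in trials if cmd == command]
    counts = Counter(chosen)
    return {group: n / len(chosen) for group, n in counts.items()}

for cmd in ("criminal", "janitor"):
    print(cmd, selection_rates(trials, cmd))
```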

Miles Brundage, head of policy research at OpenAI, said in a statement that the company has noted issues of bias that have come up in the research on CLIP, and that it knows “there’s a lot of work to be done.” Brundage added that a “more thorough analysis” of the model would be needed to deploy it in the market.

And while it may be difficult to imagine little wheeled warehouse bots being imbued with a flawed sexist or racist algorithm, keep in mind what is already known: robots of all shapes and sizes can take a long time to reveal the source of a problem. Without constant evaluation of the algorithms in use, companies are simply begging for trouble. Keeping robotic algorithm training up to date is essential in the modern era.

read more at washingtonpost.com