Some Critics Say New U.S. AI Rights Guidelines Don’t Go Far Enough, But Are a Start
What’s been called the Golden Decade of AI has also been called the Wild West of AI. That is to say, there has been enormous growth of AI in all facets of society, but there are very few rules or much legislative oversight of AI and what people do with it. For example, back in 2019, some border authorities in Europe had a device they said was a facial lie detector. The system, called iBorderCtrl, analyzed facial movements to attempt to spot signs a person was lying to a border agent, and it had 20 years of research behind it. But polygraphs and other technologies built to detect lies from physical attributes have been widely declared unreliable by psychologists, and soon errors were reported from iBorderCtrl, too. Media reports indicated that its lie-prediction algorithm didn’t work, and the project’s own website acknowledged that the technology “may imply risks for fundamental human rights.”
The European Union seems to take this type of AI failure very seriously. It went so far as to pass an AI Act, and it is considering hundreds of amendments to it this year, according to an article we found at edsurge.com this week.
The United States has approached the AI problem differently: with guidelines, or suggestions, from the current administration.
Published last week, the Biden Administration’s “Blueprint for an AI Bill of Rights” is a non-binding set of principles meant to safeguard privacy. It includes a provision for data privacy and notes education as one of the key areas involved.
The blueprint was immediately characterized as broadly “toothless” in the fight to mend Big Tech and the private sector’s ways, with the tech writer Khari Johnson arguing that the blueprint has less bite than similar European legislation, and noting that it doesn’t mention the possibility of banning some AI. Instead, Johnson argued, the blueprint is most likely to course-correct the federal government’s relationship to machine learning.
Small Steps That Won’t Upset Big Tech
Of course, the main concern with this “blueprint” is not to go too far with government oversight of AI technology that is remaking our world, or of the companies that produce that tech. Nobody wants to kill the golden goose. But there is more to it than the economy. We are turning this technology over to the next generation of AI users, and the question is whether they understand that there must be some limitations on something this powerful.
The new guidelines also address the American education system’s use of AI — or, in many areas, its lack of use.
It’s unclear how the blueprint will be used by the Department of Education, says Jason Kelley, an associate director of digital strategy for the Electronic Frontier Foundation.
Education is one of the areas specifically mentioned in the blueprint, but observers have noted that the timeline for the Department of Education is relatively sluggish. For example, guidance on using AI for teaching and learning is slated for 2023, later than the deadlines for other government agencies.
Whatever guidelines emerge won’t be a panacea for the education system. But the government’s recognition that students’ rights are being violated by machine learning tools is a “great step forward,” Kelley wrote in an email to EdSurge.
Student privacy is a top concern across the country these days. In particular, many politicians, including Kari Lake, candidate for Arizona governor, are calling for cameras to be placed in classrooms. But in a pushback against that idea, a recent Center for Democracy & Technology study found that schools more often use surveillance systems to punish students than to protect them. The technology, while intended to prevent school shootings or alert authorities to self-harm risks, can harm vulnerable students, like LGBTQ+ students, the most, the study noted.
The approaches taken in Europe and in America may differ when it comes to reining in the Wild West of AI, but it is clear the Biden Administration at least wants to get the conversation started on the best ways to maintain some guardrails around students, and the public at large, when it comes to the power of AI.
read more at edsurge.com