FTC Asks Public, Experts for Input on Regulating AI Data Mining
Artificial intelligence has proven its value in nearly every field of study and profession. We have also seen its dark side in the hands of unscrupulous companies and individuals. Since the early days of the internet, a driving goal has been to collect personal information that can be turned into money.
How you shop, where you go, and how you spend your free time—all that personal data can be gathered and used years later for or against you. The agency charged with keeping your data safe is the Federal Trade Commission.
Khari Johnson, a writer at wired.com, focused in a recent article on how to keep your data out of the wrong hands. Johnson points out that teenagers go online with their friends and dive into Facebook or another social media platform, sometimes posting crude language or suggestive pictures. Years later, that youthful expression can keep them from getting a job, for instance.
Hye Jung Han of Human Rights Watch reports on how education companies are selling personal information to data brokers. She wants a ban on advertising to children that is fueled by their personal data.
“Commercial interests and surveillance should never override a child’s best interests or their fundamental rights, because children are priceless, not products,” she said.
Han was among roughly 80 people who spoke at the first public forum run by the FTC to discuss whether it should adopt new rules to regulate personal data collection and the AI that is fueled by that data.
The FTC is seeking the public’s help to answer questions about how to regulate commercial surveillance and AI. Among those questions is whether to extend the definition of discrimination beyond traditional measures like race, gender, or disability to include teenagers, rural communities, homeless people, or people who speak English as a second language.
Data Oversight Requested
We have all seen deepfake videos good enough to fool almost anyone, young people especially. And there are plenty of other ways bad actors turn personal data into money. As an FTC document proposing new rules puts it, that business model is:
“creating new forms and mechanisms of discrimination.”
Last month, the Federal Trade Commission voted 3-2 along party lines to adopt an Advance Notice of Proposed Rulemaking (ANPR) and consider drafting new rules to address unfair or deceptive forms of data collection or AI. What remains unclear is where the commission will draw the line.
The FTC is also taking some data brokers to court. Most recently, it has been in a legal fight with Kochava, a company that sells location data tied to sensitive places such as abortion clinics and domestic violence survivor shelters. New rules can address systemic problems and show businesses the kind of conduct that can lead to fines or land them in court.
The rulemaking process marks the start of AI regulation by the commission and follows its hiring of staff for that purpose a year ago. Any attempt to create new rules would require proving that an unfair or deceptive business practice is prevalent and meets the legal threshold for unfairness. The FTC will accept public comments on the commercial surveillance and AI ANPR until October 21, 2022.
This is a great opportunity to help create improved security for everyone’s data.
read more at wired.com