Privacy Concerns Fuel Debate
Online privacy has surged as a topic of intense international interest in light of the scandal over the exploitation of Facebook user information. While user data is already regulated in some countries, the European Union has passed a new privacy law that goes into effect this May; it was pending for two years, but seems eerily timely following the Facebook scandal.
As of March 27, Mark Zuckerberg, CEO of Facebook, had refused to testify before a United Kingdom Parliament committee about fake news on the social media platform. In the past, the company has sent a UK-based representative to appear on his behalf. Paul-Olivier Dehaye, co-founder of personaldata.io, has been fighting to force the company to comply with European data protection law, according to The Guardian:
“In the technical argument they’re shooting themselves in the foot, because they’re saying they’re so big the cost would be too large to provide me data.” In effect, Dehaye said, Facebook told him it was too big to regulate. “They’re really arguing that they’re too big to comply with data protection law, the cost is too high, which is mind-boggling that they wouldn’t see the direction they’re going there. Do they really want to make this argument?”
It’s too late for Americans whose information may have been misused, but Facebook provides data used in campaigns all over the world. It is also the only route onto the internet for many people in India and other Asian countries, where mobile devices come preloaded with Facebook as a browser via the company’s Internet.org app.
“Will Facebook be able to rebuild trust?” the Financial Times (FT) asked in a March 23 article on the company, reminding investors that $60 billion of stock value had disappeared this year. Most recently, the stock dropped as the scandal-plagued company dealt with a renewed public relations crisis after a whistleblower described Cambridge Analytica’s use of data from 50 million users. That “psychographic data” was gathered by a firm that polled Facebook users and “scraped” their friends’ data without consent. Facebook had Cambridge Analytica sign an agreement to delete the data, but the firm never did.
FT quoted Sandy Parakilas, the Facebook employee in charge of compliance and data protection from 2011 to 2012, saying he had warned the company about losing control of data to outside developers, particularly foreign states and data brokers. Testifying before a United Kingdom parliamentary committee this week, Parakilas said his advice was ignored.
In a January story, FT described how Max Schrems, an Austrian lawyer, filed a class-action lawsuit against the company in the European Court of Justice over its use of personal data, but he was not allowed to represent the 25,000 people who joined it—only himself.
David Carroll, an American professor, has also sued Cambridge Analytica in the United Kingdom for invasion of privacy, over its “surveillance” of ordinary people in compiling intimate details of their lives. He claims that the Facebook app did not explicitly obtain permission from everyone whose data was shared, even if those users had allowed sharing through third parties. (The EU law will still apply to Great Britain because the case was filed before Brexit.)
Facebook stock fell to its lowest point since its IPO, and its price-to-earnings ratio dropped to 18, an extreme low for a tech company.
According to Bloomberg Businessweek, the Federal Trade Commission has begun investigating whether Facebook violated terms of a consent decree over use of personal data after the news hit of Cambridge Analytica’s access to 50 million user accounts.
Facebook’s “Why am I seeing this?” button recently gained attention as yet another thorny aspect of the scandal, according to Bloomberg, because it deceives users. The answer Facebook gives is often untrue or obscured, or it blocks the user from learning more about the ads and why they were targeted to receive them.
Bloomberg tested the feature with its reporter Vernon Silver, who “liked” an anti-immigrant candidate and then received a pop-up ad. When he clicked the “Why” button, the explanation said only that the advertiser “wants to reach people 13 and older.” Obviously, that was at best a partial truth.
A study funded by the National Science Foundation confirmed that the answers Facebook provided were either “incomplete” or “misleading,” because the researchers had access to the data Facebook was concealing. Data compiled by ProPublica last year showed that some older users were excluded from employment ads, leading to the conclusion that the ads support age discrimination.
Personal data has always been shared with social media outlets, according to Analytics India Magazine. The current uproar involves ethics violations and the alleged abuse of data by researchers for political marketing. Retail stores like Target routinely collect consumer information (shopping online for pregnancy items, for instance, prompts the store to serve maternity ads). Spotify tracks users’ musical tastes, and Google offers suggestions based on past searches.
The difference in Cambridge Analytica’s case, however, is that the data it accessed had been collected for research. Instead, it became a means of earning $800,000 to help fuel a political campaign and manipulate the public through fake news generated by Russian trolls, which is especially egregious because the data was taken without the users’ knowledge.
Dominant web companies have been causing economic disruption for years, according to Jaron Lanier, author of “Who Owns the Future?” He points out that advertisers have long used data for behavioral targeting, which crosses a line deemed unacceptable only when it affects major decisions about housing, elections and finances, as with the ad that targeted only people of a certain age for employment.
Lanier’s book offers several suggestions for improving privacy, as reported by Analytics India:
◼︎ Additional laws and regulations are required to restrict the misuse of social media data
◼︎ Private organizations should create positions for Chief Privacy Officer to ensure data privacy
◼︎ To mitigate risks on such a massive scale, policymakers should propose a Consumer Privacy Bill to restrict the usage of data by big tech giants
◼︎ Governments should borrow the European Union’s General Data Protection Regulation (GDPR) guidelines to ensure user privacy
Writers for the UK-based Guardian have been highly critical of Facebook’s “Free Basics” initiative, which provides limited internet access in 65 countries, calling it digital colonialism. In light of the scandal, one recent Guardian writer demanded regulation of the company and others like it that exploit user data.
Headlined “Big Data for the People: It’s Time to Take It Back from Our Tech Overlords,” the piece by Ben Tarnoff advocates restricting data mining or requiring tech companies to compensate people for the use of their data, comparing it to oil rights.
“Like any extractive endeavor, big data produces externalities. The extractors reap profits, while the rest of us are left with the personal, social and environmental consequences. These range from the annihilation of privacy to algorithmic racism to a rapidly warming climate. The world’s data centers, for instance, put about as much carbon into the atmosphere as air travel.”
A longer version of the article will appear in the upcoming “Scale” issue at https://logicmag.io.