MIT’s Algorithm to Find Fake News Stories Gains Dataset Assist
The terms “echo chambers” and “filter bubbles” entered the vocabulary of analysts after the 2016 U.S. presidential election demonstrated how voter polarization gained momentum from “fake” news spread by bots and Russian hackers.
According to a study funded by the European Research Council and conducted in collaboration among Dartmouth College, Princeton University and the UK’s University of Exeter, one in four Americans visited a fake news website between October 7 and November 14, 2016. Almost 6 in 10 visits to fake news websites came from the 10% of people with the most conservative online information habits. Facebook ads sent many people to the bogus news sites.
In response, researchers at the Massachusetts Institute of Technology have developed AI techniques to identify machine-generated fake news. The “Grover” model has been operating for several months, and its ability to detect manipulated content is steadily improving. Researchers from the Allen Institute for Artificial Intelligence and the University of Washington’s Paul G. Allen School of Computer Science and Engineering are developing the tool further.
According to a story in Forbes magazine, the tools are arriving none too soon. The Trump re-election campaign has already been promoting fake news on Facebook about Joe Biden, one of his rivals in the Democratic primary, and Facebook has refused to do anything to stop the misinformation campaign.
“The most recent scandal to break in the age of fake news is once again linked to the Trump administration, relating to an allegedly false campaign ad about Joe Biden that was distributed on Facebook and Twitter – and which the social media giants refuse to remove,” Forbes reported. “The advertisement, claiming that Biden coerced Ukraine to fire a prosecutor targeting his son Hunter has been refused by some news outlets due to inaccuracy, but arguably today’s most influential distributor of news, Twitter and Facebook, have invoked their policies to support their decisions to continue hosting malevolent false claims.”
Two papers released on October 15 from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) show how the AI tools will have an impact. The first paper focuses on advances in Grover’s ability to identify computer-generated fake news, which spreads quickly on social media. The second describes fact-verification methods that rely on the FEVER dataset (the largest dataset for fact extraction and verification in text), taking a different approach to verifying claims and showing how bias in the data can thwart verification algorithms. That work might help limit the spread of false information in the first place.
The Grover algorithm flagged as false an AI-generated article that accurately described findings by NASA scientists, simply because the article was machine-generated. The FEVER dataset, meanwhile, turned out to be problematic because of biases baked into the database itself. The researchers developed a new dataset with those biases removed, on which the model improved through positive and negative reinforcement.
“True claims with the phrase ‘did not’ would be upweighted, so that in the newly weighted dataset, that phrase would no longer be correlated with the ‘false’ class,” says paper author Darsh J Shah, allowing incorrect classifications to be rectified over time.
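The re-weighting idea Shah describes can be sketched in a few lines. The snippet below is a toy illustration, not the authors’ actual method: the claims, labels and helper functions are all invented for the example. It shows how upweighting true claims containing a giveaway phrase like “did not” balances that phrase across the true and false classes, so a classifier can no longer use the phrase itself as a shortcut.

```python
from collections import Counter

# Toy dataset of (claim, label) pairs. In the biased data, "did not"
# appears mostly in false claims, so a model can "cheat" by keying on
# the phrase instead of checking the claim against evidence.
claims = [
    ("The probe did not reach orbit", "false"),
    ("The team did not win the final", "false"),
    ("The law did not pass in 2018", "true"),
    ("The bridge opened in 1932", "true"),
    ("The summit was held in Oslo", "true"),
]

PHRASE = "did not"

def phrase_class_counts(data, phrase):
    """Count how often the phrase occurs in claims of each class."""
    counts = Counter()
    for text, label in data:
        if phrase in text:
            counts[label] += 1
    return counts

def reweight(data, phrase):
    """Attach a weight to each example, boosting true claims that
    contain the phrase until its weighted mass is equal in both
    classes (so the phrase no longer correlates with 'false')."""
    counts = phrase_class_counts(data, phrase)
    boost = counts["false"] / max(counts["true"], 1)
    return [
        (text, label, boost if phrase in text and label == "true" else 1.0)
        for text, label in data
    ]

weighted = reweight(claims, PHRASE)
```

After re-weighting, the single true claim containing “did not” carries as much weight as the two false ones combined, so the phrase carries no signal about the label. A real system would apply this balancing across many such phrases at once rather than one hand-picked string.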
As the Trump campaign, Facebook and Twitter continue to spread fake news, tools to identify and fight it are more important than ever.