AI to the Rescue?
If you spend any time online, you have most likely read or clicked on a FAKE NEWS story or post. With certain politicians repeatedly complaining that fake news is the product of fake media, it has become a very real threat to real life. Part of the fake news problem can be seen in the recent launch of Hillary Clinton's book What Happened.
According to Review Meta, a watchdog site that analyzed the 1,600 reviews posted within hours of the book's release, Amazon deleted nearly 1,200 of them as having come from bots, paid negative reviewers, or 'incentivized' reviews.
Unlike fake news stories, which someone writes and then tries to spread virally through social media, artificial reviews work only if they are manufactured in volume and posted to sites where a particular item is sold or advertised. The outcome of such efforts can mean a difference of millions of dollars. Yelp, for example, was founded in 2004 to provide crowd-sourced reviews of local businesses, and it has long been aware of fake reviews: it has posted more than 135 million reviews covering about 2.8 million businesses since launching.
Yelp has been using a machine-learning technique called deep learning to filter its reviews. Deep learning requires an enormous amount of computation and entails feeding vast data sets into large networks of simulated artificial “neurons” based loosely on the neural structure of the human brain.
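Yelp's actual models are proprietary, but the core idea behind such a filter — a network of simulated neurons whose connection weights are adjusted by gradient descent until it separates two classes of examples — can be sketched in miniature. Everything below (the features, the labels, the network size) is hypothetical toy data, not Yelp's system:

```python
# A toy feed-forward network of simulated "neurons" trained to separate
# two classes of review feature vectors. Purely illustrative: real deep
# learning uses far larger networks and data sets.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical features: [exclamation_rate, superlative_rate, account_age]
# Label 1 = suspected fake, 0 = genuine (invented training examples).
data = [
    ([0.9, 0.8, 0.1], 1), ([0.8, 0.9, 0.2], 1), ([0.7, 0.9, 0.1], 1),
    ([0.1, 0.2, 0.9], 0), ([0.2, 0.1, 0.8], 0), ([0.1, 0.1, 0.9], 0),
]

n_in, n_hid = 3, 4
W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
W2 = [random.uniform(-1, 1) for _ in range(n_hid)]
b2 = 0.0

def forward(x):
    # One hidden layer of sigmoid neurons, one sigmoid output neuron.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return h, y

lr = 1.0
for _ in range(3000):  # plain stochastic gradient descent (backpropagation)
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)                 # output-layer delta
        for j in range(n_hid):
            dh = dy * W2[j] * h[j] * (1 - h[j])    # hidden-layer delta
            W2[j] -= lr * dy * h[j]
            for i in range(n_in):
                W1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy

preds = [round(forward(x)[1]) for x, _ in data]
print(preds)  # the trained net reproduces the toy labels
```

The point is only the shape of the technique: stacked layers of weighted sums and nonlinearities, tuned by feeding labeled examples through the network and nudging the weights to reduce the error.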
Yelp's programs and security were originally based on human reviewers. Now that automated, machine-generated reviews are possible, it has been shown that humans will accept machine-written reviews nearly as often as human-written ones. “We have validated the danger of someone using AI to create fake accounts that are good enough to fool current countermeasures,” says Ben Zhao, a University of Chicago professor of computer science who will present the research with his colleagues next month at the ACM Conference on Computer and Communications Security in Dallas.
Like Yelp, Amazon and other Web sites use filtering software to detect suspicious reviews. This software is based on machine-learning techniques similar to those the researchers developed to write their bogus evaluations. Some filtering software tracks and analyzes data about reviewers such as their computers’ identifying internet protocol (IP) addresses or how often they post. Other defensive programs examine text for recurring words as well as phrases that may have been plagiarized from other Web sites.
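As a rough illustration (not Amazon's or Yelp's actual software), a filter built on the signals described above — posting volume per IP address and phrases repeated verbatim across different reviews — might look like the sketch below. The review schema and the thresholds are assumptions chosen for the example:

```python
# A heuristic review filter: flag reviews from IP addresses that post
# too often, and reviews that share long word sequences with other
# reviews (a sign of copied or templated text).
from collections import Counter

def ngrams(text, n=4):
    """All n-word phrases in a review, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def flag_suspicious(reviews, max_per_ip=2, shared_phrase_threshold=2):
    """reviews: list of dicts with 'id', 'ip', 'text' keys (hypothetical schema)."""
    per_ip = Counter(r["ip"] for r in reviews)
    phrase_count = Counter()
    for r in reviews:
        for g in ngrams(r["text"]):
            phrase_count[g] += 1

    flagged = set()
    for r in reviews:
        if per_ip[r["ip"]] > max_per_ip:
            flagged.add(r["id"])   # too many posts from one address
        elif any(phrase_count[g] >= shared_phrase_threshold
                 for g in ngrams(r["text"])):
            flagged.add(r["id"])   # phrase duplicated across reviews
    return flagged

reviews = [
    {"id": 1, "ip": "10.0.0.1", "text": "worst book I have ever read in my life"},
    {"id": 2, "ip": "10.0.0.2", "text": "worst book I have ever read in my life"},
    {"id": 3, "ip": "10.0.0.3", "text": "a thoughtful and candid account of the campaign"},
]
print(flag_suspicious(reviews))  # reviews 1 and 2 share identical phrases
```

Real filters weight many more signals (account age, review timing, rating distributions) and, as the article notes, increasingly rely on learned models rather than hand-set thresholds like these.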
There is more to come in the battle over what is REAL fake news and what is FAKE fake news. Luckily, articles intended purely to spread falsehoods and misinformation are currently written by humans in order to come off as authentic.
Filippo Menczer, a professor of informatics and computer science at the Indiana University School of Informatics and Computing, recently said that this “is something that a machine is not capable of doing with today’s technology. Still, skilled AI scientists putting their effort into this not-so-noble task could probably create credible articles that spread half-truths and leverage people’s fears.”