
2018 Propaganda Could Be a Repeat of 2016, on Steroids

Russian-paid trolls were so successful at infiltrating and influencing U.S. elections that others are now trying to copy them on an industrial scale.

Efforts made during the last election cycle to stop bad actors have pushed them toward new tactics that evade increasingly sophisticated detection and exploit new technologies, making some of them harder to identify and stop in real time. In 2016, their common tactics focused on spreading fake news widely, loudly and clumsily. Now they have updated their methods with more sophisticated technologies such as artificial intelligence (AI).

Russians, bots and social media fraudsters created spoofed domains such as abc.com.co or usatoday.com.co. A Google Doc from media professor Melissa Zimdars warning users about these kinds of domains was widely shared in late 2016. A similar tactic is still used by politicians and political groups to create partisan sites that look independent but aren't. Platforms have also cracked down on cloaking, a tactic that gets people to click on something, often a video, by misleading them about its content.

“We’ve seen the trend increase, with bad actors using malware botnets more than anything else. A few years ago, 60% of non-human activity was malware, but now roughly 75% of bot attacks come from compromised devices (botnets).”
— Tamer Hassan, co-founder & CTO at White Ops, a fraud detection firm

Information abusers seek to imitate normal communications rather than spreading blatant commentary that could get them flagged for hate speech or violence. Bad actors are deploying malware, dark texts, deepfakes (AI-manipulated photos and video) and other techniques to steal from or manipulate businesses, regardless of sector.

Many social media platforms have taken more action to remove the financial incentives for creators of fake political news, but much of the work done by bad actors in 2016 laid the foundation for even more elaborate attacks in 2018.

read more at axios.com