Community Reviewers Increasing
If you have spent any time on Facebook, or any other social media platform for that matter, then you have probably seen deepfake posts. Perhaps you spotted them right away, but most likely you didn't. Deepfake technology has become very effective at promoting almost any agenda, true or false, especially in politics.
Facebook has stepped up its efforts to ban deepfakes after being called out by several critics and by the House Energy and Commerce Committee. An article on venturebeat.com by Kyle Wiggers outlines the changes coming to Facebook.
Facebook announced that it will strengthen its policy on misleading videos identified as deepfakes — those that take a person in an existing image, audio recording, or video and replace them with someone else’s likeness. While the ban won’t extend to parody, satire, or video that’s been edited solely to change the order of words, it will affect a swath of edited and synthesized content published for the purpose of tricking viewers.
In a blog post confirming an earlier report from the Washington Post, Facebook global policy management vice president Monika Bickert said going forward Facebook will remove media that’s been modified “beyond adjustments for clarity or quality” in ways that “aren’t apparent to the average person.” Content generated by machine learning algorithms that merge, replace, or superimpose people will also be subject to deletion, she said.
Deepfake videos that aren't removed might be subject to review by Facebook's independent third-party fact-checkers, which now include more than 50 partners globally working in over 40 languages.
The AI-driven ability to implant your face into a porn video, or to place you at the scene of a crime by altering security camera footage, is getting better, and deepfakes are multiplying quickly. Amsterdam-based cybersecurity startup Deeptrace found 14,698 deepfake videos on the internet during its most recent tally in June and July, up from 7,964 last December, an 84% increase in only seven months.
Detection Challenge
In an effort to keep deepfakes from spreading, Facebook is spearheading the Deepfake Detection Challenge, announced in September, along with Amazon Web Services (AWS); Microsoft; the Partnership on AI; and academics from Cornell Tech, MIT, the University of Oxford, UC Berkeley, the University of Maryland, College Park, and the State University of New York at Albany. It launched globally at the NeurIPS 2019 conference in Vancouver last month, with the goal of catalyzing research to ensure the development of open source detection tools.
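To give a sense of what such an open source detection tool looks like in practice, here is a minimal sketch of a frame-level deepfake classifier in PyTorch. This is purely illustrative and is not the challenge's actual baseline; the class and function names (FrameClassifier, train_step) and the dummy data are assumptions for the example.

```python
# Illustrative sketch only: a tiny CNN that scores individual video frames as
# real (0) or fake (1). Real detection systems use far larger models, face
# detection/cropping, and temporal cues across frames.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Small CNN producing a single logit per frame (sigmoid gives fake probability)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)

def train_step(model, optimizer, frames, labels):
    """One gradient step on a batch of frames (N, 3, H, W) with binary labels (N,)."""
    criterion = nn.BCEWithLogitsLoss()
    optimizer.zero_grad()
    logits = model(frames).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = FrameClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Dummy batch standing in for real labeled training data: 8 random 128x128 RGB frames.
    frames = torch.randn(8, 3, 128, 128)
    labels = torch.randint(0, 2, (8,))
    print("loss:", train_step(model, optimizer, frames, labels))
```

In a real pipeline, the per-frame scores would be aggregated across a whole video (for example, by averaging) before deciding whether to flag it.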
The winner of this contest stands to collect big prize money. The effort is driven in large part by concerns about smearing candidates in the upcoming 2020 elections. Watchdog groups say deepfake programs have helped sway numerous elections worldwide.
read more at venturebeat.com