Facebook has confirmed that it will remove deepfake and other manipulated videos from its platform, in what appears to be a U-turn for the social networking giant.
Deepfakes are computer-generated content (typically video clips) that can be surprisingly realistic. These videos often depict politicians or celebrities doing or saying something they wouldn’t do or say in real life.
Such is the concern around the technology that in 2018 DARPA (a US Defence Department research agency best known for backing early work on the internet and self-driving vehicles) began seeking ways to develop technologies aimed at spotting artificially generated images and videos.
Facebook confirmed in a blog post that it would ban deepfake videos, despite having reportedly refused to take down such videos in the past.
“People share millions of photos and videos on Facebook every day, creating some of the most compelling and creative visuals on our platform,” said Facebook. “Some of that content is manipulated, often for benign reasons, like making a video sharper or audio more clear. But there are people who engage in media manipulation in order to mislead.”
“Manipulations can be made through simple technology like Photoshop or through sophisticated tools that use artificial intelligence or ‘deep learning’ techniques to create videos that distort reality – usually called ‘deepfakes’,” said Facebook. “While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases.”
“Going forward, we will remove misleading manipulated media if it meets the following criteria,” said the social networking giant.
Facebook said it would remove content if a video has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words they did not actually say.
And it would remove content if it is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
But Facebook also said that this policy change “would not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words.”
And the social networking giant told Reuters that its new policy would not see it remove a heavily edited video that last year attempted to make US House Speaker Nancy Pelosi seem incoherent by slurring her speech and making it appear as if she was repeatedly stumbling over her words.
“The doctored video of Speaker Pelosi does not meet the standards of this policy and would not be removed,” Facebook told Reuters. “Only videos generated by artificial intelligence to depict people saying fictional things will be taken down.”
Facebook said that once the video of Speaker Pelosi was rated false by a third-party fact-checker, it reduced the video’s distribution and, critically, people who saw it, tried to share it or had already shared it received warnings that it was false.
In May last year, Speaker Pelosi slammed Facebook over its refusal to take down the video, a refusal she said convinced her that the social networking giant knowingly enables Russian election interference.
The European Commission also rebuked Facebook, Google and Twitter in 2019 over their efforts to crack down on fake news.
All three firms had signed the EU Code of Practice against disinformation, and were asked to report monthly on their actions.
In November 2019 Twitter announced that it was banning all political advertising worldwide.
The move put Twitter in direct contrast to Mark Zuckerberg, who said that Facebook would not ban political adverts, but would instead clamp down on false information about voting requirements.
Meanwhile Facebook’s apparent U-turn over the issue of deepfake videos has been noted by some security experts.
“After having previously refused to take down deep fake videos, this is an interesting announcement by Facebook,” said Javvad Malik, security awareness advocate at KnowBe4.
“However, this in itself will not achieve much,” said Malik. “The fact that parody and satire are excluded could mean that most people could argue that any flagged video is merely intended to be satire.”
“Secondly, the issue of fake news, or manipulating the facts that people are exposed to, goes beyond deep fake videos,” he said. “Facebook should also consider its stance on whether or not it will vet political ads or other stories for accuracy.”
“Finally, there are ways in which videos can be manipulated without the use of deep fake technology,” said Malik. “Splicing together reactions from different shots, changing the audio, or even the speed of a video can drastically alter the message the original video intended to give.”
“Rather than trying to find and ban deep fakes, Facebook could have considered placing a big watermark on deep fake videos which indicate it is a computer generated video and not real,” he concluded. “But that needs to be done against more than just deep fakes if it is to make any measurable difference to the proliferation of fake news.”