Deepfakes: the manipulated (but very realistic) videos haunting the web

Artificial intelligence technology makes it possible to create fake videos that are increasingly difficult to detect, and to make anybody say whatever you want on camera


A brand new spectre is haunting the Internet: so-called deepfake videos.

Basically, it is an artificial intelligence technique for manipulating images: it can superimpose one person's face on another person's body, or alter the lip movements in a speech to make the speaker appear to say words or sentences they never actually said.

It’s not exactly clear when this technology was invented.

The word itself was coined in late 2017 by a Reddit user who adopted the pseudonym deepfakes to publish a fake porn video purporting to feature Wonder Woman actress Gal Gadot.

That first example was followed by dozens of other doctored videos, published online, featuring all kinds of celebrities.

One of the most famous is the video in which Mark Zuckerberg appears to describe himself as "one man with total control of billions of people's stolen data, all their secrets, their lives, their futures".

The same thing happened to Kit Harington, Game of Thrones' Jon Snow, who in his fake video apologizes for the way the series ended, and even to high-profile politicians such as Donald Trump and Barack Obama.


Doctoring videos is getting easier and easier


The concept of editing images, whether for satire or for misinformation, is of course not new: people have been retouching photos with Photoshop for decades.

The difference is that the most recent software achieves increasingly sophisticated results with an increasingly simple process. Programs such as FakeApp, already available for download from any store, make it possible to replace a person's face in any video.

Even Samsung recently unveiled an algorithm that can create fake videos of anybody, based on a single image of them.

Deepfake technology relies on two AI algorithms that together form a generative adversarial network: one is the generator, the other is the discriminator.

The former creates the fakes; the latter tries to tell them apart from real videos.

Early attempts at deepfakes were fairly easy to spot, for example because the people in them didn't blink at a natural frequency.

But as discriminators got better at finding fakes, generators got better at creating them: the result is that the latest generation of the technology has become more realistic than ever.
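The generator-versus-discriminator loop described above can be sketched in a few lines of Python. The toy example below is purely illustrative and is nothing like production deepfake software: the "generator" is a two-parameter linear model that learns to mimic a one-dimensional Gaussian instead of video frames, and the "discriminator" is a logistic classifier. But the adversarial training scheme, in which each side's progress forces the other to improve, has the same structure:

```python
import math
import random

random.seed(0)

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def mean(xs):
    return sum(xs) / len(xs)

# "Real" data: samples from a Gaussian centred on 4.0 (stands in for real videos)
def real_batch(n):
    return [random.gauss(4.0, 0.5) for _ in range(n)]

a, b = 1.0, 0.0   # generator g(z) = a*z + b
w, c = 0.1, 0.0   # discriminator d(x) = sigmoid(w*x + c)
lr, n = 0.05, 64

for step in range(2000):
    # Discriminator update: push d(real) toward 1 and d(fake) toward 0
    xr = real_batch(n)
    zs = [random.gauss(0.0, 1.0) for _ in range(n)]
    xf = [a * z + b for z in zs]
    dr = [sigmoid(w * x + c) for x in xr]
    df = [sigmoid(w * x + c) for x in xf]
    w += lr * (mean([(1 - d) * x for d, x in zip(dr, xr)])
               - mean([d * x for d, x in zip(df, xf)]))
    c += lr * (mean([1 - d for d in dr]) - mean(df))

    # Generator update: push d(fake) toward 1 (non-saturating GAN loss)
    zs = [random.gauss(0.0, 1.0) for _ in range(n)]
    xf = [a * z + b for z in zs]
    df = [sigmoid(w * x + c) for x in xf]
    a += lr * mean([(1 - d) * w * z for d, z in zip(df, zs)])
    b += lr * mean([(1 - d) * w for d in df])

print(f"generator mean after training: {b:.2f} (real data mean: 4.0)")
```

Real deepfake systems replace these linear models with deep neural networks trained on thousands of face images, but the push-and-pull between the two networks works the same way.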

It’s not always possible to tell if a video is a fake just by viewing it.


How to defend against deepfakes


The good news is that, in most cases, technology can also identify manipulations by looking for flaws hidden among the pixels of specific frames that cannot easily be fixed.

One of the latest deepfake detection algorithms, created by the USC Information Sciences Institute in California, claims to reach 97% accuracy.

Another technological defence against fakes is hashing: a form of digital watermarking in which a short string of numbers is attached to the video file and no longer matches if the video is tampered with. But none of these techniques can entirely remove the risk of deepfakes.
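The idea behind hashing can be illustrated with Python's standard library. This sketch uses a plain cryptographic hash rather than a production watermarking scheme (real systems use more robust fingerprints designed to survive legitimate re-encoding), but it shows the core property: changing even a single bit of the file produces a different fingerprint.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest that changes if the data is altered in any way."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for a video file's raw bytes (a real check would read the file)
original = b"\x00\x00\x00\x18ftypmp42 example video payload"
tampered = bytearray(original)
tampered[10] ^= 0x01  # flip a single bit, as a doctored frame would

print(fingerprint(original) == fingerprint(bytes(tampered)))  # prints False
```

A platform could store the original fingerprint when a video is published and recompute it later: any mismatch means the file has been modified since.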

In fact, the United States Congress has already sounded the alarm ahead of the 2020 presidential race, which could be seriously damaged by fake videos.

In the last few months not one, but two bills have been proposed: the Malicious Deep Fake Prohibition Act of 2018 and the Deep Fakes Accountability Act.

The legal measures that could be taken to limit the potential harm of deepfake videos are already under discussion, but a possible side effect is a restriction of freedom of speech on the Internet.