How to mitigate the impact of deepfakes

With deepfakes becoming ever more common, and ever more convincing, how can you protect your business?

Deepfakes are just one unfortunate product of recent developments in the field of artificial intelligence. Fake media generated by machine-learning algorithms has gained a lot of traction in recent years. Alyssa Miller's talk at RSA Conference 2020, titled "Losing our reality," offers some insight into why it's time to consider deepfakes a threat, election year aside, and what your business can actually do to mitigate the impact if it's attacked in such a way.

How deepfakes are made

The most common approach to creating a deepfake is using a system called GAN, or generative adversarial network. GANs consist of two deep neural networks competing against each other. To prepare, both networks are trained on real images. Then, the adversarial part begins, with one network generating images (hence the name generative) and the other one trying to determine whether the image is genuine or fake (the latter network is called discriminative).

After that, the generative network learns from the result. At the same time, the discriminative network learns how to improve its own performance. With each cycle, both networks get better.

Fast forward, say, a million training cycles: The generative neural network has learned how to generate fake images that an equally advanced neural network cannot distinguish from real ones.
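
To make the adversarial loop concrete, here is a minimal sketch of one GAN training step in PyTorch. It is an illustration under assumptions, not anything from Miller's talk: the toy network sizes, layer choices, and hyperparameters are all invented, and real deepfake pipelines are far more elaborate.

    # Minimal GAN training step (illustrative only). The generator tries to
    # fool the discriminator; the discriminator tries not to be fooled.
    import torch
    import torch.nn as nn

    LATENT_DIM, IMG_DIM = 64, 28 * 28  # toy sizes, assumed for illustration

    generator = nn.Sequential(
        nn.Linear(LATENT_DIM, 256), nn.ReLU(),
        nn.Linear(256, IMG_DIM), nn.Tanh(),
    )
    discriminator = nn.Sequential(
        nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1),  # raw logit: real vs. fake
    )

    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    def train_step(real_images: torch.Tensor) -> None:
        batch = real_images.size(0)
        noise = torch.randn(batch, LATENT_DIM)
        fake_images = generator(noise)

        # 1) Discriminator step: label real images 1, generated images 0.
        #    detach() keeps this step from updating the generator.
        d_opt.zero_grad()
        d_loss = loss_fn(discriminator(real_images), torch.ones(batch, 1)) \
               + loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1))
        d_loss.backward()
        d_opt.step()

        # 2) Generator step: try to make the discriminator call fakes "real".
        g_opt.zero_grad()
        g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
        g_loss.backward()
        g_opt.step()

In a full training run, train_step would be called on batches of real images over and over, which is where the "million training cycles" intuition above comes from.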

This method has many useful applications; depending on the training data, the generative network learns to generate particular kinds of images.

Of course, for deepfakes, the algorithm is trained on real photos of certain people, resulting in a network that can generate an infinite number of convincing (but fake) photos of the person ready to be integrated into a video. Similar methods could generate fake audio, and scammers are probably using deepfake audio already.

How convincing deepfakes have become

Early deepfake videos looked ridiculous, but the technology has evolved to the point where such media are frighteningly convincing. One of the most notable examples from 2018 was a fake Barack Obama talking about, well, deepfakes (plus the occasional insult aimed at the current US president). In mid-2019, we saw a short video of a fake Mark Zuckerberg being curiously honest about the current state of privacy.

To understand how good the technology has become, simply watch the video below. Impressionist Jim Meskimen created it in collaboration with deepfake artist Sham00k. The former was responsible for the voices, and the latter applied the faces of some 20 celebrities to the video using deepfake software. The result is truly fascinating.

As Sham00k says in the description of his behind-the-scenes video, "the full video took just over 250 hours of work, 1,200 hours of footage, 300,000 images and close to 1 terabyte of data to create." That is no small feat. But measured against the potential payoff of convincing disinformation, which can move markets or, say, elections, the process seems frighteningly easy and inexpensive.

For that reason, almost at the same time that the abovementioned video was published, California outlawed political deepfake videos during election season. However, problems remain. For starters, deepfake videos are, in essence, a form of expression, like political satire, so California's ban sits uneasily with freedom of speech.

The second problem is both technical and practical: How exactly are you supposed to tell a deepfake video from a real one?

How to detect deepfakes

Machine learning is all the rage among researchers worldwide, and the deepfake problem looks interesting and challenging enough to tempt many of them to jump in. For that reason, quite a few research projects have focused on using image analysis to detect deepfakes.

For example, a paper published in June 2018 describes how analyzing eye blinks can aid in the detection of deepfake videos. The idea is that photos of a given person blinking are typically scarce, so the neural network has too few closed-eye images to train on. Indeed, people in deepfakes at the time the paper was published blinked far too rarely to be believable, and although viewers found the discrepancy hard to spot, computer analysis caught it.
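
As a rough illustration of the kind of signal such blink analysis relies on, the sketch below computes the widely used eye aspect ratio (EAR) over per-frame eye landmarks and flags clips whose blink rate looks implausibly low. This is a simplification, not the paper's actual method: the landmark layout, thresholds, and "suspicious" cutoff are all assumptions, and landmark extraction (e.g., with dlib or MediaPipe) is presumed to happen upstream.

    # Illustrative sketch (not the cited paper's method): flag videos whose
    # blink rate is implausibly low, using the eye aspect ratio (EAR).
    import numpy as np

    def eye_aspect_ratio(eye: np.ndarray) -> float:
        """eye: (6, 2) array of landmarks around one eye, in the common
        6-point layout (corners at indices 0 and 3, lids at 1, 2, 4, 5).
        EAR drops sharply when the eye closes."""
        vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
        horizontal = np.linalg.norm(eye[0] - eye[3])
        return vertical / (2.0 * horizontal)

    def blinks_per_minute(ear_series: list[float], fps: float,
                          closed_thresh: float = 0.2) -> float:
        """Count open-to-closed transitions of the EAR as blinks."""
        closed = [ear < closed_thresh for ear in ear_series]
        blinks = sum(1 for prev, cur in zip(closed, closed[1:]) if cur and not prev)
        minutes = len(ear_series) / fps / 60.0
        return blinks / minutes if minutes > 0 else 0.0

    # Humans typically blink roughly 15-20 times per minute; the cutoff
    # below is a guess chosen only to illustrate the idea.
    def looks_suspicious(ear_series: list[float], fps: float) -> bool:
        return blinks_per_minute(ear_series, fps) < 5.0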

Two papers submitted in November 2018 suggested looking for face-warping artifacts and inconsistent head poses. Another one, from 2019, described a sophisticated technique that analyzes the facial expressions and movements that are typical for an individual’s speaking pattern.

However, as Miller points out, those methods are unlikely to succeed in the long run. What such research really does is provide feedback to deepfake creators, helping them improve their discriminative neural networks, in turn leading to better training of generative networks and further improving deepfakes.
...