Seeing was Believing: Part 1

Every year, when Christmas arrives, there is a tradition in the United Kingdom: the Queen addresses the nation on national television. But last year was a bit different. An “alternative” message from the Queen was broadcast on Channel 4 [1]. Let’s take a look:

Given the astounding nature of this video, I am sure you have guessed that this message was in fact not delivered by the real Queen. Despite knowing this, we can’t stop ourselves from wondering whether our eyes have deceived us, even if only for a split second. Such videos are popularly known as DeepFakes, and they are taking the internet by storm.

Observing the increasing popularity of DeepFakes, Turning Magazine is starting the new year with a blog mini-series titled “Seeing was Believing”. The aim of this series is to raise awareness about DeepFakes and how one can fight against them. In Part 1 of this series, I will introduce an up-and-coming area of research called Media Forensics. In Part 2, we will dive into DeepFakes, understanding their origin and their impact on us.

Introduction to Media Forensics

Every year, key innovations dominate the tech space and generate a significant amount of hype, and 2020 was no different. From advances in natural language understanding with OpenAI’s GPT-3 model to the groundbreaking research in protein folding with DeepMind’s AlphaFold [2], it is safe to say that this year has seen some breakthroughs, especially in AI. But not every advancement in AI is for the good of humanity. We are living in an era of misinformation, fueled by fake news and fake media content. And sadly, AI has played a huge role in this. With the rise of DeepFake technology (Figure 1), all of us are left to wonder: is “seeing is believing” even relevant today?

Figure 1. The growing interest in DeepFake technology via Google Trends (keyword = “deepfake”)

Rise of Fake Media Content

Fake media content, whether images or videos, goes back as far as digital media itself. In its simplest form, a fake image or video is nothing but an altered version of the original (real) one, resulting in a depiction that is not true to reality. This fundamental concept has not changed over time; only the techniques and tools employed to make the changes as realistic as possible have. By that definition, should we consider computer-generated movies, or movies that use CGI/VFX, to be fake media too? Well, not exactly. In today’s socio-political environment, the topic of fake media is more nuanced. We also need to consider aspects such as its potential to spread misinformation and cause harm to individuals in society.

Given the rise of AI-assisted generation of fake media content, fake images and videos typically fall into two major categories: CheapFakes and DeepFakes. We already know that DeepFakes are a recent advancement in AI which uses deep neural networks (specifically, generative models) to manipulate original (real) media, and they are getting more realistic than ever. And then there is the other category: CheapFakes. Although the term was coined only recently, this type of fake media has been around for a very long time. CheapFakes are manipulations created with conventional tools such as Adobe Photoshop or even MS Paint. If you cut out a celebrity’s face from a newspaper and stick it onto someone else’s photo in a manner that appears realistic, you have made a CheapFake!

Both types of fake media share an equal potential to cause serious damage to our society and democracy by spreading misinformation. Nina Schick, a leading author in the realm of DeepFakes, discusses this very issue in the MIT Technology Review. She highlights that the year 2020 belonged not just to DeepFakes but also to CheapFakes [3]. However, we are not completely helpless in this fight against misinformation. The spread of fake media brought together a community of researchers who develop new methods and technologies to detect and monitor the use of fake media for malicious purposes. This niche area of research came to be known as Media Forensics. Its relevance grows by the day, as fake media is increasingly used on the internet, especially on social media platforms, to harm or defame individuals.

Impact of AI in Media Forensics

If I had to boil down the crux of media forensics, I would say that this area of research aims to assess the fidelity of any media content in question: whether a certain image or video portrays the truth or misleads its audience. Given the digital nature of media content, pixels are the basic building blocks of images and videos alike (we ignore the audio component for now). This means that to assess the fidelity of digital media, we must investigate the pixels of the image or video in question. And as the tools and techniques for manipulating pixels improve to the point where the result appears genuine to its audience, so must the technology to detect and monitor such manipulations.
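To make this concrete, here is a minimal sketch of pixel-level comparison, assuming we somehow have both the original image and a suspect copy of the same size (the file names are hypothetical). In practice the original is rarely available, which is exactly why forensic methods look for internal inconsistencies rather than direct comparison.

```python
# A toy sketch: locate manipulated pixels by differencing a suspect
# image against its original. File names are hypothetical examples.
import numpy as np
from PIL import Image

# Load both images as signed integers so subtraction cannot overflow.
original = np.asarray(Image.open("original.png").convert("RGB"), dtype=np.int16)
suspect = np.asarray(Image.open("suspect.png").convert("RGB"), dtype=np.int16)

# Per-pixel absolute difference, collapsed across the colour channels.
diff = np.abs(original - suspect).sum(axis=2)

# Any non-zero entry marks a pixel that was altered.
tampered_fraction = (diff > 0).mean()
print(f"{tampered_fraction:.1%} of pixels differ from the original")
```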

In the pre-AI era, such manipulations were made with software tools such as Adobe Photoshop. Operations like copy-move or splicing made it possible to transfer elements of a source image into a target image, creating a fake image that portrays a lie. As real images and videos are captured through a camera, pixel-level manipulations are only possible in post-processing, i.e. after the image or video is available in its digital form. Such manipulations create what researchers in media forensics call artefacts: discrepancies in fake media which can be exploited to design detection technology. But with the use of AI, these artefacts are getting harder to detect through conventional methods, because conventional methods rely on manual, hand-picked features that each represent a certain discrepancy.
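One classic example of such a conventional, hand-crafted technique is Error Level Analysis (ELA). The sketch below (with hypothetical file names) re-saves a JPEG at a known quality and inspects the residue: regions edited after the image’s last save tend to recompress differently and stand out in the difference map.

```python
# A minimal sketch of Error Level Analysis (ELA), a classic
# hand-crafted forensic technique. File names are hypothetical.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90):
    original = Image.open(path).convert("RGB")
    # Re-save the image at a fixed, known JPEG quality.
    original.save("resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("resaved.jpg")
    # Edited regions recompress differently and leave a residue.
    diff = ImageChops.difference(original, resaved)
    # Amplify the residue so it becomes visible to the eye.
    extrema = diff.getextrema()
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

ela_map = error_level_analysis("suspect_image.jpg")
ela_map.save("ela_map.png")  # bright patches hint at manipulation
```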

Now, if such discrepancies evolve to become more intricate, the process of hand-picking features inevitably fails. Hence, media forensics quickly adapted to this paradigm shift in fake media generation and adopted AI itself, developing so-called DeepFake detection techniques to regain the ability to exploit the discrepancies left behind by DeepFakes. Luisa Verdoliva recently published a survey paper which provides a brilliant overview of the ongoing research in media forensics, with a particular focus on DeepFake detection [4]. It is highly recommended for any reader who wants to learn more about this research area.
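To illustrate the idea only (the survey in [4] covers far more sophisticated architectures), here is a minimal sketch in PyTorch of a learned detector: a small convolutional network that takes an image crop and outputs a probability that it is fake, learning its artefact features from data rather than from hand-picked rules.

```python
# An illustrative sketch of a learned DeepFake detector: a tiny CNN
# that classifies image crops as real or fake. Not a production model.
import torch
import torch.nn as nn

class TinyDeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional layers learn artefact features directly from
        # pixels instead of relying on manually hand-picked ones.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)  # one logit: fake vs. real

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = TinyDeepfakeDetector()
crop = torch.randn(1, 3, 128, 128)      # a dummy face crop
prob_fake = torch.sigmoid(model(crop))  # probability the crop is fake
```

In practice such a network would be trained on labelled datasets of real and manipulated faces; the point here is only the shift from hand-picked features to features learned end to end.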

Figure 2. A growing amount of research into DeepFakes (generation as well as detection) via Web of Science (keyword = “deepfake”)

Future of Media Forensics

This era of misinformation has only just begun. With new technologies and social media platforms emerging, we have to accept that anything we now see online should be viewed with a healthy level of skepticism. But as I mentioned before, we are not helpless against this threat. Figure 2 shows a heuristic representation of academia steadily increasing its contribution to media forensics. It is only a matter of time before major tech companies and government bodies fund more research in this field. It might seem like a dark age for truth, but there is definitely a bright future for media forensics.

Note to Reader: This article was Part 1 of the “Seeing was Believing” blog series. Part 2 will cover a more in-depth discussion of DeepFakes: how they originated, how they are used in the world, and more!

References:

[1] BBC News (2020). Deepfake queen to deliver Channel 4 Christmas message. 23 December 2020. https://www.bbc.com/news/technology-55424730

[2] Dormehl, L. (2020). A.I. hit some major milestones in 2020. Here’s a recap. Digital Trends. https://www.digitaltrends.com/features/2020-ai-major-milestones/

[3] Schick, N. (2020). Don’t underestimate the cheapfake. MIT Technology Review. https://www.technologyreview.com/2020/12/22/1015442/cheapfakes-more-political-damage-2020-election-than-deepfakes/

[4] Verdoliva, L. (2020). Media forensics and deepfakes: an overview. arXiv preprint arXiv:2001.06564.