It’s Getting Harder to Spot a Deep Fake Video (Bloomberg)
A deepfake video or audio clip can easily be mistaken for a genuine recording, and such videos are appearing more and more often, usually involving outlandish stories about celebrities, famous business people or politicians. The problem is that they are so realistic it can be impossible to tell them apart from the real thing, fuelling an explosion in fake news. Worse, they can be put to less-than-wholesome purposes: to disrupt, discredit and deceive.
The term deepfake was coined by combining “fake” with “deep learning”, the branch of artificial intelligence (AI) used to produce the video and audio. Although the ability to do this was once limited to the secret services and Hollywood special-effects departments, anyone with a little technical ability can now go online, download the software and start producing deepfake videos.
The use of machine learning, AI and deepfake technology sounds like something from a sci-fi movie, but today it really can be as easy as downloading some software from the internet. The videos are doctored to show someone doing or saying something that in reality never happened, and even home-made deepfake videos can be extremely convincing.
Of course, the technology can be used fairly innocently too, for example making people appear to say silly things or pasting someone’s face onto a different body, but sadly it is just as easy to create a deepfake video or audio clip that can destroy lives. This may sound dramatic, but imagine the panic if an emergency broadcast suddenly announced that a missile attack or natural disaster was imminent. And what would happen if someone superimposed your face onto a compromising video and sent it to a loved one?
In political terms, these videos and other deepfake techniques can be used to sway public opinion for or against anyone. If a deepfake drops a few days before an election showing a politician doing something illegal or immoral, it is not hard to see how that could affect the result.
Exploiting human psychology through deepfake technology can give people with malicious intent a great deal of influence. We already know how rapidly false news spreads online, often faster than fact-checking sites can keep up. People see this news, accept it as fact, and by the time the truth comes out they have moved on to the next thing.
Deepfakes are produced using generative adversarial networks (GANs), which set two machine learning models against each other. One model, the generator, uses a data set to create a forgery; the other, the discriminator, tries to detect it. The more data there is in the training set available to the forging model, the better and more believable the deepfake will be. Early demonstrations used videos of Hollywood celebrities and politicians to show how easily they could be generated; now there are even tutorials available online to teach anyone how to produce a potentially dangerous deepfake video.
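The adversarial tug-of-war described above can be sketched in miniature. The toy below is an illustration only, not anything from the article or a real deepfake pipeline: a one-dimensional “forger” (a linear generator) learns to imitate samples from a target distribution by chasing the approval of a logistic-regression “detector”, with both sides updated by hand-derived gradient steps.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Clip to avoid overflow warnings for extreme inputs.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60.0, 60.0)))

def real_batch(n):
    # The "genuine recordings" the forger wants to imitate: N(4, 1).
    return rng.normal(4.0, 1.0, n)

# Generator (forger): x = a*z + b, with noise z ~ N(0, 1).
a, b = 1.0, 0.0
# Discriminator (detector): D(x) = sigmoid(w*x + c), P(x is real).
w, c = 0.0, 0.0
lr, steps, n = 0.02, 4000, 64

for _ in range(steps):
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b
    real = real_batch(n)

    # Detector: gradient ascent on log D(real) + log(1 - D(fake)).
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1.0 - dr) * real - df * fake)
    c += lr * np.mean((1.0 - dr) - df)

    # Forger: gradient ascent on log D(fake) (non-saturating loss),
    # i.e. push fakes toward regions the detector calls real.
    df = sigmoid(w * fake + c)
    a += lr * np.mean((1.0 - df) * w * z)
    b += lr * np.mean((1.0 - df) * w)

# After training, the forger's output mean has drifted toward the
# real mean of 4.0, even though it never saw that number directly.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 2000) + b))
```

The same dynamic, scaled up from two scalars per player to deep convolutional networks trained on thousands of face images, is what makes the article’s point about training-set size: the more real footage the generator sees, the closer its forgeries get to the genuine distribution.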
Deepfake technology is also being used in some social media apps. Face-morphing and face-swap features, for example, have proved popular among social media users, and free tools like FakeApp now allow anyone to seamlessly manipulate images with very little sign that they have been tampered with.
So how easy is it to spot one? The answer is, ‘with great difficulty’: detecting good-quality deepfakes can be very hard. Amateurish attempts are obvious, of course, and sophisticated tools can pick up some of the other, less obvious, signs that a video has been manipulated. However, because the GANs used are constantly improving, soon the only way to detect them will be through digital forensics, and even then there is no guarantee of success.
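To give a flavour of what “digital forensics” means here, the sketch below shows one classic consistency check, greatly simplified and entirely my own illustration rather than any tool mentioned in the article: material pasted into a frame from other footage often carries a different sensor-noise level than its surroundings, so mapping the variance of a high-pass residual block by block can flag the spliced region.

```python
import numpy as np

def residual_variance_map(img, block=16):
    # High-pass residual: subtract a simple 3x3 box blur, keeping
    # only the fine-grained noise that cameras stamp onto frames.
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    blur = sum(padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    residual = img - blur
    # Variance of that residual in each block: pasted-in content
    # with a different noise level stands out from its neighbours.
    rows, cols = h // block, w // block
    vmap = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            vmap[r, c] = residual[r * block:(r + 1) * block,
                                  c * block:(c + 1) * block].var()
    return vmap

# Synthetic demo: a noisy 64x64 "frame" with one smoother 16x16
# patch spliced in, standing in for resampled, pasted-on content.
rng = np.random.default_rng(1)
frame = rng.normal(0.0, 10.0, (64, 64))
frame[16:32, 32:48] = rng.normal(0.0, 2.0, (16, 16))

vmap = residual_variance_map(frame)
suspect = np.unravel_index(np.argmin(vmap), vmap.shape)  # (row, col) block
```

The article’s caveat applies in full: a GAN can be trained to reproduce exactly this kind of noise statistic, which is why each forensic signal tends to work only until the forgers learn about it.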
The Defense Advanced Research Projects Agency (DARPA) is now funding research into new methods of identifying deepfake videos. However, it seems that GANs can learn to bypass this type of detection, so the outcome of the research is uncertain.
If we cannot find a way of establishing what is and is not fake, we will soon be unable to trust anything we see. The internet now pervades every area of our lives; deepfakes will reach into our homes and workplaces, and there may no longer be any such concept as ‘the truth’. One concern is that over time we will lose all sense of reality, though many people believe this claim is exaggerated. Perhaps our greatest advantage is the sheer amount of hype around deepfake technology, which has made us more aware, and that awareness could well be our only protection.
Politicians are famous for telling lies, so they are probably less vulnerable to deepfake technology than many other people. A more credible threat is the creation of fake pornography, in which an innocent person’s head is superimposed onto a body in a porn movie. All someone needs are images and video of an individual, perhaps taken from a social media platform, which can then be superimposed onto the relevant pornographic material.
Many celebrities have already had their heads superimposed onto various porn stars in compromising positions, and have expressed considerable upset and horror. These are people who are used to the glare of the spotlight; imagine how distressing it could be for an ordinary member of the public.
Deepfake technology can indeed be used to deceive populations, but it is also well suited to those who want to bully or harass others. This is probably the most likely use case: rather than undermining democracy, it will be used to ruin the lives of innocent people.