Commentary

Minimizing the Damage of “Deepfakes”

A Reddit user by the name of “deepfakes” has created an AI script that compiles source images of celebrities and superimposes their faces onto pornographic videos. The scary part is that the low-resolution, soundless clips the program generates are virtually indistinguishable from real videos. What’s more, the user built it entirely with open-source machine learning tools that are freely available to the public.

“Oh my!” I thought as I shut off my computer. “This isn’t science fiction anymore; it isn’t even some big-budget Elon Musk Tesla space rocket thing.” With the introduction of the deepfakes app, the idea is that anyone with a decent graphics processing unit can create videos of whomever they want doing whatever their fantasies dictate. Since the program currently requires at least tens of hours of video and thousands of source images to produce a believable clip of about seven minutes, we aren’t quite at that level of technological prowess yet. Nonetheless, it is about time we considered and addressed the implications of this technology.

Recently, mainstream sites like Twitter, Reddit, and even Pornhub have banned the AI-generated fake videos for violating their policies, deeming deepfakes non-consensual, illegal, and “involuntary pornography.” Many internet activists have rejoiced at the content’s removal, flipping a middle finger at the perverts who developed this devil’s plaything and the freaks who wanted to watch Emma Watson strip-teasing. Personally, however, although I believe deepfakes themselves are not a good thing, the spread of the technology used to create them is. Hear me out.

The truth is that this technology has been around for a long time; just look at the mind-boggling CGI in blockbusters! The only thing new about these deepfakes is how accessible the technology has become: what once demanded big-budget professional labor has been reduced to practically a one-button operation. That knowledge came as a shock to many people and has set off a rush of journalists and YouTubers covering the issue.

The realization that such technology is so readily available is a harsh but crucial awakening. When the technology actually falls into evil hands (say, a politician uses deepfakes to fabricate a scandal involving an opponent), the media and the public will be more alert and less prone to manipulation. Victims of blackmail will not have their reputations tarnished without further proof of their alleged actions.

The opposite may be true as well: if someone films an influential figure doing something abhorrent, that figure may try to absolve themselves by claiming the video was faked with machine learning. However, the more the general public knows about the technology behind deepfakes, the better we will become at distinguishing real videos from fake ones. We must actively spread awareness of this technology to curb the damage it could cause.

As computational costs plummet and programs become better optimized, it won’t be long before real becomes impossible to distinguish from fake. Since almost everyone has a Snapchat, Instagram, or Twitter account nowadays, an amateur could feed the program public photos from a target’s social media to create videos for blackmail or revenge. In this sense, though deepfakes are a violation of consent and should be banned, their spread has done us some good: I can at least inform you all about the technology that makes these videos possible.