
Phillipian Commentary: Seeing is Not Believing

“We’re entering an era in which our enemies can make it look like anyone is saying anything at any point in time,” Obama told us grimly in a 2018 video. Except it isn’t really him talking. Though his voice and movements seem authentic, the video is an example of a deepfake, a form of AI technology available to the public. Deepfakes, which use artificial intelligence to superimpose one person’s likeness onto another, have risen rapidly in popularity over the last few years. It’s time we paid more attention to their harmful potential so that we can minimize the damage they do.

Given enough video and audio footage, deepfake algorithms transplant one person’s face onto the movements of another. As such, perhaps one of the most significant concerns is political manipulation. I believe speech is one of a politician’s most important assets, since what they say usually has massive influence over their voters. After all, politicians are meant to represent their constituents, meaning that what they say ought to reflect the opinions of the people who voted for them. Now that deepfake technology is advanced enough to closely simulate a person’s speech and movements, it could become easy to discredit a politician with a doctored video of them spouting inflammatory and offensive rhetoric (in the video referenced above, the fake Mr. Obama calls President Trump a “total and complete dipsh*t”). According to Fortune, the number of deepfake videos online has nearly doubled in the last seven months to a total of 14,000. Videos like these, released by malicious actors, could render the already polarized political environment even more chaotic and unpredictable.

However, most deepfake videos are not political in nature. According to Vice, pornography accounts for over 96 percent of all deepfake videos online, presumably all made without the consent of the victim. Although political actors may harness the technology in the future to spread disinformation and distrust, deepfakes are used, as of now, primarily to torment and harass women. Whether the targets are actors or artists, “The New York Times” reports that “the main victims of the fake videos are women.” We all know, or can imagine, how creepy it would be for strangers to take pictures of you; now imagine how exposed and vulnerable victims of deepfakes feel. A celebrity’s image can be tarnished by deepfake videos, but the problem is even worse for ordinary people: if such images or videos of them exist on the internet, potential employers may be less willing to offer them jobs. Take Rana Ayyub, who discovered a deepfake video of herself online after she wrote an article critical of India’s ruling party last year. The video was shared over 200,000 times on Twitter, and she says the traumatizing experience made her “remain silent” for fear that someone would share it again. Not only did the video succeed in harassing her and silencing further criticism of the government, but many people who would never have heard of her otherwise will now find the video, rather than her article, whenever they look her up.

Although various algorithms are being developed to detect deepfakes, technological advancement is a perpetual game of cat and mouse with no true victor. Every time an antivirus program is updated, hackers find a new way to bypass it; in much the same way, forgers will find a loophole every time a new method of detecting deepfakes emerges. Instead, change must occur amongst the general public if it is to occur at all. It is imperative that we first and foremost reinforce how unacceptable it is to create and spread these videos. Andover, for instance, can take the initiative by incorporating this material into its Mentors in Violence Prevention programs or its Empathy, Balance, and Inclusion courses. Though these programs already cover a wide breadth of topics, issues such as the use of doctored photos or videos to blackmail or harass others go unmentioned. Since the internet will only play a bigger role in students’ social lives over the next few years, these topics are critical and must be addressed. And, in any case, some students at Andover may themselves be targeted by such attacks, either in the near future or after they graduate. Deepfakes are, at heart, nothing more than a malicious form of defamation. Let’s do our part by taking a stance against them.