What Should We Do About AI-Generated Deepfakes?


In this Future View, students were asked to discuss AI deepfakes and the potential consequences they could bring if left unchecked. Deepfakes are computer-generated images and videos that superimpose one person's face onto another's body. While some of these deepfakes have gone viral as entertainment, their potential to confuse the public is worrying.

Torin Christensen, University of Southern Virginia, observed that photo manipulation has been around since the Civil War, yet the public has survived earlier waves of misinformation. He pointed out that the people most likely to be swayed by deepfakes are those in the middle of the political spectrum: moderates and swing voters may change their preferences if the content seems plausible.

Dillon Prochnicki, Georgetown University, also focused on the potential for deepfakes to influence opinions. He argued that if convincing deepfakes can manipulate public perception, our ability to choose effective representatives will suffer, and that deepfakes are more likely to appeal to extremists, making their ideological bubbles even more conspiratorial.

Jacob Ward, a University of Utah student, believes the long-term effects of deepfakes are overblown. He argues that the need for authenticity can be met with nonfungible tokens (NFTs): unique digital objects that, through their distinctive code, can authenticate online content by verifying its original creator.

In conclusion, deepfakes can exert political influence when they play on existing suspicions, and can swing the opinions of moderates when the content seems plausible. The potential for malicious actors to use AI deepfakes to stir up public anger is concerning, and we must prioritize ways to authenticate what we see online before that trust erodes further.