
The Double Exploitation of Deepfake Porn


Over the past three years, celebrities have been appearing across social media in improbable scenarios. You may have recently caught a grinning Tom Cruise doing magic tricks with a coin or Nicolas Cage appearing as Lois Lane in Man of Steel. Most of us now recognize these clips as deepfakes—startlingly realistic videos created using artificial intelligence. In 2017, they began circulating on message boards like Reddit as altered videos from anonymous users; the term is a portmanteau of “deep learning”—the process used to train an algorithm to doctor a scene—and “fake.” Deepfakes once required working knowledge of AI-enabled technology, but today, anyone can make their own using free software like FakeApp or Faceswap. All it takes is some sample footage and a large data set of photos (one reason celebrities are targeted is the easy availability of high-quality facial images), and the app can convincingly swap out one person’s face for another’s.
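For readers curious about the mechanics of that swap, tools in the Faceswap family are commonly built around an autoencoder with one shared encoder and two person-specific decoders: the encoder learns the pose and expression information common to both faces, while each decoder learns to render one identity. The PyTorch sketch below is a minimal illustration of that idea, not the actual FakeApp or Faceswap code; the layer sizes, names, and 64-by-64 face crops are all assumptions made for the example.

```python
import torch
import torch.nn as nn

# Illustrative sketch of a face-swap autoencoder: one shared encoder,
# two decoders (one per identity). All shapes and names are assumptions.

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1),    # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 256),  # shared latent code: pose/expression
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),    # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a = Decoder()  # learns to render person A's face
decoder_b = Decoder()  # learns to render person B's face
loss_fn = nn.L1Loss()
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=5e-5,
)

def train_step(faces_a, faces_b):
    """One step: each decoder reconstructs its own person from the shared latent."""
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()
    return loss.item()

# The swap happens at inference time: encode a frame of person A, then decode
# it with person B's decoder, so B's face appears with A's pose and expression.
# fake_b = decoder_b(encoder(frame_of_a))
```

Note that training only ever asks each decoder to rebuild its own person; because both identities pass through the same encoder, feeding one person's frame into the other person's decoder at inference time is what produces the swap.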

To date, mainstream reporting on deepfakes has emphasized their political danger. Outfits from the Washington Post to the Guardian have warned that the videos could, by eroding trust in media, create chaos. For Forbes, deepfakes threaten to be “a widely destructive political and social force.” Yet, in more than three years of the practice, not a single credible disinformation effort has been linked to the technology. Political deepfakes certainly exist. In one video, an AI-generated Barack Obama calls Donald Trump “a total and complete dipshit.” In Belgium, a political party circulated a deepfake of Trump mocking the country’s participation in the Paris climate agreement. Here in Canada, one user took footage of a Trump speech and replaced the former president’s face with that of Ontario premier Doug Ford. While these examples caused a stir, none presented a genuine national security risk. This is not to say that these fears are completely unfounded. The breakneck speed at which deepfakes are improving—often in disturbing new directions, including cloning voices—makes it possible that they will be successfully weaponized politically. For the moment, however, they are not being used as feared. In warning about a crisis that doesn’t yet exist, headlines are erasing the damaging way the technology is actually being deployed: almost entirely to manufacture pornography.
