In August, then-Republican presidential candidate Donald Trump posted that Vice President Kamala Harris had shared a fake photo from one of her campaign rallies featuring an AI-generated crowd. Even as reporters and attendees readily verified the existence of the 15,000-person crowd, Trump continued to cast doubt in the days that followed. But with few safeguards on Big Tech’s use and development of generative AI (genAI), there is often no easy way to tell what’s real and what’s fake.
In 2018, law professors Danielle Citron and Robert Chesney warned of the “liar’s dividend,” the phenomenon that results when the spread of deepfakes and similar reality-mimicking technologies makes the public as skeptical of real information as of fake. But six years later, Big Tech has forged ahead, racing to put out genAI tools that have exacerbated the problem. Even as democracy hangs in the balance, Silicon Valley, absent guardrails, has little incentive to change.
For decades now, the law has largely reacted to new tech, sometimes long after that tech has harmed everything from individual social media users (consider how social media has affected youth eating disorders) to democracy itself (think of how targeted advertising has enabled political campaigns to craft messaging for hypertailored audiences). With each new technology that comes out of Silicon Valley, it becomes clearer that reactively legislating and litigating is an insufficient approach to regulating an industry that, without safeguards, threatens democracy and public welfare.