Amplification and Its Discontents

There is a popular line of reasoning in platform regulation discussions today that says, basically, “Platforms aren’t responsible for what their users say, but they are responsible for what the platforms themselves choose to amplify.” This provides a seemingly simple hook for regulating algorithmic amplification—the results for searches on a search engine like Google or within a platform like Wikipedia; the sequence of posts in the newsfeed on a platform like Twitter or Facebook; or the recommended items on a platform like YouTube or Eventbrite. There’s some utility to that framing. In particular, it is useful for people who work at platforms building product features or refining algorithms.

For lawyers or policymakers trying to set rules for disinformation, hate speech, and other harmful or illegal content online, though, focusing on amplification won’t make life any easier. It may increase, rather than decrease, the number of problems to be solved before arriving at well-crafted regulation. Models for regulating amplification have a great deal in common with the more familiar models from intermediary liability law, which defines platforms’ responsibility for content posted by users. As with ordinary intermediary liability laws, the biggest questions may be practical: Who defines the rules for online speech, who enforces them, what incentives do they have, and what outcomes should we expect as a result? And as with those laws, some of the most important considerations—and, ultimately, limits on Congress’s power—come from the First Amendment. Some versions of amplification law would be flatly unconstitutional in the U.S. and would face serious hurdles under human or fundamental rights law in other countries. Others might have a narrow path to constitutionality, but would require a lot more work than anyone has put into them so far. Perhaps after doing that work, we will arrive at wise and nuanced laws regulating amplification. For now, I am largely a skeptic.

In this essay, I will lay out why “regulating amplification” to restrict distribution of harmful or illegal content is hard. My goal in doing so is to keep smart people from wasting their time devising bad laws, and speed the day when we can figure out good ones. I will draw in part on novel regulatory models that are more developed in Europe. My analysis, though, will primarily use U.S. First Amendment law. I will conclude that many models for regulating amplification face serious constitutional hurdles, but that a few—grounded in content-neutral goals, including privacy or competition—may offer paths forward.
