
Syllabus: Large Language Models, Content Moderation, and Political Communication


This piece will be updated sporadically with additional resources. While we cannot post every link we receive, we encourage the Tech Policy Press community to share material that may be relevant.

With the advent of generative AI systems built on large language models, a variety of actors are experimenting with how to deploy the technology in ways that affect political discourse. This includes the moderation of user-generated content on social media platforms and the use of LLM-powered bots to engage users in discussion for various purposes, from advancing certain political agendas to mitigating conspiracy theories and disinformation. It also includes the political effects of so-called AI assistants, which are in various stages of development and deployment by AI companies. Together, these phenomena may significantly shape political discourse over time.

For instance, content moderation, the process of monitoring and regulating user-generated content on digital platforms, is a notoriously complex and challenging problem. As social media platforms continue to grow, the volume and variety of content that must be moderated have increased dramatically. This has led to significant human costs: content moderators are often exposed to disturbing and traumatic material, which can have severe psychological consequences. Content moderation is also highly contentious, fueling debates over free speech, censorship, and the role of platforms in shaping public discourse. Critics argue that content moderation can be inconsistent, biased, and detrimental to open dialogue, while proponents of better moderation emphasize the need to protect users from harmful content and maintain the integrity of online spaces. With various companies and platforms experimenting with how to apply LLMs to the problem of content moderation, what are the benefits? What are the downsides? And what are the open questions that researchers and journalists should grapple with?
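To make those questions concrete, here is a minimal sketch of what prompt-based moderation with an LLM can look like in practice. The model name, policy text, and ALLOW/FLAG labels below are illustrative assumptions for the sake of the example, not details of any particular platform's system.

```python
# A minimal sketch of prompt-based content moderation with an LLM.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the
# environment; the model name, policy, and labels are illustrative.
from openai import OpenAI

client = OpenAI()

POLICY = """Flag content that contains harassment, credible threats,
or targeted hate speech. Everything else is allowed."""

def moderate(post: str) -> str:
    """Ask the model to label a post ALLOW or FLAG under POLICY."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model could be used
        messages=[
            {"role": "system",
             "content": f"You are a content moderator. Policy:\n{POLICY}\n"
                        "Reply with exactly one word: ALLOW or FLAG."},
            {"role": "user", "content": post},
        ],
        temperature=0,  # reduce label variability for auditability
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(moderate("Great article, thanks for sharing!"))  # expect: ALLOW
```

In deployed systems, a classification step like this would typically sit alongside human review, appeals processes, and audit logs; the sketch shows only the labeling decision, which is where questions of consistency and bias arise.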
