Rozado’s Visual Analytics

Recent studies have suggested that large language models (LLMs)—like OpenAI’s ChatGPT or Google’s Gemini—might exhibit political biases. These studies typically rely on standardized political orientation tests to assess such biases. However, forcing an AI system to select one response from a predefined set of allowed answers to a query does not accurately reflect how typical users interact with AI systems. Political bias in LLMs is likely to manifest in more nuanced ways in long-form, open-ended AI-generated content. In any case, any single method of probing for political bias in AI systems is open to criticism.

In a new report, I address these challenges by combining four different methodologies into a single aggregated score of AI systems’ political bias. This integrative strategy mitigates the individual weaknesses of each approach and perhaps offers a more robust measurement of political bias in AI.
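The report does not publish its aggregation code, but the idea can be sketched simply: standardize each methodology’s scores so that no single probe dominates the composite, then average across methodologies per model. The sketch below assumes z-score standardization and equal weighting, and all model names and raw scores are hypothetical placeholders, not figures from the report.

```python
# Minimal sketch of combining several bias probes into one composite score.
# Assumes z-score standardization and equal weights; the report's actual
# weighting scheme may differ. All values below are hypothetical.
import statistics

def zscores(values):
    """Standardize a list of raw scores to mean 0, standard deviation 1."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [(v - mean) / stdev for v in values]

# Rows: models; columns: four independent bias-measurement methodologies.
raw_scores = {
    "model_a": [0.62, 0.55, 0.70, 0.58],
    "model_b": [0.48, 0.51, 0.45, 0.50],
    "model_c": [0.30, 0.35, 0.28, 0.33],
}

models = list(raw_scores)
columns = list(zip(*raw_scores.values()))             # one column per method
standardized = [zscores(list(col)) for col in columns]
per_model = list(zip(*standardized))                  # back to one row per model

# Equal-weight average of the standardized scores = aggregated bias score.
aggregated = {m: statistics.fmean(s) for m, s in zip(models, per_model)}
for model, score in aggregated.items():
    print(f"{model}: {score:+.2f}")
```

Standardizing before averaging matters because the four methodologies report scores on different scales; averaging raw values would let the widest-ranging probe dominate the composite.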

Base LLMs (Foundation LLMs): Models pretrained from scratch to predict the next token in a sequence over large corpora of raw web documents. Base LLMs tend not to follow user instructions well and thus are not typically deployed to interact with humans. Instead, base models serve as a starting point for developing conversational LLMs.
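To make that pretraining objective concrete, here is a toy sketch of next-token cross-entropy in PyTorch. The random logits stand in for a real model’s output, and the toy vocabulary is a placeholder; nothing here comes from the report itself.

```python
# Toy illustration of the next-token-prediction objective used to
# pretrain base LLMs. Random logits stand in for a real model.
import torch
import torch.nn.functional as F

vocab_size = 100
tokens = torch.randint(0, vocab_size, (1, 16))  # one toy token sequence

# A real model would map tokens[:, :-1] to logits; we fake them here.
logits = torch.randn(1, 15, vocab_size)

# Each position is trained to predict the *next* token in the sequence.
targets = tokens[:, 1:]
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
print(f"next-token cross-entropy: {loss.item():.3f}")
```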
