

Submitted by Style Pass, 2025-08-07 11:30:04

A few days ago, OpenAI released an open-source language model for the first time in a very long time. 1 It had been promised for a while, but the deadline kept being pushed back over “safety” concerns. 2

In fact, they’ve put quite a bit of time and effort into discussing safety, 3 because, ostensibly, safety and ethics are at the top of people’s minds.

So, the public is worried about AI ethics, and OpenAI is putting efforts into making sure the AI is ethical. Sounds like a match.

Not just a match, but a great talking point. When the press or the public raises a question or challenge about ethics, the company can point to the work it’s doing on that very subject, and superficially the questioner is shut down.

Except that’s not what people actually mean when they say “ethics”. 4 People are far more concerned with the real-world implications of ethics: governance structures, accountability, how their data is used, jobs being lost, and so on. In other words, they’re not so worried about whether the models will swear or philosophically handle the trolley problem so much as, you know, reality. What happens with the humans running the models? With their influx of power and resources? How might they harm society?

This isn’t the first time this “redefining a legitimate concern” tactic has been used in tech. Way back, in the one thousand nine hundred and 90s, telemarketer calls were even more ubiquitous than they are now, and puzzled recipients would often ask “how did you even get my number?”
