The Horrific Content a Kenyan Worker Had to See While Training ChatGPT

Submitted by
Style Pass
2023-05-29 11:00:03

Richard Mathenge felt he’d landed the perfect role when he started training OpenAI’s GPT model in 2021. After years of working in customer service in Nairobi, Kenya, he was finally involved in something that felt meaningful and held a future for him. But the position left him scarred. For nine hours per day, five days a week, Mathenge led a team that taught the A.I. model about explicit content. The goal was to train it so it could keep such things away from users. Today, it remains stuck with him.

While at work, Mathenge and his team repeatedly viewed explicit text and labeled it for the model. They could categorize the content, the provenance of which was unclear, as child sexual abuse material, erotic sexual content, illegal, nonsexual, or some other options. Much of what they read horrified them. One passage, Mathenge said, described a father having sex with an animal in front of his child; others involved scenes of child rape. Some were so offensive Mathenge refused to speak of them. “Unimaginable,” he told me.

The type of work Mathenge performed has been crucial for bots like ChatGPT and Google’s Bard to function and to feel so magical. But the human cost of the effort has been widely overlooked. In a process called “Reinforcement Learning from Human Feedback,” or RLHF, bots become smarter as humans label content, teaching them how to optimize based on that feedback. A.I. leaders, including OpenAI’s Sam Altman, have praised the practice’s technical effectiveness, yet they rarely talk about the cost some humans pay to align the A.I. systems with our values. Mathenge and his colleagues were on the business end of that reality.