The biggest concerns over AI today are not about dystopian visions of robot overlords controlling humanity. Instead, they’re about machines turbocharging bad human behavior. Social media algorithms are among the most prominent examples.
Take YouTube, which over the years has implemented features and recommendation engines geared toward keeping people glued to their screens. As The New York Times reported in 2019, many content creators on the far right learned that they could tweak their offerings to make them more appealing to the algorithm and drive users to watch progressively more extreme content. YouTube has taken action in response, including efforts to remove hate speech. An independently published study in 2019 claimed that YouTube’s algorithm was doing a good job of discouraging viewers from watching “radicalizing or extremist content.” Yet research published as recently as July 2021 found that YouTube was still sowing division and helping to spread harmful disinformation.
Twitter and Facebook have faced similar controversies, and they’ve taken similar steps to address misinformation and hateful content. But the underlying issue remains: The business objective is to keep users on the platform, and some users and content creators will exploit that business model to push problematic content.