In June, I wrote that to build trust, platforms should try a little more democracy. Instead of relying solely on their own employees, advisory councils, and oversight boards, I wrote, tech companies should involve actual users in the process. Citing the work of Aviv Ovadya, a technologist who recently published a paper on what he calls “platform democracy,” I suggested that social networks could build trust by inviting average people into the policymaking process.
I didn’t know it at the time, but Meta had recently finished a series of experiments that tried to do just that. From February to April, the company convened three groups of users across five countries to answer the question: what should Meta do about problematic climate information on Facebook?
The question came as watchdogs increasingly scrutinize the company’s approach to moderating misleading information about the environment. Last year, the Guardian reported on an analysis by the environmental group Stop Funding Heat that found 45,000 posts downplaying or denying the climate crisis. And in February, after Meta promised to label climate misinformation, a report from the watchdog group Center for Countering Digital Hate found that “the platform only labeled about half of the posts promoting articles from the world's leading publishers of climate denial,” according to NPR.