
Why democracy belongs in artificial intelligence

By Josh Simons and Eli Frankel, February 21, 2023

In the last year, the art and literature worlds have seen new and rather unorthodox debuts. An up-and-coming writer has churned out poetry, fiction, and essays at unprecedented speed, and several new visual artists have been generating otherworldly images and portraits. These artists are not people but artificial intelligence systems that appear, on the surface at least, to be genuinely intelligent.

Appearances can be deceptive, though. Behind the venture funding, sleek keynotes, and San Francisco high-rises that produce systems like ChatGPT lies a more straightforward kind of reasoning: prediction. Try typing something into ChatGPT. What you see is not a system that understands, internalizes, and processes your request before producing a response. The output is generated by a neural network, layers of algorithms that have learned to predict useful outcomes from all the text on the web. It looks like understanding, like watching an original author at work, but it isn’t. It is prediction, an exercise in mimicry. Even some of the most complicated “AI” systems out there are really powerful forms of machine learning, in which algorithms learn to predict particular outcomes from the patterns and structures in enormous datasets.
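To make the point concrete, here is a deliberately tiny sketch of that predictive mechanism in Python. It is a toy bigram model, not how ChatGPT is actually built (the real systems are transformer neural networks trained on vastly more text), but the principle is the same: the program “writes” by repeatedly guessing a plausible next word from patterns it has counted, with no grasp of what the words mean.

    import random
    from collections import defaultdict

    # A toy corpus; real systems learn from a large fraction of the web.
    corpus = ("the model predicts the next word the model does not "
              "understand the words it only counts which words tend "
              "to follow which").split()

    # Record which words have been observed to follow each word.
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def generate(seed, length=12):
        """'Write' by repeatedly sampling a plausible next word."""
        words = [seed]
        for _ in range(length):
            options = follows.get(words[-1])
            if not options:  # no observed continuation; stop
                break
            words.append(random.choice(options))
        return " ".join(words)

    print(generate("the"))  # fluent-looking output, produced purely by mimicry

Scaled up by many orders of magnitude, and with word counts swapped for learned neural-network weights, this is the family of technique at issue: pattern-matching prediction that can look, from the outside, like authorship.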

This matters because most of the real harms these systems can cause, but also the opportunities they can afford, have nothing to do with robots taking over the world or self-generating AI systems. They have to do with what, how, when, and why we should use powerful predictive tools in the decision-making systems of our political, social, and economic organizations. How, if at all, should we use prediction to decide who gets a loan or mortgage, which neighborhoods police officers are sent to, which allegations of child abuse and neglect to investigate, or which posts to remove or results to display on Facebook? We shouldn’t expect the answers to be the same across different questions. The moral and political questions raised by data-driven prediction in policing are often very different from those raised by its use in the allocation of credit, and both differ again from those raised by its use to shape and moderate the public sphere. This means the policy solutions we develop to regulate organizations that use data to make decisions, whether with simple linear models, machine learning, or perhaps even AI, should be quite different in policing, finance, and social media.
