Artificial intelligence in its current form is based on the wholesale appropriation of existing culture, and the notion that it is actually intelligent could be actively dangerous.
In January 2021, the artificial intelligence research laboratory OpenAI gave a limited release to a piece of software called Dall-E. The software allowed users to enter a simple description of an image they had in their mind and, after a brief pause, the software would produce an almost uncannily good interpretation of their suggestion, worthy of a jobbing illustrator or Adobe-proficient designer – but much faster, and for free. Typing in, for example, “a pig with wings flying over the moon, illustrated by Antoine de Saint-Exupéry” resulted, after a minute or two of processing, in something reminiscent of the patchy but recognisable watercolour brushes of the creator of The Little Prince.
A year or so later, when the software got a wider release, the internet went wild. Social media was flooded with all sorts of bizarre and wondrous creations, an exuberant hodgepodge of fantasies and artistic styles. And a few months later it happened again, this time with language, and a product called ChatGPT, also produced by OpenAI. Ask ChatGPT to produce a summary of the Book of Job in the style of the poet Allen Ginsberg and it would come up with a reasonable attempt in a few seconds. Ask it to render Ginsberg’s poem Howl in the form of a management consultant’s slide deck presentation and it would do that too. The ability of these programs to conjure up strange new worlds in words and pictures alike entranced the public. The desire to have a go oneself produced a growing literature on the ins and outs of making the best use of these tools – and particularly on how to structure inputs to get the most interesting outcomes.