If any image could be described as baleful, Trevor Paglen’s “Rainbow” would fit the bill. Apart from its toxic-looking “sky,” parts of the image appear to have mutated into the fiery traces of munitions or, more cryptically, a sequence of glitches. The full title of the work, “Rainbow (Corpus: Omens and Portents),” suggests a conjunction of natural phenomena and a physical, possibly dead, body (corpus/corpse), further reinforcing the overall impression of estrangement and trepidation.
Along with other works in Paglen’s Adversarially Evolved Hallucinations series (2017–ongoing), including the monstrous “Vampire (Corpus: Monsters of Capitalism)” and “Human Eyes (Corpus: The Humans)”, the latter complete with the apparition of deceptively unseeing eyes, “Rainbow” was produced by a generative adversarial network (GAN): a class of AI model in which two neural networks are trained against each other, one learning to recognize and classify images, the other, crucially, learning to generate new ones. Because AI image-processing models do not experience the world as we do, but instead replicate a once-removed and askew version of it, the images they produce read as disquieting, computationally generated allegories of our world. Emerging from an embryonic space of automated image production, images such as “Rainbow” disclose what is usually hidden or otherwise obscured: an apparition, or a nightmare, indebted to the hallucinatory, often erroneous logic of the algorithms that power AI. Often dismissed as a fault or glitch, this logic is a fundamental aspect of AI, not merely a side effect. All of which leads to an overarching question: Do these hallucinatory models of AI-powered image production have the capacity to further estrange, if not profoundly alienate, us from the world?
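For readers curious about the mechanics, here is a minimal sketch of the adversarial training loop at the heart of a GAN. Everything in it (the network sizes, the random stand-in data, the hyperparameters) is an illustrative assumption written in PyTorch, not a detail of Paglen’s actual pipeline.

```python
# Minimal GAN sketch. All sizes and settings are illustrative
# assumptions, not details of Paglen's working method.
import torch
import torch.nn as nn

LATENT_DIM = 64    # size of the random "seed" the generator starts from
IMG_DIM = 28 * 28  # flattened image size (assumed, for simplicity)

# Generator: maps random noise to a synthetic image.
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: classifies an image as real (1) or generated (0).
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial round: D learns to spot fakes, G learns to fool D."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: recognize and classify real versus generated.
    noise = torch.randn(batch, LATENT_DIM)
    fakes = G(noise).detach()  # do not backpropagate into G here
    d_loss = (loss_fn(D(real_images), real_labels)
              + loss_fn(D(fakes), fake_labels))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator step: produce images the discriminator mistakes for real.
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(D(G(noise)), real_labels)  # reward fooling D
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()

# Toy usage, with random tensors standing in for a training corpus:
train_step(torch.rand(32, IMG_DIM) * 2 - 1)
```

The point of the sketch is the adversarial dynamic itself: the generator never sees a real image directly, only the discriminator’s verdict on its fakes, which is one way to understand the once-removed and askew relation to the world described above.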
An acclaimed artist and a recipient of a MacArthur Fellowship, Trevor Paglen has consistently engaged with the invisible and the occluded in our world, including, in his words, “the invisible visual culture” of machine-made images that remain indecipherable to the human eye.