Disturbed Images
Lamia Priestley October 1, 2024
A roll of belly fat melts into a makeup-caked face; a bag of chips morphs into a family portrait; a butt cheek transforms into a policeman’s bicep. Gross and sickly, loud and pink, Frank Manzano’s collection of video works, Current Value (2023), uses AI imagery to depict the grotesque in everyday scenes of American suburban life—fistfights, plastic surgery, arrests.
Rapid cuts between faces result in pile-ups of interchangeable characters. An endless treadmill of trash, plastic consumer products, and open mouths, the videos in their choppiness create what the Chicago-based artist describes as “the human parade”—a crazed illustration of people as stuff. Manzano describes his work as an exploration of “consumerism, massification, the loss of the self.” These themes are felt not just in the works’ subject matter but in the evidence of the mass-market AI tools Manzano uses to make many of his images.
Manzano has a fondness for corpulent characters: big butts, cellulite, stretched thighs and toothy smiles. Outside of his focus on flesh, the videos themselves exert a materiality in their reference to the aesthetics of consumer visual culture and its artefacts. There are the digital lines of security footage, the jacked-up saturation of reality TV, the studio lighting of an 80s sitcom, the low-res, crunchy feel of camcorder home videos. The images look either consumer-grade (camcorder) or like something used to capture consumers (CCTV). But amongst the visual styles referenced in Manzano’s work, one can distinguish something entirely new: the artefacts of AI images.
To create this pastiche, Manzano uses a combination of his own photos, sourced images and ones he generates with AI image tools like Wombo. On a visual level, the artefacts of AI images—the airbrushed smooth skin, confused edges between fingers, gibberish logos and half-baked eyes—contribute to a feeling of the uncanny in these illustrations of the American underbelly. More than just a formal contribution, though, these artefacts place the images into a context. Viewers who have developed a familiarity with AI-generated images online over the past few years will recognise them here and have some understanding of the process by which they are created: from a dataset of preexisting images. It’s clear that although Current Value’s images have the trademarks of documentary or recorded reality, many of them aren’t real.
“There’s something profane in the limitation of a finite dataset. The images have no hope of transcendence.”
The disturbing nature of Manzano’s videos is due not to this irreality but to the images’ self-aware embrace of their artificial generation. The AI artefacts are significant because of what they mean in the context of Manzano’s unaspiring video world. Not only are his subjects debased but so too are the images—a perfect marriage of subject and form.
The philosopher Hannes Bajohr offers a useful framework for understanding this earthbound quality of AI images in his article “Algorithmic Empathy: Toward a Critique of Aesthetic AI” (2022). Bajohr advocates interpreting AI art on its own terms, looking at how it’s made—its “technical substrate”—to develop its aesthetic critique. He draws a parallel between artificial neural networks and the ancient aesthetic principle of mimesis—the attempt to imitate or reproduce reality in the creation of art—and outlines two opposing concepts of imitation as it relates to AI images. The first, which he attributes to the philosopher Hans Blumenberg, describes imitation as construction: “the approximation of an existing state through the inference of the rules that bring it about.” The second concept, which better describes AI image making, is “imitatio naturae” (imitation of nature). A classical idea repopularised in the Renaissance, “imitatio naturae” sees imitation as mere repetition of the real, without the “procedural insight” of imitation as construction. In the case of AI image making, “nature” would be the dataset: the source from which all representations are derived.
This first approach to imitation, that of construction, implies the possibility of depicting something new. Bajohr emphasises that with knowledge of a thing’s creation—of its building blocks—it becomes possible to move beyond that thing, whereas in “imitatio naturae” the representation derives directly from the thing itself. Nature, and so the dataset, is the absolute resource. An artificial neural network can’t truly imagine anything beyond its own dataset, never anything outside of that which has already been represented.
There’s something profane in the limitation of a finite dataset. The images have no hope of transcendence. Image generators are so far unable to replicate the mysterious process by which a great artist goes about transforming an ordinary landscape into an image that might produce ineffable revelations in its viewers. The artist—studying how light falls, the relationships of colours, the phenomenon of perspective—might inexplicably assemble a few strokes of paint to reveal something much greater than valleys, woods, hills and streams, much greater than nature.
Manzano’s images are trapped. His subjects lead unambitious lives, marginalised by a cycle of consumerism, greed, lust, violence, and vanity; they're governed by their instincts, unable to escape themselves. So too, AI images exist only in immanence. Image generators simulate master artists’ styles from the past, merely recycling them, destined to make unambitious copies.
Current Value’s images are provocations. They’re affective, disturbing representations of the gutters of material culture because they themselves belong there, unable to dream themselves out.
Lamia Priestley is an art historian, writer and researcher working at the intersection of art, fashion and technology. With a background in Italian Renaissance Art, Lamia is currently the Artist Liaison at the digital fashion house DRAUP, where she works with artists to produce generative digital collections.