Mirror Material at Southern Exposure (SF)

I’m honored that my AI-inspired painting Frequency Shift has been included in the Mirror Material group show at Southern Exposure, the nearly 50-year-old artist-centered non-profit space in San Francisco’s Mission District. The show is curated by the national artist-run collective Tiger Strikes Asteroid and is up Oct 22 - Nov 19, 2022.

Photo credit: Minoosh Zomorodinia

The approaches of the fifteen artists in the show are diverse, all connecting to the curators’ theme of “pushing against strict categories of media and identity to playfully explore what it means to inhabit multiplicity.” My painting explores the multiple layers of perception in human and artificial intelligence, literally layering two perceptual processes atop one another.

The painting’s foreground highlights the algorithmic similarities between brains and computers in transforming light into visual experience: it is an array of square filter gradients that vary in spatial frequency and phase, referencing the “edge detection” filters present in the early layers of both biological and artificial neural networks.
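(For the technically curious, here is a minimal sketch, not the process behind the painting itself, of how a grid of square patches varying in spatial frequency and phase can be generated. The patch size, frequencies, and phases are purely illustrative.)

```python
import numpy as np

def gabor_patch(size, frequency, phase, theta=0.0, sigma=None):
    """A square sinusoidal gradient under a Gaussian envelope: the classic
    oriented 'edge detector' receptive field seen both in early visual cortex
    and in the first convolutional layers of image-recognition networks."""
    sigma = sigma or size / 5.0
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)             # axis along which the gradient varies
    carrier = np.cos(2 * np.pi * frequency * xr + phase)   # spatial frequency and phase
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))     # soft falloff toward the patch edges
    return carrier * envelope

# Tile patches into an array: rows vary spatial frequency, columns vary phase.
frequencies = [0.05, 0.10, 0.20]   # cycles per pixel (illustrative values)
phases = [0.0, np.pi / 2, np.pi]
grid = np.block([[gabor_patch(64, f, p) for p in phases] for f in frequencies])
print(grid.shape)  # (192, 192)
```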

For the background imagery, I used an AI segmentation algorithm to partition a landscape image into separately defined visual regions, akin to the way an artist may squint to see the forms in an image. The output of a segmentation algorithm is a flat color-coded map, reminiscent of paint-by-numbers, a pre-computer algorithmic approach to human painting. My initial vision was to use a landscape photo from my everyday life, but as I learned to use semantic segmentation algorithms, I kept getting more pleasing tree segmentations on my test image, the 1565 painting The Harvesters by Bruegel. (I stumbled upon this test image after reading a nice technical tutorial which used it as input to a segmentation algorithm.)
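For readers curious how this looks in practice, below is a minimal sketch of the general approach rather than my exact pipeline: a publicly available SegFormer checkpoint fine-tuned on ADE20K is run over an image file, and each predicted class is filled with a flat color to produce the paint-by-numbers-style map. The file names and the random palette are illustrative assumptions.

```python
import numpy as np
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

# A publicly available checkpoint fine-tuned on ADE20K; any ADE20K-trained
# segmentation model would illustrate the same idea.
checkpoint = "nvidia/segformer-b0-finetuned-ade-512-512"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)

image = Image.open("the_harvesters.jpg").convert("RGB")  # hypothetical local file
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits                       # (1, num_classes, h, w)

# Upsample to the original resolution and take the most likely class per pixel.
logits = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
labels = logits.argmax(dim=1)[0].numpy()                  # (H, W) integer class map

# Paint-by-numbers: give each class index its own flat color.
rng = np.random.default_rng(0)
palette = rng.integers(0, 255, size=(model.config.num_labels, 3), dtype=np.uint8)
Image.fromarray(palette[labels]).save("harvesters_segmented.png")
```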

The reasons why this Bruegel landscape painting worked better than my landscape photo aren't entirely clear. It’s known that AI algorithms operate best on images similar to those they were trained on, and things can go awry when they are deployed in novel situations "in the wild" (which was the inspiration for my series name "Edge Detectors in the Wild"). These segmentation algorithms tend to be trained on very modern, often indoor or urban datasets (I used the ADE20k training set), with little focus on nature, because they are generally built for applications like self-driving cars and security cameras.

As I spend more time with The Harvesters, which The Met calls “the first modern landscape painting,” I am reflecting on how Bruegel was working in an era when painting scenes from ordinary life, such as harvesters in the field, was a novelty, and how his work would have been ideal training input for AI algorithms in the pastoral reality of 1565. Further, in an inversion of Bruegel, who translated a 3D scene into a 2D painting using his era’s new innovation of linear perspective, today’s new innovation of AI vision algorithms attempts to infer a 3D scene from a 2D image. Hence, I keep coming back to the tree from the segmented Bruegel painting, and have ended up making a dozen artworks inspired by it.
