By Will Fenstermaker
January 30, 2021
In the Digital Age, photographer Ernst Fischer undermines algorithms by overwhelming them with information. His body of work in microphotography and cinematography is concerned with the limits of the camera and the computer, apparatuses that, in the end, are just machines. They cannot reproduce human vision.
In a 2016 studio visit, critic Will Fenstermaker interviewed Fischer about his work with algorithms, and especially “the idea of an image that’s holy because it was made without human hands.” They discuss Fischer’s microphotography that was on view in 18%, a solo exhibition at CUE Art Foundation in 2015, as well as his contemporaneous work that focused on the Shroud of Turin. At the crux are questions about freedom, cybernetic surveillance, and iconography.
Today, as society grapples with the limits of algorithms and their power in shaping culture, a new type of governance is taking shape to regulate the circulation of images, where unelected social media officials, like Facebook’s “supreme court,” determine who has access to images and in what context. This interview with Fischer extrapolates one model for breaking the rules to change the script, reminding the reader that beyond these algorithmically predetermined landscapes there are other possible worlds.
—Lune Ames, Managing Editor
When I met Ernst Fischer in his shared Sunset Park studio, he had just traded one of his photographs for two bottles of homemade mead, which he opened upon my arrival. Over the first few glasses, I told him how I came across his work in 2015 at CUE Art Foundation, and he explained how he moved from filmmaking to photography in his mid-twenties by falling into advertising work. After working for years in London and Berlin, where he grew tired of the pressure to print his photographs in series, Fischer came to New York to undermine his own practice.
In the corner of his studio, Fischer showed me the microphotography rig he built to make the images exhibited at CUE in his solo exhibition titled 18%. Photographs of minerals that appear to drip out of their frames—in one, a cubic shard of lead disintegrates into a fractal rendering, as if a microscope had looked and caught molecules in the midst of rearranging. Visions of tectonic fabric constructed through digital processes. Constructing the machine that gave sight to these visions was a “full-on arms race,” he said. The hulking contraption of disassembled cameras, repurposed bellows, microscope lenses, and tracks for sliding the various components into place is used only for the first step of Fischer’s work. Much of his work, including series begun after 18%, is done on the computer, where he subjects the raw data of his photographs to all sorts of contortion and manipulation. He forces the images through algorithms, undermining what they’re designed to do, and feeds the machine information it’s not equipped to handle, attempting to expose the inner workings of programs we take for granted; through this process, he hopes we can learn something about the way we live with them.
The day we talked, Fischer was in the later stages of a new project. He was testing screen components that would overlay artificial shrouds of Jesus’s face and struggling to obtain a fractal pattern that appealed to him. We sat before the shrouds with our mead and discussed the role of the human hand in art production, and the changing perspective within the vision machine.
Will Fenstermaker: You started out as a photographer doing advertising work for some major companies.
Ernst Fischer: Yeah, advertising photography, pretty high end, but all along I had this secret practice that kept me alive mentally. I also did my fair share of “good, liberal” work, sort of for spiritual carbon credits. I’d go to places like Ukraine and Uzbekistan, partly because I thought I wanted to document things, but partly just because I was drawn to being on the edge of the planet.
The most intense project I had on the side, up until 2010, was this village on the Romanian border in Ukraine that’s basically collapsing into holes. It’s an old salt mining village; they’ve been mining salt for 150 years underneath the village. These huge salt domes are dug out of the street and it’s all getting washed out, so they just have these little holes opening in the streets and within a couple of weeks of forming, they grow into giant funnels and houses start falling into them, sliding down into these abysses. It was a really great place to do the opposite of advertising.
WF: And then you went to Columbia for an MFA in visual arts.
EF: While I was there I was just kind of undermining myself. I was there to try and get out of production mode, you know, really dig underneath it. Make it all collapse into a hole.
I guess what happened was that I came back to art school asking, “What can I do with this quasi-straight photography that I know, and what can I do with the typologies I hate?” I hated the way photography is always pushed towards seriality, and is seemingly always a matter of entertaining a relation between individual images.
Outside Deutsche became part of my thesis project. I went down to Wall Street for I don’t know how many mornings during rush hour and just hung out outside of Deutsche Bank. It looks like a perfect money fortress. Crystallized in granite, with faceted columns and a deep, dark forecourt where you get these deep shadows and the sun cutting in at 6 or 7 in the morning—that perfect canyon light in New York. I was there just shooting fish in a barrel—on a standard lens, shooting white collar workers as they psyched themselves up going into work.
WF: Did you know you’d be combining them later to make these composite portraits, with the figures laid over each other?
EF: I had hundreds of shots, and my whole premeditated, aggressive stance toward the white collar robot got subverted by the humanity of these portraits. Because I was frustrated by both typologies and humanism, this just wouldn’t do. They needed something else, so I ran them through an algorithm that basically makes frames between frames.
WF: This algorithm is meant for cinema, right? It just extrapolates the data in-between two frames?
EF: It’s an edge recognition and blob extraction algorithm that can read shapes and make a third, synthetic frame between two frames.
WF: What’s the typical use-case for that? To slow things down?
EF: If you want to slow footage down, yeah. Rather than just fading from one frame to the next you make a frame in between.
WF: So you can turn 24 frames per second into 60 or something?
EF: Something. I can stretch it as much as I want. If the compositional difference between the frames isn’t great, then the algorithm is pretty good at mimicking difference. There are a lot of parameters I can tweak. There are local and global tracking parameters, where I can tell it to look at bigger or smaller details, what sort of contrast ranges to interpret as detail and what to interpret as tonally flat. I need to tell it where to grip and how to morph one thing into another.
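The contrast Fischer draws can be sketched in miniature. His tool does edge recognition and blob extraction to synthesize genuinely new in-between frames; the version below is only the naive baseline he explicitly rejects—“just fading from one frame to the next”—with NumPy arrays standing in for frames. A toy sketch, not his actual software:

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_between):
    """Naive frame interpolation: cross-fade between two frames.

    This is the trivial blend Fischer contrasts his algorithm with;
    it invents no new detail, it only weight-averages pixel values.
    """
    frames = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)  # blend weight between 0 and 1, exclusive
        blended = (1 - t) * frame_a + t * frame_b
        frames.append(blended.astype(frame_a.dtype))
    return frames

# Stretching 24 fps toward 60 fps means synthesizing extra frames
# between each pair of captured ones.
a = np.zeros((4, 4), dtype=np.uint8)      # dark frame
b = np.full((4, 4), 200, dtype=np.uint8)  # bright frame
mid = interpolate_frames(a, b, 1)[0]
print(mid[0, 0])  # 100, halfway between 0 and 200
```

A motion-compensated interpolator like the one Fischer describes would instead track shapes between the two frames and warp them into the synthetic frame, which is why it can be fed “wrong” parameters and produce grotesques rather than a smooth fade.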
So, I made these as a reaction against this seriality expectation, where you’re always pushed to put photographs in relation with each other.
WF: In order to create some sort of narrative arc.
EF: Right, and then the meaning lives between the pictures. So I just ask the computer to extract that meaning and then see what happens.
WF: It’s interesting that you used an algorithm designed for cinema, for creating that moment that isn’t captured on camera, that exists in the interval between frames or where the sprocket catches the film as it runs through the projector. But then, of course, the result is a single photo.
EF: Yeah, that’s how I got my grotesques. Have you ever seen They Live by John Carpenter? Some Kurt Russell lookalike lives in a slum that gets raided, and he goes into a church where he finds a pair of sunglasses. When he puts them on he can see that half the people on the planet are robots, and all the advertising posters say “Obey,” “Believe,” or “Consume.” But it wasn’t just robots; what emerged were these Otto Dix, Neue Sachlichkeit-type grotesques.
I thought somehow that by harvesting these results, the work could express something about what it means to be involved with machinery.
WF: It becomes clear that the camera records information rather than matter. There’s just raw data that you’re feeding back into itself.
EF: And I’m working with the same algorithm on these shrouds of Jesus that I’m currently doing. These are paintings of Jesus fed through the algorithm and then multi-layered so that there are usually about four Jesus-es in one, but still line-recognized by this algorithm, by a machine that’s been taught to look and gets it wrong.
WF: Did you make the paintings too?
EF: No, I just went to the Met and photographed them.
WF: The Mandylion, a venerated icon of Jesus’s face, was sealed behind a wall to protect it from iconoclasts. When they later took the wall down, the shroud was said to have made an image of itself on the stone. Sounds like photography.
EF: There’s something very originally photographic about the Shroud of Turin and the Veil of Veronica, and the idea of an image that’s holy because it was made without human hands.
WF: Acheiropoieta. It’s a Byzantine Greek word for “made without hands,” and it refers to images, usually of Jesus, that appear miraculously.
EF: Yes. So whether it’s mechanical reproduction or the Holy Ghost that reveals the image is secondary to the result. The less we’re involved as humans, the better. Plus, the Shroud of Turin only really became famous when someone took a negative of it. The face is much more apparent in the negative.
WF: Each time you run the image through the algorithm, do you keep the parameters the same?
EF: No, I kind of mess with it. It’s not a scientific process, it’s just a rules-based process. Of course, clearly, my hand’s involved, I’m just pretending it’s not. It’s all a ruse, right? A ruse of disengagement.
WF: This work makes me think of Vilém Flusser’s conception of the camera as a tool used by an autonomous image-making system, rather than by people.
EF: I think about that a lot, and the idea of being a functionary of the apparatus as a photographer. Flusser loses me when he talks about how a real photographer is going to be able to expand the rules of the game. In The Utopia of Rules, David Graeber, channeling Caillois, makes a distinction between play and games. A game is set. In a game, you’re working within whatever box you’ve made for yourself. You throw the dice, and only one of the six numbers will come up. There’s a pleasure in playing a game that’s very separate from the pleasure of play, which is open-ended and has an oceanic aspect that is absent in a game. A game is pleasurable and comforting because it works with probabilities rather than with potentialities. There are no unexpected outcomes, just more or less likely ones. I think of photography as a game in many ways.
WF: Because the outcomes are entirely predetermined by the tool you’re using, its possibilities are finite?
EF: There’s a set of possible outcomes because you are working with a very specific apparatus. And you’re serving it, you’re just feeding it and harvesting from it. It’s interesting to see what it does. The microphotography genre I got into is very much like that. I gave the machine what it seemed to want, and it seemed to want to photograph certain things at that scale.
WF: For your series 18% you photographed minerals, bugs, and light.
EF: Microphotography is an amateur genre, something 65-year-old men do in their sheds. I spent a year and a half on online forums with these guys building these rigs. They all have their own rigs and it’s all experimental, so nobody else has my combination of optics—none of the men I’ve virtually hung out with have detached the chip from the optics the way I have. I have the rig with the smallest focal increments that I’ve seen out there. It was a full-on arms race that I engaged in.
If you Google “microphotography,” bugs and minerals are what you get. Ninety-five percent of the results are going to be fly eyes and mosquitos and minerals, because these micro-structures appeal to the machine somehow. That’s the genre, and I might as well work within it because it seems to be what the machine wants.
But then I fuck with the machine. I light it in ways that make the algorithm unable to interpret the data correctly. I give it dramatic lighting, which creates these refractions and reflections, artifacts that the program interprets as surfaces, as topographical. But then the machine misrecognizes them, and the results are images of phantoms.
WF: You’re exposing the workings of a black box.
EF: I’m pushing the mechanics of the machine and the lens’s capabilities of magnification to a point where they start losing it, and then I’m also pushing the algorithm itself by feeding it a bunch of funky spectral highlights, optically refracted through my combination of microscope lenses. I give it all sorts of funky lighting, when it expects to just have a smoothly lit fly eye there. If it’s done seamlessly and transparently—as in Baudrillardian transparency, unnoticeably—it’s just a perfect hyper-focused shot of a bug or mineral. Instead, I’m pushing every aspect of the machine to the limit where it’s still doing what it’s meant to do, but it begins to crack. It oscillates at the boundary where it won’t work at all; it teeters on that edge and you can see what it’s trying to do. Once it starts going wrong, you can see something about the abstraction.
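Fischer never names his software, but “hyper-focused” microphotography shots of this kind are conventionally made by focus stacking: shooting many slices at tiny focal increments and keeping, for each pixel, the slice in which it is sharpest. The spectral highlights he describes fool exactly that sharpness measure. A toy sketch under those assumptions (plain NumPy, a crude Laplacian for sharpness, wrap-around edge effects ignored):

```python
import numpy as np

def local_sharpness(img):
    """Crude per-pixel sharpness: absolute response of a 4-neighbor Laplacian."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return np.abs(lap)

def focus_stack(slices):
    """For each pixel, keep the value from the slice where it is sharpest.

    A refraction or reflection reads as high-contrast "detail" here,
    so funky lighting gets stacked in as if it were a real surface.
    """
    stack = np.stack(slices)                        # shape (n, h, w)
    sharp = np.stack([local_sharpness(s) for s in slices])
    best = np.argmax(sharp, axis=0)                 # index of sharpest slice
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Demo: two slices, each sharp (checkered) in one half, defocused (flat) elsewhere.
yy, xx = np.indices((8, 8))
checker = ((yy + xx) % 2).astype(float)
left_sharp = np.where(xx < 4, checker, 0.5)
right_sharp = np.where(xx >= 4, checker, 0.5)
merged = focus_stack([left_sharp, right_sharp])     # detail kept on both sides
```

The failure mode Fischer exploits follows directly from the design: the stacker has no notion of what a surface is, only of local contrast, so anything that glints is treated as topography.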
WF: I think that’s important—to be able to see the camera’s abstract processes.
EF: Abstraction is always a tough term to throw around, but you might call my stuff abstract photography.
WF: I wouldn’t. I guess I think of the camera itself as such an abstract tool that “abstract photography” becomes a redundant phrase.
EF: It does. My work is only abstract inasmuch as it makes visible what’s abstract about photography in the first place. That’s what makes “abstract photography” abstract photography—you can see the machine function, and you don’t have a perfect emulation of human vision that most photography is calibrated towards. That really is much more abstract than what I’m doing.
WF: Again, I’m thinking of Flusser. He carves out this space—you said this is where he loses you—where we can bring the camera back under the service of humanity, rather than vice versa.
EF: Is that what he thinks? Maybe I should re-read him. So, in the end he seems to say that what he calls a “photographer,” rather than a “functionary,” could somehow use the apparatus to take control of its possibilities again?
EF: Did you read that book, The Blank Swan by Elie Ayache? It’s a sort of Deleuzian finance book. It’s all about being fully in the moment. Standing in the market, he calls it. I think that’s how I got into checking in with these finance people online. It’s a weird paranoid space that rules the world. I think about money as if—and I don’t know if this is just polemic—there’s no going back. It becomes a bit like photography: once you start photographing the world, the world becomes the photographable world.
It’s a very powerful invention; it’s like mis-folded proteins. You know what mis-folded proteins are?
So, BSE, mad cow disease, is not a viral or bacterial disease, it’s just a mis-folded protein. It’s a protein that spirals the wrong way, but it propagates. Once it comes in proximity to other proteins, they start folding too, and it kills.
WF: I don’t see how money or photography are like mis-folded proteins.
EF: Why did proteins spiral for millions of years one way and then when they meet the first mis-folded protein they just fold the other way and everything’s different? There’s no going back. It’s inorganic. Today, we have all of these models of economies of equivalence, but like-for-like trade—a goat being worth two bags of rice—that’s a total fiction that was created after money. It’s a retrofitted precursor to monetized economies, because we can’t imagine another way besides equating things. Everything looks like a photograph now, and everything has its price.
WF: I see now. How long have you been working on these shrouds?
EF: Two months. Recently, I’ve just been testing fabrics. It’s just a little thing, I don’t know where it’s going. Covering the shrouds are screen components from flat screen TVs—I disassemble lots of electronics with my kid. It’s an acrylic screen with a diffusion pattern that evens out the LED light.
I don’t like this big shroud…
WF: Why not?
EF: It looks too much like something you’d wear. Maybe I’ll put a screw in the middle to increase the pressure and give it this fractal folding pattern. They’re very weird, these creases that appear. In the end, it’s visual art, so I make stuff because I can’t talk about it. Art problems are the best problems to have. You can make them as complex and interesting as you want. And if you’re going to make art, you want it to be working on whoever’s seeing it.
WF: I really like this conception of art that does something, opposed to just sitting on a wall. But the idea of then having it monumentalized in a museum can become troubling if we don’t consider what it’s doing there. Who is it really inspiring to act? How political of an act is stroking your chin in contemplation?
EF: The other side of that is what led me to make my video of white supremacist aesthetics using that same line-recognition algorithm. If you get that stuff into a museum it’s like a little Trojan Horse, or Trojan code. Your little soldiers come out at night and destroy the museum. But it takes a weird double-cynicism to do that, to insist on the Trojan function of your work.
I’m showing a new video. It’s just called Lobby Screensaver. I think of it as a little game with the technological sublime. I grew up looking at Gustave Doré and Rodolphe Bresdin, John Martin and his schlocky Romanticism, like The Great Day of His Wrath. That’s my art training in a way, and I don’t want to fight that.
WF: The painting that’s morphing in the video, is that due to the same algorithm?
EF: I really only use two algorithms for my computer-vision works.
WF: Are we seeing different parts of the same painting, or are these different images?
EF: The paintings in the video are different. They just stand for history to some degree. But there’s something here about this kind of “new nature,” this notion of a digital wilderness that replaces what we’re living in. We have online lives, and the principal function of our virtual social being, it seems to me, is to help us get over the fact that we’re wrecking the planet. We’re fucking killing the place and we know it, which is a weird position to be in as a generation.
WF: You’re suggesting that we’re retreating into the cyber world.
EF: The work that I’m doing with algorithms, where I’m letting them do what’s in their nature so to speak, and the way we rely on things like Facebook to tell us what our natural social lives should be—something about all of that means that we have a yearning for a replacement. I was being facetious when I wrote somewhere that my utopian thinking is, “if only machines were happy, they’d leave us alone.”
WF: Which leads into artificial intelligence. We see all of this fear surrounding artificial intelligence as if suddenly, when it achieves singularity, it will see us as a threat. But to have that attitude while we construct these sentient beings is to begin already with something of an antagonistic relationship toward them.
For example, so much imagery today is generated from surveillance. We see such a fraction of it, and yet every frame is fed through some sort of surveillance system. Think of the number of frames taken by every security camera. But that doesn’t mean the machine is necessarily the one that can’t be trusted. What I’m skeptical of is who’s on the receiving end of that information.
EF: It’s insane.
WF: I mean, my example is extreme, because you could say, “Oh, it’s surveillance footage, it’s redundant imagery, it’s just however many frames of an empty convenience store.” But in a more tangible sense you have Bill Gates literally burying photographs underground.
EF: He’s the reason why there’s only that one Hindenburg photo anymore. He won’t sell the others, because he knows how valuable it is.
WF: But they’re here, literally buried underground in Pennsylvania.
EF: A friend of mine at the British Journal of Photography analyzed this on a business level, and found that companies like Corbis have these iconic sets—like the Hindenburg disaster, case in point—which they know they’ll sell more of if they only sell the one image. They decide which is the iconic image—which usually is a given already—and they just put a lid on everything else.
WF: And that image becomes our perception of the event.
EF: That’s it, the event is one image. All people really want is the one icon for each event.
WF: Which might connect in some way back to your frustration with the meaning of pictures living between the images. Where can we find the meaning if we only have access to a single photo?
At its core, it seems to me that what you’re doing is trying to look at how we might be looked at, how we might understand ourselves from the technological vantage point. I think it’s important that your influences are Romantic, that they come from these Turner-esque visions of the natural world from a single vantage point. A reality of technological surveillance is that it’s from all angles. So when the vantage point multiplies, what changes?
EF: What do you mean?
WF: It’s a way of looking at what we look at, a way of looking at what we look like with the changing technological perception. I’m not necessarily positing that you’re taking on the gaze of the machine and saying, “This is what it thinks of us,” or “This is how it sees us,” as much as looking again at our natural world, how we interact with it, what its value is to us when so much of what we are now is what you call a cybernetic surveillance machine.
EF: It’s true. The next step might be to ask, “What does that then tell us about how we see ourselves?” Designing machines that see us, or that try to emulate the way we see, is a way of trying to code the way we think we see ourselves. A sort of reflexivity.
The reason the screen world is so captivating to us is because it’s a feedback mechanism for ourselves, and it’s that instant feedback that’s so satisfying. Imagine if an intelligence that has achieved singularity were to just look at us. Imagine if after we’d done our best to teach it everything we know about ourselves, it just stopped at that, and kept looking at us. And if we could then adopt its gaze, might that help us grasp how little we know about ourselves?
Singularity is a hypothetical event in which an artificial intelligence becomes capable of autonomously and continuously improving its own function. A popular concern in machine ethics and science fiction is that, after such an event, an artificial intelligence would improve itself exponentially, far beyond human comprehension, and possibly begin to see humans as obsolete. “A singularity” is sometimes used metonymically to refer to an AI program that has achieved singularity.