“The Pythia, also known as the Oracle of Delphi, was a priestess of the Temple of Apollo at Delphi in ancient Greece. The temple was considered to be the most important oracle in the ancient world, and the Pythia was believed to be able to communicate with the god Apollo and deliver his prophecies to those who sought her advice. The Pythia would sit on a tripod over a chasm in the temple and inhale vapors that were believed to rise from the earth. The fumes were thought to give the Pythia the ability to enter a trance-like state and communicate with the god Apollo. The pronouncements that the Pythia made while in this state were considered to be the words of Apollo himself and were highly respected by the ancient Greeks. The oracle at Delphi was highly respected and consulted by many important figures in ancient Greece, including kings, generals, and even the philosopher Socrates. The oracle was also consulted on important matters such as the founding of colonies, declaring war, and making political decisions. It is important to note that there are many theories about the nature of the oracle and the role of the Pythia. Some historians and scholars have suggested that the oracle at Delphi was a religious institution that offered guidance and solace to the ancient Greeks, while others have suggested that it was a political institution that was used by powerful figures to justify their actions. The temple of Apollo was active from 8th century BC until the 4th century AD, and was eventually closed by the Roman emperor Theodosius I as part of his efforts to suppress paganism.”
Apollo killing Python. A 1581 engraving by Virgil Solis for Ovid's Metamorphoses, Book I (from Wikipedia)
The above text was spit out by ChatGPT (OpenAI's Generative Pre-trained Transformer, GPT-3). It's too tempting to make a comparison between these AI systems and the oracles of old, whether directly or indirectly, and I'm not the only one to do so.
This chatbot, built on deep learning, has been a revolution, a game-changer, an accelerant for the oncoming mechanization of white-collar industries; it has also raised a bunch of money, engendered a firestorm of similar technologies, and attracted the usual predictable cadre of detractors. Some thoughts, or scraps of thoughts, about all of this are below. Some of this text also appeared in the exhibition I co-curated called Every Day We Have to Invent the Reality of This World: AI Post Photography.
ChatGPT was released in beta/demo form in late 2022, following a slate of AI image generators such as DALL-E, Midjourney, and Stable Diffusion. There has also been much ado about the implications of these imaging systems for all imaginable aspects of art, law, culture, and so on. A number of artists have been incorporating AI into their art practices, some with astronomical success, and critical responses have run the gamut as well, with one notable critic likening Refik Anadol's MoMA exhibition to an "extremely intelligent lava lamp." Phrases used by Anadol, and perhaps to some extent popularized by him, are "hallucination" and "dreaming" when referring to the image output of these systems.
Generative AI's hallucinatory tendencies when transforming text into images or video sequences recall the disjointed and enigmatic terrains of our own dreams. Generative AI takes text prompts and maps them to visual outputs based on its own learned associations, but the connection is rarely a one-to-one match. AI draws from its vast reservoir of patterns to create something that can seem eerily accurate or wildly off the mark. AI, much like the human unconscious, doesn't produce replicas but rather makes interpretations. Our dreams, too, operate on a logic in which the connections between images, sensations, and feelings are not always immediately accessible to our conscious minds. They are informed by our memories, desires, fears, and the myriad bits of sensory input we receive daily. Both AI hallucinations and human dreams seem to be the products of systems trying to navigate vast terrains of information and produce meaning, however elusive that meaning might sometimes be.
Installation view of Every Day We Have to Invent the Reality of This World: AI Post Photography at UCR ARTS.
"So we beat on, boats against the current, borne back ceaselessly into the past." – from a famous book by a famous author (The Great Gatsby, F. Scott Fitzgerald)
The relentless recycling of trends, tropes, and images is evident in the workings of modern mass culture. The past can be a source of comfort and escape; it's a "known" place and space. Many movies, shows, and records are deeply ingrained in our personal memories and biographies, all the while serving as an overarching connection to culture at large. The past is extremely profitable, too. The success of past media blockbusters serves as a template for future development, spawning a cascade of sequels, reboots, and actors brought back to life through CGI.
Enter, then, AI image systems. Trained upon our vast, accumulated reservoirs of cultural imagery, these systems are machines put to the task of magnifying our recursive tendencies. We may find ourselves trapped in a cultural Möbius strip in which, rather than driving us forward, AI reinforces a perpetual feedback loop of the past, one that often declaws the radical, experimental, and avant-garde movements that challenge (or resist assimilation into) the dominant cultural hegemony of a given moment. The question we are confronted with, then, is whether we are consigning ourselves to a cultural future forever trapped in a skewed and simplified version of its past. The words of the late scholar, author, and critic Mark Fisher come to mind: "Those who can't remember the past are condemned to have it resold to them forever."
We're very much breathing in the fumes of generative AI but have yet to divine whatever ultimate prophetic wisdom we're meant to gain from it. Maybe getting intoxicated on the fumes is as far as we'll ever get.