
The essay below appears in the catalog of Digital Capture, an exhibition I co-curated exploring the Southern California origins of digital imaging in the 1960s and the proliferation of that technology across science, media, and the arts in the subsequent decades.

Our curatorial team has quipped since the very outset of this project (in early 2020) that Digital Capture: Southern California and the Pixel-Based Image World will be out of date the moment it opens. That has held true, perhaps to an extent none of us imagined. What has been reaffirmed over the past four years, however, is how deeply interwoven California and digital imaging remain.

During the COVID-19 pandemic, hardware and software conceived, designed, and/or developed in California became far more heavily relied upon as tools for communication, entertainment, and political engagement. Lockdowns and quarantines pushed about half of the global population to rely on digital solutions like telemedicine, virtual learning, and remote work. A Pew Research Center study found that 90 percent of Americans considered the internet essential during this time.[1] But a digital divide grew as well, as the pandemic highlighted and intensified existing inequalities in digital access, particularly affecting students from disadvantaged backgrounds[2] and amplifying challenges for older populations in accessing health care and staying connected in general. The compulsory digitalization that occurred during the pandemic also escalated surveillance and data collection, making opting out of the technology-laden world more difficult than ever before.[3]

Each stage of a technology’s integration into a population, workforce, et cetera marks not just a step forward, but also an occasion to consider the variety of pathways technology can take. The trajectories of a technology such as digital imaging can develop in ways that are as unpredictable as they are transformative. This invites us to take a closer look not only at the technology and its users, but also at the socio-cultural-political milieu that surrounds it. Wolfgang Ernst helps us articulate a line of inquiry known as media archaeology, which he describes as “an epistemologically alternative approach to the supremacy of media-historical narratives. . . . This means that when media archaeology deals with prehistories of mass media, this ‘pre-’ is less about temporal antecedence than about the techno-epistemological configurations underlying the discursive surfaces (literally, the monitors and interfaces) of mass media.”[4] Media archaeology is not only the exploration of the material and historical conditions of the creation of software and hardware, but also, as Jussi Parikka states, “a conceptual and practical exercise in carving out the aesthetic, cultural, and political singularities of media. And it’s much more than paying theoretical attention to the intensive relations between new and old media mediated through concrete and conceptual archives.”[5] Media archaeology explores the ideas and imaginaries behind the development of new media and technologies while also noting the various false starts of technological progress.

Following media archaeology more broadly, Digital Capture resists reducing the seemingly inevitable emergence of the “pixel-based image world” to a technologically deterministic narrative. Jussi Parikka and Erkki Huhtamo also remind us that “dead ends, losers, and inventions that never made it into a material product have important stories to tell.”[6] And Noah Wardrip-Fruin adds that “the media archaeology approach has often unearthed forgotten moments from predigital media and, bringing them into the present media context, has both seen them anew and used them to illuminate the media culture of today.”[7] The investigative approach and framework of media archaeology have guided Digital Capture in its efforts to elucidate manifold digitally induced seismic disruptions in the personal, political, and ecological spheres.

My intent in this essay is to expand on a few of the themes, individuals, and technologies that served to ground and develop Digital Capture. The following pages also highlight one particular trajectory of digital images over the past few decades, beyond their ubiquity: they have become a substrate for new technological developments. Digital imaging untethered animation, graphic design, photography, the cinematic arts, and essentially every other visual form from their physical constraints, ultimately remaking them as virtual data points as opposed to fixed photochemical artifacts. The end point of this transmutation was the flattening of visual material into “content”—the reframing of media into components to be distributed, reinterpreted, and reimagined for a multitude of aims. One particular aim has been the training and development of artificial intelligence (AI) systems that have themselves begun creating images. As Joanna Zylinska poignantly writes, “The distinction between image capture and image creation is now increasingly blurred.”[8] Likewise the distinctions between devices that create, capture, or manipulate images: phones double as personal computers, automobiles accept incoming calls and texts, and everything seems to be laden with cameras and other image-capturing devices. The pixel, as the single unit of visual digital data, becomes the intermediary between all of these, bridging disparate devices and applications in a seamless continuum of digital interaction.

Untethering

Curator and writer Sandra S. Phillips reminds us: “Photography was invented during a period of enormous technological advancement that marked the real beginning of the modern world. The Industrial Revolution brought about fundamental and radical changes in society; suddenly machines were able to do things that human beings could not, or machines could do the work of many more quickly.”[9] Similarly, the geopolitical ruptures of the late twentieth century brought about fundamental changes to the sociopolitical fabric of our world. The end of the Cold War saw neoliberalism seemingly prevail as the global political ethos. As Francis Fukuyama (in)famously wrote in his article “The End of History?” (1989), “The triumph of the West, of the Western idea, is evident first of all in the total exhaustion of viable systemic alternatives to Western liberalism.”[10] The redistribution of geopolitical hegemony in the late 1980s and early 1990s also saw a concurrent ascendancy of digital technologies over their analog antecedents. Bookending the conceptualization of digital imaging technology in the early 1960s, researchers in Southern and Northern California connected through a proto-internet framework in 1969, and the foundation of our contemporary digital world was set.[11] The 1990s and early 2000s saw the convergence of digital images and the internet, as graphical user interfaces, broadband connections, and ever more affordable personal computers flooded the consumer sphere.

Digital technology was, on the surface, perfectly suited to accentuate the value systems, or at least the superficial signifiers (for instance “fast,” “new”), of Western life. A rapid series of firsts emerged during this period. One of the first exhibitions on digital photography, Digital Photography: Captured Images, Volatile Memory, New Montage, opened at SF Camerawork in San Francisco in 1988.[12] Photoshop was released for the Apple Macintosh in 1990 and for the Windows PC in 1993. The first lay user of a commercially made digital camera is reputed to have been Lucien Samaha, who in the late 1980s and early 1990s was a student at Rochester Institute of Technology’s School of Photographic Arts and Sciences. He won a scholarship that included a position in Eastman Kodak Company’s Professional Photography Division. In 1990, by his own account, he was one of the first “to use Kodak’s DCS 100 Professional Digital Camera System outside the factory floor.”[13] He recalls, “The camera, a Nikon F3 with a Kodak digital back, was tethered to a DSU (Digital Storage Unit) with (by 1990 standards), an astounding 200 Mb Winchester drive, a keyboard and a small monochrome monitor.”[14] Professional- and consumer-grade digital cameras continued to be released throughout the 1990s, and the first camera-equipped phones became available toward the end of the decade.[15]

The mid- to late 2000s untethered access to the internet from domestic and commercial/institutional spaces (internet cafés, libraries, universities) through the widespread adoption of modern smartphones. Coupled with advances in cellular telecommunication (through 3G, 4G, and now 5G networks), users of mobile devices became able to transmit content at ever-increasing speeds and feed an ever-greater amount of data back into the cybersphere. As the New York Times reported in 2015, the number of photos uploaded to the internet “has nearly tripled since 2010 and is projected to grow to 1.3 trillion by 2017. The rapid proliferation of smart phones is mostly to blame. Seventy-five percent of all photos are now taken with some kind of phone, up from 40 percent in 2010.”[16] Those images quickly became incorporated into large-scale image databases that would become training fodder for machine learning and artificial intelligence research.[17] Scholar Ruha Benjamin writes, “Photography was developed as a tool to capture visually and classify human difference; it helped to construct and solidify existing technologies, namely the ideas of race and assertions of empire, which required visual evidence of stratified difference.”[18] What then happens when those technologies are enhanced through classification algorithms and deployed as autonomous systems?[19]

The sophistication of modern machine learning and AI algorithms allows for the reconfiguration of imagery and its associated metadata not only into aesthetic, marketing, or consumer products, but also into tools of the military-industrial complex.[20] In addition, these systems serve to reinforce or perpetuate the biases of their creators. Safiya Noble, writing in Algorithms of Oppression: How Search Engines Reinforce Racism (2018), expands: “Part of the challenge of understanding algorithmic oppression is to understand that mathematical formulations to drive automated decisions are made by human beings. While we often think of terms such as ‘big data’ and ‘algorithms’ as being benign, neutral, or objective, they are anything but. The people who make these decisions hold all types of values, many of which openly promote racism, sexism, and false notions of meritocracy, which is well documented in studies of Silicon Valley and other tech corridors.”[21] Satellites are now able to capture and deploy location data for purposes ranging from spycraft to turn-by-turn navigation. Facial recognition software, whose algorithms depend heavily on multitudes and magnitudes of images, is deployed by security agencies around the world but also commodified into face filters on social media. The personalization of media content is ultimately driven by the variance and quantity of data points users provide either actively (through intentional connections to websites, apps, and servers) or passively (through cookies, malware, open networks, and so on). This digital oversharing allows for easily curated retail or media experiences, but also for the hyper-targeting of individuals (notably swing voters) by political campaigns.

AI-generated images are also introducing new challenges in the realms of content moderation and image verification. As these images blur the lines between the real and the artificial, determining their authenticity becomes increasingly complex. Previously, content moderation was a matter of filtering content to determine its appropriateness or source. Now, with the rise of advanced AI imagery, moderators face the added task of distinguishing between human-created and machine-generated content. Image verification, which already faced challenges due to tools like Photoshop and the emergence of deepfakes,[22] is further complicated by AI-generated images. These images can replicate reality with high precision yet lack a clear origin, making the process of verification even more daunting. The realms of copyright and plagiarism have entered uncharted territory as well. Images produced by AI models, driven by vast training datasets, raise questions about originality and ownership. If an AI uses copyrighted images in its training set and then generates a “new” image, who holds the rights? Moreover, if these datasets contain uncredited works, the risk of institutionalizing plagiarism becomes real.

Digital technology makes its own internal labor largely invisible and inaudible. There is no whirring of tapes, nor (more recently) spinning of data drives. Conversely, the hyper-visibility of the conditions of labor necessary to manufacture electronic devices (the mining of raw materials, the supply chains and requisite labor of factories, retail outlets, and delivery services) has ironically been made possible by the instantaneous nature of digital communication. Beyond these complications of the sociopolitical sphere, increasingly power-hungry technological systems demand ever more raw materials and energy. Jussi Parikka writes: “The iDevice is enabled by dubious labor practices, including child labor in the mines of Congo; the appalling working conditions, which lead to a number of suicides, in the Foxconn factories in China; and the planned obsolescence designed into the product, which also contributes to its weighty share of electronic waste problems.”[23] In navigating the interplay of labor and resources within the digital realm, it becomes clear that beneath the exteriors of our devices lies a tangled web of human and environmental consequences.

Artists and activists have been deploying strategies to subvert and repurpose mass communications (and their tools of creation) since the beginnings of those practices and mediums. Culture jamming—the intrusion of pirate signals into mass-media broadcasts—occurred in a few notable cases in the 1980s and 1990s (as when a faux Max Headroom hijacked two Chicago television stations for a brief moment in 1987)[24] and has found a new dimension in the digital age. So-called subvertising,[25] the disruption of marketing and advertising materials for the aims of subverting their consumeristic and/or political impulses, has thrived in the digital realm as well, expressing itself through meme hacks and other forms. In recent decades, performance and social-practice artists have updated the analog methods of the likes of the Merry Pranksters and Guerrilla Girls, playing with various notions of the viral moment, the meme, and other internet semiotic expressions to articulate their projects.

The precarious nature of instant connection and mediation (and its subsequent remediation, reinterpretation, and remixing) has played out to various extents over the last few decades. The Gulf War of the early 1990s was the first war broadcast in real time over television, and subsequent conflicts have been transmitted over the medium to ever greater degrees. The entry of the internet into the media landscape during the 1990s, and its subsequent suffocation of traditional news and media outlets, has lent new urgency to discussions of truth, fact, veracity, and access. Intrusions into the military-industrial-media triangulation by hacktivists and groups such as WikiLeaks have challenged conventions of jurisprudence and journalistic intent. The presidential campaigns of the past two decades were all reliant on internet-based outreach and organization, as well as on electioneering opportunities afforded by social media. Additional recent reference points that illustrate the potency of digital (imaging) technology in moments of protest and political upheaval include the role of social media and file-sharing software in Occupy Wall Street, the Arab Spring, and Black Lives Matter.

As with digital technology, AI systems and the infrastructure/labor networks that support them also take a toll on ecologies and economies. Kate Crawford writes: “The lifecycle of an AI system from birth to death has many fractal supply chains: forms of exploitation of human labor and natural resources and massive concentrations of corporate and geopolitical power. And all along the chain, a continual, large-scale consumption of energy keeps the cycle going.”[26] Artists have begun, with ever more intensity, exploring the complications and possibilities of AI and digital images. Crawford and Trevor Paglen’s ImageNet Roulette (2019) made the ImageNet[27] database the subject of an installation, highlighting the problematic and stereotypical classifications imposed by AI classifiers. Artists such as Hito Steyerl and Zach Blas have also delved into these themes, using digital media to critique surveillance, data privacy, and the sociopolitical implications of AI. Steyerl’s work often reflects on the role of images in the age of digital reproduction and artificial intelligence, while Blas confronts issues of biometrics and identity in the digital realm.[28]

Afterimage

The past few years have seen a proliferation of AI imaging tools, with AI features infusing seemingly every aspect of image production. One of particular note, and perhaps as emblematic of the widespread adoption of AI image systems as anything else, was the introduction of the “generative fill” tool into beta versions of Photoshop in early 2023. Generative fill allows users to add, extend, or replace elements in an existing image by simply entering text into a prompt window.[29] The tool is powered by Firefly, a generative AI system proprietary to Adobe that also extends to other applications in Adobe’s Creative Cloud suite, used for graphic design, video editing, and more. The integration of generative fill into Photoshop extends and complicates the possibilities of the “digital darkroom” while also potentially undercutting the business models of similar applications such as Midjourney. With Photoshop a staple of image-editing workflows, and Photoshop (as a verb) a synonym for image editing, generative fill could bring generative AI into the mainstream of image editing—signifying not necessarily the end of photo editing, but rather a paradigm shift within the digital darkroom. Photo-editing tools are no longer passive instruments; they now have the capability to suggest and create.

Refik Anadol, an artist working at the forefront of this intersection, embodies something essential about the modern photographic and image-making landscape. Joanna Zylinska comments, “Anadol’s work . . . [foregrounds] the impossibility of the human seeing it all[;] it points to the fact that images now come to us principally in flows to be experienced, rather than as single-frame pictures to be decoded.”[30] These flows of images not only come at us in algorithmically curated feeds on social media, but also serve as training corpuses for yet more new images. Anadol’s work in Digital Capture utilizes artificial intelligence to transmute the Keystone-Mast Collection from the California Museum of Photography.[32] This collection, consisting of approximately 250,000 stereoscopic glass-plate negatives and 100,000 prints, is a vast archive of global history from the late nineteenth to the mid-twentieth century. Anadol and his team used proprietary AI algorithms to transform these historic images into a digital projection flow of new images, some abstract and some representational. Figures, forms, and landscapes emerge, then are once again subsumed. This flow brackets the history of photo technology, from the earliest glass plates to images generated by artificial intelligence and machine learning. It is also a Rorschach test for the promises, pitfalls, and perils of AI and image making. Artworks utilizing AI (and the related discipline of machine learning) are arguably permeated by the broad constellation of military-industrial development, much of which has deep ties to California.[33]

Digital Capture walks up to the doorstep of AI, but does not fully cross that threshold. That subject matter deserves another project entirely.[34] Much in the way that Digital Capture serves as an exploration of the pixel-based image world that emerged out of California’s Cold War–era space-race labs, it also serves as a prologue for what is to come. It’s easy, at least for this author, to imagine that we are at an inflection point in the history of technology at least as significant as the one we encountered at the dawn of the digital age and the internet. Critical inquiry, artistic intervention, and social activism will play vital roles in shaping and negotiating how emergent AI systems are deployed in daily life. They will be necessary to counteract impulses that would otherwise make market forces the sole prime movers of the technology’s future development and integration. There is no doubt that the future will be here before we realize it. What remains to be seen is how much of this future we will ourselves create, as opposed to being swept along into it.

Notes

[1] Pew Research Center, “The Internet and the Pandemic,” September 1, 2021, https://www.pewresearch.org/internet/2021/09/01/the-internet-and-the-pandemic/.

[2] Netta Iivari, Sumita Sharma, and Leena Ventä-Olkkonen, “Digital Transformation of Everyday Life: How COVID-19 Pandemic Transformed the Basic Education of the Young Generation and Why Information Management Research Should Care,” International Journal of Information Management 55 (December 2020): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7320701/.

[3] The title of this essay nods to and inverts artist and author Ilya Kabakov’s essay “Not Everyone Will Be Taken Into the Future” (1983), which likens painter Kazimir Malevich to a headmaster choosing students for summer camp in order to illustrate how selectively artists’ works are carried into the future. For more, see Margarita Tupitsyn, Malevich and Film (New Haven, CT: Yale University Press, 2002), 3–7.

[4] Wolfgang Ernst, “Method and Machine versus History and Narrative of Media,” in Media Archaeology: Approaches, Applications, and Implications, ed. Erkki Huhtamo and Jussi Parikka (Berkeley: University of California Press, 2011), 239.

[5] Jussi Parikka and Garnet Hertz, “Archaeologies of Media Art,” CTheory, ed. Arthur and Marilouise Kroker, April 1, 2010, www.ctheory.net/articles.aspx?id=631.

[6] Jussi Parikka and Erkki Huhtamo, “An Archeology of Media Archeology,” in Media Archaeology: Approaches, Applications, and Implications, ed. Erkki Huhtamo and Jussi Parikka (Berkeley: University of California Press, 2011), 3.

[7] Noah Wardrip-Fruin, “Digital Media Archeology: Interpreting Computational Processes,” in Media Archaeology, 302.

[8] Joanna Zylinska, The Perception Machine: Our Photographic Future between the Eye and AI (Cambridge, MA: MIT Press, 2023), 2.

[9] Sandra S. Phillips, “Exposing Ourselves: Photography and the Covert,” in Covert Operations: Investigating the Known Unknowns, ed. Claire C. Carter (Santa Fe, NM: Radius Books; Scottsdale, AZ: Scottsdale Museum of Contemporary Art, 2014), 27.

[10] Francis Fukuyama, “The End of History?,” The National Interest, no. 16 (Summer 1989): 3.

[11] The Advanced Research Projects Agency Network (ARPANET), established in 1969, laid the technical foundation for the modern internet. The Advanced Research Projects Agency later became the Defense Advanced Research Projects Agency (DARPA), the research and development wing of the US Department of Defense; its primary responsibility is the development of emerging technologies for the military.

[12] The accompanying catalogue was Marnie Gillett and Jim Pomeroy, Digital Photography: Captured Images, Volatile Memory, New Montage (San Francisco: SF Camerawork, 1988).

[13] See https://lucien-samaha.squarespace.com/projects-1.

[14] Lucien Samaha, “Kodak and the Birth of the Digital Camera,” The Heavy Collective, November 5, 2013, https://web.archive.org/web/20220814034237/http://theheavycollective.com/2013/11/05/lucien-samaha-kodak-the-birth-of-the-digital/.

[15] The first commercial camera phone, released in Japan in 1999, was the Kyocera VP-210. See https://collection.sciencemuseumgroup.org.uk/objects/co523555/kyocera-visualphone-vp210-mobile-video-phone-1999-mobile-telephone.

[16] Stephen Heyman, “Photos, Photos Everywhere,” New York Times, July 23, 2015, https://www.nytimes.com/2015/07/23/arts/international/photos-photos-everywhere.html.

[17] Such databases include ImageNet (2009), Microsoft’s COCO (Common Objects in Context) Dataset (2014), and Google’s Open Images Dataset (2016), among many others.

[18] Ruha Benjamin, Race after Technology: Abolitionist Tools for the New Jim Code (Cambridge, UK: Polity, 2019), 68.

[19] See Cathy O’Neil, Weapons of Math Destruction (New York, NY: Broadway Books, 2016), 15–31.

[20] Anduril Industries, based in Costa Mesa, California, is an example of a recent AI-forward defense contractor. Anduril was founded in 2017 by Palmer Luckey, perhaps best known as the creator of the Oculus Rift VR gaming headset.

[21] Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: New York University Press, 2018), 1–2.

[22] Deepfakes are photorealistic video manipulations of people made possible through machine learning and AI.

[23] Jussi Parikka, A Geology of Media (Minneapolis: University of Minnesota Press, 2015), 89.

[24] John Carpenter, “The Max Headroom Incident,” Chicago Tribune, November 23, 1987, https://www.chicagotribune.com/1987/11/23/the-max-headroom-incident/.

[25] See Naomi Klein, “Subvertising: Culture Jamming Reemerges on the Media Landscape,” Village Voice, May 8, 1997, 40.

[26] Kate Crawford, Atlas of AI (New Haven, CT: Yale University Press, 2021), 32.

[27] ImageNet, created by Dr. Fei-Fei Li in 2006 at Stanford University, is a database of more than fourteen million images (as of 2024) whose aim is to advance computer systems’ ability to recognize objects within images. The project has faced criticism for issues related to dataset bias and image sourcing.

[28] See for instance the 2023 exhibition Hito Steyerl: This Is the Future at the Portland Art Museum, https://portlandartmuseum.org/event/hito-steyerl-this-is-the-future/, and Blas’s Facial Weaponization Suite (2012–14), https://zachblas.info/works/facial-weaponization-suite/.

[29] For more see https://www.adobe.com/products/photoshop/generative-fill.html.

[30] Zylinska, The Perception Machine, 53.

[32] For a web-browsable version of the Keystone-Mast Collection, see https://calisphere.org/collections/11747/.

[33] Santa Clara, California–based chipmaker Nvidia is perhaps at the forefront of developing the cutting-edge hardware that powers much of AI development, and thus much of the art being made with AI. For more on its connections to government and military funding see https://www.nvidia.com/en-us/research/government/.

[34] Some threads of AI and photography were explored in the 2023 UCR ARTS exhibition Every Day We Have to Invent the Reality of This World: AI Post Photography.