
The Work of Art in the Age of Artificial Production

The rise of AI image makers like DALL-E threatens more than an endless stream of viral images. We have to start imagining how AI technologies can remain in service to culture workers, rather than vice versa.

A DALL-E image created with the prompt: 'Karl Marx taking a selfie in a night club which was formerly a linen warehouse before gentrification.' (Will Jennings)

If you’re on social media, you will likely have seen images flowing down your feed which seem realistic but unexpectedly bizarre: Kermit the Frog posing in Munch’s Scream, the Xenomorph from Alien in court, an old lady building a nest on a cliff face, or the disciples replaced by Minions in the Last Supper. Artificial Intelligence image makers such as DALL-E 2, Midjourney, and Stable Diffusion have been churning out an ever-growing flood of uncanny and previously unimaginable images onto social media. Certain characters recur—Walter White and Shrek are regular protagonists within unexpected scenes—while there’s a definite aesthetic turn towards fantasy, sci-fi, and comedic juxtaposition.

The generators work by a user inputting a text prompt: a descriptive sentence, perhaps including keywords that specify a style, format, or artist. The AI has been trained on datasets of billions of existing images mined from the internet, not always with permission from their original creators. These billions of images—whether photographs, reproductions of classic works of art, illustrations, or anything else—and their meta-information are interpreted by the algorithm in response to the words of the prompt to create an entirely new image. For example: ‘Karl Marx taking a selfie in a night club which was formerly a linen warehouse before gentrification.’
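To give a sense of how little now stands between a sentence and a finished picture, here is a minimal sketch of that prompt-to-image process, assuming the openly released Stable Diffusion weights accessed through the Hugging Face diffusers library; the model name, hardware, and output filename are illustrative rather than a recipe endorsed by any of the companies mentioned.

```python
# A minimal sketch, assuming the Hugging Face 'diffusers' library and the
# publicly released Stable Diffusion v1.5 weights; names are illustrative.
import torch
from diffusers import StableDiffusionPipeline

# Load a pipeline whose model was trained on billions of scraped image-text pairs.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = ("Karl Marx taking a selfie in a night club which was "
          "formerly a linen warehouse before gentrification")

# The prompt is encoded into text embeddings, and the model denoises random
# noise into a brand-new image conditioned on those embeddings.
image = pipe(prompt).images[0]
image.save("marx_selfie.png")
```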

They don’t have to be fantastical or absurd. Even the most mundane images, such as an average person in an average town, carry a curiosity. Others seek utopian solutions: @betterstreetsai on Twitter reimagines dull American roads by visually transforming them into tramlined, pedestrianised eco-corridors, while others, like the well-publicised ‘last selfies taken on Earth’, are of a more dystopian bent—I am unsure which camp ‘Jeremy Corbyn as Mad Max’ falls into.

Surprise appearances from Kermit, Shrek, or Marx are fun, but quickly become boring. The ease of their creation, and the fact they are churned out without purpose or idea, leaves them as gimmicks that soon lose their quirk. But the underlying technology is here to stay, and will ultimately have a powerful effect on the cultural sector. Tribune has already discussed how AI could impact the future of labour and presented ideas of how it can be considered in relation to the class struggle, but how should we think about the future role of AI within culture?

Anybody, even those with little ability in representational painting or drawing, can deploy AI generators to summon fleeting creative ideas, but any user should be aware that the billions of dataset images their pictures are formed from were all created by artists, illustrators, photographers, and other human makers. Is it plagiarism? To trace another’s work, or to mechanically or digitally reproduce it without licence, is evidently IP theft, but in an industry built upon iterations and evolutions of ideas, plagiarism is a hard thing to prove—even an artist like Richard Prince, who ‘appropriates’ others’ work, as with his rephotographed Marlboro cigarette adverts originally shot by Sam Abell, rarely loses in court.

Prompt: ‘A cover of Tribune Magazine featuring Prince Charles.’

Traditional plagiarism law and understanding falls flat with AI generators, which don’t so much montage existing images as learn from them to create an entirely bespoke piece. However, styles can be copied, and a visual language which might take a lifetime to hone can be conjured with just a few keystrokes of a prompt: ‘in the style of Van Gogh’, for instance. Images which appear online countless times—such as any Van Gogh painting—are liable to be reproduced fairly accurately, thanks to the depth of near-identical data for the AI to train on.

In his 1935 text The Work of Art in the Age of Mechanical Reproduction, Walter Benjamin writes that ‘technological reproduction is more independent of the original than is manual reproduction. For example, in photography it can bring out aspects of the original that are accessible only to the lens (which is adjustable and can easily change viewpoint) but not to the human eye; or it can use certain processes, such as enlargement or slow motion, to record images which escape natural optics altogether.’ Today, the overwhelming majority of images we encounter in our lives are reproductions, whether a poster, a TV advert, or the pictures in a book—but with AI we move one step beyond. With immediate and unlimited uniquely created images that go beyond what the eye can see, and into places we may not have imagined, we are developing works of art in the age of artificial production.

In his text, Benjamin goes on to root authenticity at the core of what a work of art is. This is, he says, ‘all that is transmissible in it from its origin on, ranging from its physical duration to the historical testimony relating to it,’ with the process of reproduction decaying that authenticity and the aura of the work, qualities connected to a work’s uniqueness. AI images are unique, though it is a uniqueness nullified by the ease of their creation and their ubiquity—what value can a unique image hold if anybody can use their phone to make a similar unique image in just a few seconds?

The US Copyright Office will not currently register copyright for AI-generated artworks, stating that ‘original works of authorship’ must be created ‘by a human being’. In the EU and UK things are less certain, and presently untested. An artist may use an AI generator to create a source image, develop it in Photoshop, then use it as part of a montage, or draw over it—so the question will be how much of the final artwork is AI-generated, and how much the AI image is just one of a number of ingredients in an artistic whole. Is it any different to Duchamp turning a simple urinal into Fountain by flipping it over and scribbling ‘R. MUTT 1917’ on it?

When Karen Cheng created an AI cover for Cosmopolitan magazine, she posted a short video of her process, clearly showing a human-led creative path towards the final image. She stated that ‘…the more I use #dalle2, the less I see this as a replacement for humans, and the more I see it as a tool for humans to use—an instrument to play.’

Many creatives, from illustrators to horror film prosthetic designers, enjoy that play, and welcome an AI interjecting into their creative path, throwing new ideas their way which may be developed in a new project. But not all are so welcoming. Stock photography suppliers seem anxious about what is around the technological corner, stating their defence of ‘real-life’ photographers. Getty Images, Shutterstock, and others have recently banned AI-generated images from their platforms, though it has not really been explained how, as these generators develop, they will even be able to tell what was originally a photograph and what was AI-generated. The idea of photography representing some kind of truth has long been questioned, whether it’s the suggested staging of Roger Fenton’s 1855 Crimean War photograph The Valley of the Shadow of Death or Instagram influencers editing swimsuit selfies to push an unrealistic body image.

Prompt: ‘A version of Buckingham Palace in a brutalist architectural style.’

If all photographs in any given image library have in some way been manipulated by hand, Photoshop, or filter, it’s not explained what boundaries have been set for what constitutes a non-AI image, let alone how it may be proven. DALL-E has already introduced ‘outpainting’, whereby AI can be used to extend an image beyond its existing edges—it has already imagined the cluttered kitchen just out of frame in Vermeer’s Girl With a Pearl Earring—and to change focus and depth of field on existing photographs, so such AI bans will likely prove impossible to enforce.

The culture sector is already suffering under the Tories. Despite the sector contributing over £32 billion to the economy in 2018, our government spent longer discussing fish, of which the UK caught £1 billion worth in the same year. Might AI be a threat to the sector beyond how a work of art is appreciated or publicly perceived? Illustrators may be at risk, reliant as many are on creating stock images for magazines and newspapers—publications that can now generate a bespoke image in any style at the drop of a hat to illustrate articles. In an industry in which top celebrities—think Rankin, Damien Hirst, or Taylor Swift—may earn handsomely, the cash doesn’t trickle down to freelancers and invisible labourers. Technologies which remove the need for workers—from special effects artists to animators, illustrators to video game coders—are a huge threat not only to the sector but also to the aura of a human hand at the root of one of the oldest human endeavours.

This isn’t restricted to the visual arts. Equity found that 65% of actors and performing arts workers are concerned about AI taking away employment opportunities, with 93% of those working in audio fearful of AI replacing their skills in production, editing, engineering, and live events. Voice artists share concerns that their voices may be recorded and then used, without their knowledge and without them being present, in future productions—adding existential identity theft to the concerns over loss of professional work.

During the early 1800s, hand-loom workers who had lost income to new loom technology called for a tax ‘on machinery’, with one Leeds worker stating in 1835: ‘Their labour has been taken from them by the power-loom; their bread is taxed, their malt is taxed, their sugar is taxed. But the power-loom is not taxed.’ If one loom could see a factory owner reduce his workforce by ten humans, that was both a huge saving for his pocket and a huge, unsustainable financial loss to broader society. Even Bill Gates has come out in support of a robot tax, and with the rise of AI there will no doubt be calls from the Left and creative unions for strategies to compensate those whose work sits in online datasets, being mined to create the very cultural artefacts intended to replace their labour.

Musical and visual artist Holly Herndon has recently launched Spawning, a project to develop tools for artists to own and manage their AI training data, including a search engine through which any creative can see if their original artwork is included in any datasets. The name Spawning is a new coinage, ‘created to describe the act of creating entirely new art with an AI trained on older art,’ and Herndon hopes to develop a consensual system of referencing and making based on a commons approach. It is likely, however, that regulation will be the only real counter to tech giants extracting value from the history of art and individuals’ work. A 2022 UK government report sets out some of the financial implications of businesses adopting AI, but doesn’t drill down into the culture sector specifically, and an Office for Artificial Intelligence has been formed. However, nobody expects the current minister in charge of culture, Michelle Donelan—the twelfth in a decade—to think proactively about the implications for an industry in which she has previously shown no interest.

Benjamin set out how reproducible art may become ‘useful for the formulation of revolutionary demands in the politics of art’, and so now is the time to start imagining how AI technologies can remain in service to the cultural sector, its workers, and progressive ideas, rather than become a tool for sweating value from others’ historic or ongoing work. There can surely be a system in which AI and humans work together towards a more equitable and democratic culture—and now that AI is being used to shape future economics, perhaps we need to intervene and create that system sooner rather than later, before we become the tools of AI.