Titles created by a machine-learning algorithm. Images created by me.


All images from the series Computers Can’t Jump, 2019–present

In the mid-20th century, the television was technology’s novel apparatus, a possible precursor to a new and exciting world; it was followed by the computer. Organised around various forms of language, these machines help us understand, label, classify and communicate with the world around us. Today, in the era of Artificial Intelligence (AI), there is a fear that machines will become so intelligent that they will operate independently, with their own consciousness.

How can present-day machine intelligence influence artistic production? What kinds of images can evolve when we play with technology analogous to the eye (the camera) and the brain? To examine these questions, I created a Recurrent Neural Network (RNN), a machine-learning algorithm. RNNs evolved from neural networks, which were inspired by the interconnected neurons in animal brains. Contemporary RNNs improve automatically through experience and do not require a programmer’s constant supervision.

RNNs can generate their own linguistic outputs from datasets. The BBC offers one such dataset: more than 16,000 sound effects available online for public use. Each sound effect is given a prosaic, descriptive title. ‘Shouts of encouragement at a wrestling match’, ‘Large bird taking off, three attempts’, and ‘Electric monotony’ are a few examples. This dataset provides me with the raw material to play with how image and language can signify each other under the influence of a neural-like AI system.

I feed the RNN the BBC’s descriptive text. It trawls through the information and, by learning patterns within the text, is able to create its own outputs with surprising results. ‘River pig loading’, ‘Applause from piece of space’ and ‘Occasional village bra’ are a handful of its creations. I then playfully create images that visually articulate an interpretation of each output’s linguistic meaning. In other words, I move from sound, to text, to AI, to text, to image. By re-appropriating machine-learning programs to generate titles of artworks and creating new visual languages from their output, the machine becomes an active collaborator and playmate.
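The statement does not include the artist’s code, but the core step, learning character patterns from a corpus of titles and sampling new ones, can be sketched. The project uses an RNN; as a minimal, self-contained stand-in, the toy below uses a character-level Markov chain, which illustrates the same generate-from-learned-patterns idea without a training loop. The example titles are taken from the text above; everything else (function names, the context length `ORDER`) is my own illustrative choice, not part of the project.

```python
import random
from collections import defaultdict

# Three titles quoted from the BBC dataset in the text above;
# the real corpus has 16,000+ entries.
titles = [
    "Shouts of encouragement at a wrestling match",
    "Large bird taking off, three attempts",
    "Electric monotony",
]

ORDER = 3  # how many preceding characters form the context


def build_model(corpus, order=ORDER):
    """Map each `order`-character context to the characters seen after it."""
    model = defaultdict(list)
    for text in corpus:
        padded = "^" * order + text + "$"  # ^ marks start, $ marks end
        for i in range(len(padded) - order):
            context = padded[i:i + order]
            model[context].append(padded[i + order])
    return model


def generate(model, order=ORDER, max_len=60, seed=0):
    """Sample a new title character by character from learned contexts."""
    rng = random.Random(seed)
    context = "^" * order
    out = []
    for _ in range(max_len):
        choices = model.get(context)
        if not choices:
            break
        ch = rng.choice(choices)
        if ch == "$":  # model chose to end the title
            break
        out.append(ch)
        context = context[1:] + ch
    return "".join(out)


model = build_model(titles)
print(generate(model, seed=1))
```

With so small a corpus the output mostly recombines fragments of the three source titles; an RNN trained on the full dataset generalises far more freely, which is where outputs like ‘Occasional village bra’ come from.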