Collaborations between computer science and the arts are nothing new. I can’t seem to discuss the context of this project without talking about my motivations, and those of my key collaborator – Kobi Hartley, a Computer Science student at Lancaster University. For a while now I’ve been a huge fan of generative music – music which is generated using algorithms. The success and cult followings of artists such as Aphex Twin and Autechre have fascinated me, and it seems well accepted that we can input ‘code’ into a machine and output ‘music’. In the summer of 2017 I had the opportunity to see Aphex Twin perform live – certainly a rare event. The music was accompanied by no less than 20 video projections, in the largest collection of screens I’ve ever seen. These projections spanned the entire length and height of an aircraft hangar; as you can see from the video below, it was quite spectacular.
The visuals for this performance were provided by Weirdcore, a London-based video artist who often collaborates with experimental musicians. Interestingly, just like the music, the visuals for this show are created and manipulated live, using a computer along with several cameras and projectors. In a rare interview, Weirdcore stated ‘It’s all live generated stuff, lots of it is footage from the crowd, fed into my computer and manipulated in real time, with some 3D generated stuff too’ (Weirdcore in Bourton, 2017). Perhaps even more pertinent to my project, Weirdcore goes on to say:
When it works, it’s fantastic but if there is one thing that doesn’t go quite right, it will affect the rest of the show. It’s a bit like the difference between theatre and cinema. With theatre there’s all these things that could go wrong on stage, but when it works it’s magical. Whereas with cinema, you’re safe, you know exactly what you’re going to get.
Weirdcore in Bourton (2017).
This aspect of risk and liveness struck a chord with me as a theatre maker. The ephemeral nature of performance is what sets it apart from many other art forms. Unlike a photograph, sculpture or painting, performance exists temporally, and the phenomenological experience one goes through during a show is one of the most rewarding aspects of performance. The collaboration between Aphex Twin and Weirdcore really highlights the live nature of computer coding. The video above does not fully convey the experience of being in that hangar – it was unforgettable. Being submerged in darkness, only to be bombarded moments later with 20 flashing images on such a huge scale (accompanied by music at deafening volume) provided an exhilarating sensory experience – and reminded me of Artaud’s Theatre of Cruelty (2013). The video content contained live footage of the audience, their faces morphed into demonic images, pop culture characters, and even into Aphex himself. Even if it were to be performed again, it would never be exactly the same, and this is, in part, due to its live nature. The collaboration between these two artists illustrates just how computer coding can be used to create ‘Art’ in varying forms. While there is an abundance of examples of computer generated music, video and visual art, I suddenly realised that I was struggling to give an example of computer generated theatre. This led Kobi and me to one of our main (and very broad) enquiries: ‘What does computer generated performance look like?’
When researching for this project I came across many examples of computer programmers and performance artists working together. To name a few pertinent examples: Blast Theory (UK), CREW (NL/UK), Prototype (UK) and Laurie Anderson (USA). This list is by no means exhaustive – these are a fraction of the artists currently working with technology – but it does serve as a good body of practice within which Kobi’s and my collaboration could sit. Perhaps a more relevant and recent example of live coding (and arguably, ‘computer generated performance’) can be seen in the recent work of Medea Electronique. Their recent piece Echo and Narcissus is a digital opera, with a libretto generated through live coding and sung by a live performer. This intersection of live coding with human delivery resonates with the motivations and context for the collaboration between Kobi and myself.
Echo and Narcissus from Medea Electronique (https://vimeo.com/medeaelectronique) on Vimeo.
The idea of using computers to create a live performance is difficult for some, as it challenges the very idea and definition of ‘live’. Early on, when speaking to others about this project, many people would say “but how can it be live, if it’s made by computers?”. It was these questions, and many similar ones, which led me to the work of Philip Auslander. Auslander (2002) wrote about chatbots in performance, arguing that their presence changed our perceptions of what is defined as live. As Auslander states:
chatterbots are not playback devices. Whereas audio and video players allow us to access performances carried out by other entities (i.e., the human beings on the recordings) at an earlier time, chatterbots are themselves performing entities that construct their performances at the same time as we witness them. (2002, p.20)
According to Auslander, chatbots do perform live, but their lack of corporeality means they are not alive in the same sense as a human performer. I wanted to exploit and emphasise these different forms of ‘live’ by having both a live performer and a live text generator on stage at the same time. I wanted the audience to see the liveness of the computer programming – which is why we made sure the process of copy/paste was shown live. Above all, it was this article by Auslander that made me want to work with text. Since Auslander wrote his article there has been an incredible increase in the use of smartphones and tablets. With an abundance of online messaging services, we are now much more accustomed to reading text from a screen, albeit text usually written from one person to another. With this in mind, combined with much more sophisticated AI technology (making chatbots seem even more realistic), it is potentially more difficult to distinguish between a live human and a live machine when looking at text. It was this grey area of liveness, and this convergence of human and machine, that I wanted to explore in this collaboration.
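To give a concrete (and entirely hypothetical) sense of what a live text generator might involve, here is a minimal sketch using a Markov chain – one common technique for generating new text from an existing corpus. This is not the program Kobi and I built; the function names and the toy corpus are my own illustration of the general idea.

```python
import random

# Build a simple first-order Markov chain: map each word in the corpus
# to the list of words that follow it.
def build_chain(text):
    words = text.split()
    chain = {}
    for current, following in zip(words, words[1:]):
        chain.setdefault(current, []).append(following)
    return chain

# Walk the chain from a starting word, picking each next word at random,
# to generate a new line of text "live".
def generate(chain, start, length=10, seed=None):
    rng = random.Random(seed)
    word = start
    output = [word]
    for _ in range(length - 1):
        options = chain.get(word)
        if not options:
            break  # dead end: no word ever followed this one
        word = rng.choice(options)
        output.append(word)
    return " ".join(output)

corpus = "the stage is live and the screen is live and the text is live"
chain = build_chain(corpus)
print(generate(chain, "the", length=8, seed=1))
```

Because each run draws words at random, every performance of such a generator would be slightly different – which is precisely the kind of liveness discussed above.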
In my lecture demo I provided a quote from Steve Dixon (2007) about the embryonic level of technology in the arts. Of course, things have developed since 2007, but many still hold a similar view. Artist and writer James Bridle argues that there is ‘a weak technological literacy in the arts’ and goes on to argue that this is ‘representative of a far wider critical and popular failure to engage fully with technology in its construction, operation and affect’ (Bridle, 2013). Indeed, Bridle’s work as an artist was particularly influential for this project, but it was his views on technological illiteracy that confirmed this project had to be an interdisciplinary collaboration (with someone more technically literate than myself). Even since Bridle’s article in 2013, there have been significant developments in the presence of technology in the arts; I hope the examples I’ve mentioned in this post illustrate some of these, and start to argue the antithesis to Bridle’s argument. However, I don’t wholeheartedly disagree with him; Bridle’s argument isn’t as pessimistic as it appears, and he actually works to promote and celebrate the use of technology in the arts. Perhaps more pertinent is where Bridle (2013) mentions the lack of ‘construction’ and ‘operation’ of technology in the arts. It was this argument which formed the basis for this collaboration. I wanted to be sure that this wasn’t simply an artist asking for the services of a computer programmer. It was set to be a collaboration, where both parties would input into the creation of the work, and in turn would learn about each other’s discipline.
For the most part, Kobi’s motivations, and the context in which he agreed to this collaboration, are very similar to my own – he was even there at the Aphex Twin concert. However, Kobi’s participation in this project is largely due to attitudes towards interdisciplinary work. Moran (2010) discusses some of the benefits and limitations of interdisciplinary research, and how negative attitudes have arisen towards interdisciplinary work. On his course at Lancaster, there is no creative or artistic module. As you can see from his initial email, Kobi wanted to explore ‘elements of computer science that I don’t get to do within the bounds of my degree’. If we are going to combat the technological illiteracy in the arts that Bridle (2013) describes, we have to accept and encourage interdisciplinary collaborations. This project was far from perfect, and it has a long way to go before a practical iteration is ready for public showing. As it currently stands, our program for creating text is clunky and devoid of fancy design. However, we set out to research and develop, to explore and experiment, and we have learnt an awful lot in the process. As you’ll be able to read in my ‘Context of Future Work’ blog, some of the most important learning points from this project concern our process, and how this can be used for similar collaborations in the future.
Even if the work we’ve created so far is a little on the messy side, there is still huge value in what we have achieved, which is perhaps best summarised in Moran’s conclusion to ‘Interdisciplinarity’:
‘It could be argued that, because they are relatively new and exploratory, interdisciplinary ways of thinking have a tendency to be more disorganized and fragmentary than established forms of knowledge. But if a certain messiness goes with the territory of interdisciplinarity, this is also what makes that territory worth occupying.’ (Moran, 2010, p.180)