CI Portfolio: Bibliography

Artaud, A. (2013). The Theatre and Its Double. London: Alma Books. Available from: ProQuest Ebook Central. [Last accessed 14 May 2018].

Auslander, P. (2002). Live from cyberspace: or, I was sitting at my computer this guy appeared he thought I was a bot. PAJ: A Journal of Performance and Art, (70), pp. 16–21.

Bourton, L. (2017). “It’s a psychological overload”: Weirdcore on creating Aphex Twin’s live visuals. [Online] Available at: https://www.itsnicethat.com/features/weirdcore-aphex-twin-field-day-050617-miscellaneous [Last accessed 14 May 2018].

Bridle, J. (2013). The New Aesthetic and its Politics. [Online] Available at: http://booktwo.org/notebook/new-aesthetic-politics/ [Last accessed 12 May 2018].

Dixon, S. (2007). Digital Performance: A History of New Media in Theater, Dance, Performance Art, and Installation. Cambridge, MA; London: MIT Press.

Moran, J. (2010). Interdisciplinarity – The New Critical Idiom. London/New York: Routledge.

Reed, M. S. (2016). The Research Impact Handbook. Aberdeenshire: Fast Track Impact.


CI Portfolio: Future Work/ Action Planning

Where do we go from here?

We have many hopes and ambitions for the future of this project. Before sharing the work any further, Kobi and I are really looking to develop the practical element of the project. As it currently stands we have a short section which delivers political speeches stitched together from many different world leaders and many different viewpoints. We want to add the views of the ‘everyman’ to the mix. We are currently looking at using the Reddit and Facebook APIs to gather viewpoints from other internet users. Reddit is particularly useful: given its function as a discussion board, views on a wide range of topics, including those we are already looking at in the performance (LGBT rights, global warming etc.), are readily available. The viewpoints collected from these sites will provide further depth to the debates. The material will no longer be made up of just political leaders’ views, but also the views of ‘normal citizens’, which I’m hoping the audience will be able to relate to more.
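As a first experiment in this direction, here is a minimal sketch of how we might harvest comments through Reddit’s public JSON endpoints. The topic, limits and helper name are illustrative assumptions rather than our finished tooling, and a real version would need rate limiting and error handling:

```python
# Hypothetical sketch: gather 'everyman' viewpoints via Reddit's public JSON API.
import requests

HEADERS = {"User-Agent": "ci-portfolio-research/0.1"}  # Reddit asks for a custom user agent

def fetch_comments(topic, threads=5, limit=25):
    """Search Reddit for threads on a topic and return their comment bodies."""
    search = requests.get(
        "https://www.reddit.com/search.json",
        params={"q": topic, "limit": threads, "sort": "relevance"},
        headers=HEADERS,
    ).json()
    comments = []
    for post in search["data"]["children"]:
        thread = requests.get(
            f"https://www.reddit.com{post['data']['permalink']}.json",
            params={"limit": limit},
            headers=HEADERS,
        ).json()
        # thread[0] is the post itself; thread[1] is the comment listing
        for child in thread[1]["data"]["children"]:
            body = child["data"].get("body")  # 'more comments' stubs have no body
            if body:
                comments.append(body)
    return comments

if __name__ == "__main__":
    for comment in fetch_comments("climate change")[:5]:
        print(comment[:120])
```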


As well as expanding our content, we are looking to develop the technology further. At a base level, the program works to jumble up text, but we want to push it further. Using the above sites (Reddit and Facebook), we want to populate the ‘input text’ automatically, which will avoid the clunky copy/paste aspect. We do, however, want this text population to be visible, in such a way as to flaunt the liveness of the Markov chain. We are also looking to develop the design of the program, as it currently looks pretty basic.
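As a toy illustration of what ‘visible’ population might mean (a terminal stand-in, not the real interface), the harvested text could be rendered character by character, so the audience watches the input arrive:

```python
# Toy sketch: 'type' text into view so populating the input box reads as live.
import sys
import time

def type_out(text, delay=0.03):
    """Print text one character at a time, like a live feed arriving."""
    for ch in text:
        sys.stdout.write(ch)
        sys.stdout.flush()
        time.sleep(delay)
    print()

type_out("We must act on climate change now, believe me...")  # illustrative line
```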


Funding opportunities look to be limited, and I feel a lot more work is required if we are to apply for Arts Council funding. We are lucky to have some in-kind support from the University of Salford (who have agreed to provide rehearsal space, subject to availability and on condition that a full risk assessment is provided). Kobi will be using resources from Lancaster University to help develop the technology; the software and equipment provided to him as an undergraduate are more than sufficient for our needs. The UK Science and Technology Facilities Council is probably the most likely source of funding for us. Given our use of computational technology, and that any performance would be a public-facing event, we are potentially eligible to apply for a ‘Public Engagement Sparks Award’ of up to £15,000. Previous projects funded by this award have included art exhibitions, film/photography projects and even theatrical endeavours.

Providing we get these developments done, we will be in a position to share some work. The most appropriate local event to share this work at would seem to be the Manchester Science Festival. The festival describes itself as a place for ‘innovative, surprising and meaningful experiences where people of all ages can ignite their curiosity in science’ (Manchester Science Festival, 2018). The festival has a wide and varied programme, from lectures and seminars to theatrical performances and musical gigs. Although we are too late for this year, next year is a possibility, and one of our guest lecturers has a hand in organising the festival, so contact has already been made for future networking opportunities.

Creative Producer Rob Young told me to ‘aim big, don’t sell yourself short, and act a little bit like a conman’ when it comes to marketing this performance. By this he meant that we shouldn’t be afraid to contact mainstream media sources. Under Rob’s advice, I’ll be getting in touch with WIRED magazine. WIRED has a culture section which focuses on projects similar to this, although it does seem to lack coverage of live performance, so this may be quite a unique project for them. Similarly, I’ll be getting in touch with BBC Focus magazine. While these publications may seem ambitious, Rob assured me it was worth trying. Many of the theatre publications may see this as just another experimental performance, but Rob pointed out that WIRED and BBC Focus receive far fewer submissions about the arts, so this project may just stand out for them.


In his lecture with us, Umran Ali taught us that interdisciplinary projects have more to offer than just the work they produce, and that process can be just as valuable. Kobi and I want to share our process, and hear about other approaches to collaborations of this nature. As such, we have confirmed attendance at the ‘Performance and Digital Technology Gathering’ in August this year. The gathering consists of performances and discussions from theatre makers, computational artists and the like. As it’s the first year the gathering has run, we are hoping to be able to present on our process and hear from others about how they work together. The event will also be discussing funding options for projects like this, so I’m hoping to become more informed about potential funding sources.

Action Points:


  • Confirm and book rehearsal space for further practical work.
  • Amend the technology to include the Reddit/Facebook APIs.
  • Complete a rehearsal risk assessment to enable in-kind support from the University of Salford.
  • Document future rehearsals and put together a press release for WIRED UK and BBC Focus magazine.
  • Email Andy Miah to arrange a meeting to discuss the Manchester Science Festival, and routes to entry for 2019.
  • Document our process in a more formal way (e.g. flow charts, a presentation or a written document).
  • Attend the Performance and Digital Technology Gathering in Gateshead on 3rd/4th August 2018.
  • Make a formal enquiry with the Science and Technology Facilities Council to seek advice on applying for a Public Engagement Sparks Award.

CI Portfolio: Process Documentation

As you’ve read in the context section, I already had some preconceptions before entering this collaboration. Kobi and I initially spoke over email, which was great for establishing our respective interests in the collaboration.

[Screenshot: our initial email exchange]

Under the advice of Richard Talbot, and with some resources from Mark Reed (2016), I decided to create an initial engagement plan to give some direction to our first meeting (click here to see it). One of the main outputs of this initial session was a mind map, in which we wrote down some of our initial enquiries. Although this seems a fairly minor session to highlight, it was actually rather significant: it was here that we both realised that this project had no expected ‘practical output’. From this session on, we decided to treat this collaboration as a Research and Development project.

[Screenshot: mind map from our first session]

For reasons I’ve explained in my context section, we decided to focus on working with text. Our early experimentation with text from Hamlet can be seen here. As I say in the post, the reimagined Hamlet speech was amusing, but it wasn’t as captivating as I thought it would be. Simply re-performing Hamlet wasn’t enough; the text could communicate more. So we decided to use a completely different kind of text, and input a Donald Trump speech into the chain. The results seemed a little more amusing, and this experiment would actually end up forming the basis of the piece, not that we knew it at the time.

This session also informed some of our design philosophy. The green background and yellow text were not ideal, being difficult to read, so Kobi took to redesigning the program (see the finished version here). We also realised that the Markov chain was only operational when Kobi was around. This is when we decided to use GitHub, a version-control and code-hosting platform. It meant we could upload the code and iterations of the program, and it was also a good place for me to upload scripts. It was one of our most valuable tools, as it enabled us to work on the same files remotely.
[Screenshot: our GitHub repository]

While the program was being redesigned, we held another workshop where we played around with tweets and other types of text. This workshop had a much more political theme to it (the workshop plan is here). We were playing with a question/answer format, and in this session we had a eureka moment: instead of putting one politician’s speech into the Markov chain, why not put in several? We planned another workshop (here) and played with this idea. The results were fantastic: a mash-up of different viewpoints packaged in a single speech. You can see some results on ‘Global Warming’ below (made up of speeches from Merkel, Trump and May).

[Screenshot: generated ‘Global Warming’ responses]

After this workshop, we felt we really had some material worth working with. We started filming some early examples of performance, and I took to planning some scenographic choices which seemed to fit with our digital and political aesthetic (see here).

After this, our time in Research and Development came to an end. We had the start of a script, the start of a form to play with, a much more refined computer program than we started with, and even a basic idea for a set. The R&D may be over, but you can see our future development ideas in my next post.

DISCLAIMER: Below I’ve left a link to a video which was shown in the lecture demo for this project. It’s an example of some practical work (in a very early and rough form). In the interest of avoiding self-plagiarism, I have no intention of being assessed on this video again, and do not want to ‘officially’ include it in this post as part of my ‘process documentation’. I just thought it would be useful for anyone reading to see an example in practice, which may help to illustrate some of the above:

CI Portfolio: Context

Collaborations between computer science and the arts are nothing new. I can’t discuss the context of this project without talking about my motivations, and those of my key collaborator – Kobi Hartley, a Computer Science student at Lancaster University. For a while now I’ve been a huge fan of generative music – music generated using algorithms. The success and cult followings of artists such as Aphex Twin and Autechre have fascinated me, and it seems well accepted that we can input ‘code’ into a machine and output ‘music’. In the summer of 2017 I had the opportunity to see Aphex Twin perform live – certainly a rare event. The music was accompanied by no fewer than 20 video projections, in the largest collection of screens I’ve ever seen. These projections spanned the entire length and height of an aircraft hangar; as you can see from the video below, it was quite spectacular.


The visuals for this performance were provided by Weirdcore, a London-based video artist who often collaborates with experimental musicians. Interestingly, just like the music, the visuals for this show are created and manipulated live, using a computer along with several cameras and projectors. In a rare interview, Weirdcore stated ‘It’s all live generated stuff, lots of it is footage from the crowd, fed into my computer and manipulated in real time, with some 3D generated stuff too’ (Weirdcore in Bourton, 2017). Perhaps even more pertinent to my project, Weirdcore goes on to say:


When it works, it’s fantastic but if there is one thing that doesn’t go quite right, it will affect the rest of the show. It’s a bit like the difference between theatre and cinema. With theatre there’s all these things that could go wrong on stage, but when it works it’s magical. Whereas with cinema, you’re safe, you know exactly what you’re going to get.
Weirdcore in Bourton (2017).

This aspect of risk and liveness struck a chord with me as a theatre maker. The ephemeral nature of performance is what sets it apart from many other art forms. Unlike a photograph, sculpture or painting, performance exists temporally, and the phenomenological experience one goes through during a show is one of its most rewarding aspects. The collaboration between Aphex Twin and Weirdcore really highlights the live nature of computer coding. The video above does not fully illustrate the experience of being in that hangar; it was unforgettable. Being submerged in darkness, only moments later to be bombarded with 20 flashing images on such a huge scale (accompanied by music at deafening volume) provided an exhilarating sensory experience – and reminded me of Artaud’s Theatre of Cruelty (2013). The video content contained live footage of the audience, their faces then morphed into demonic images, pop culture characters, and even into Aphex himself. Even if it were to be performed again, it would never be exactly the same, and this is, in part, due to its live nature. The collaboration between these two artists illustrates just how computer coding can be used to create ‘Art’ in varying forms. While there is an abundance of examples of computer-generated music, video and visual art, I suddenly realised that I was struggling to give an example of computer-generated theatre. This led Kobi and me to one of our main (and very broad) enquiries: ‘What does computer generated performance look like?’

When researching for this project I came across many examples of computer programmers and performance artists working together. To name a few pertinent examples: Blast Theory (UK), CREW (NL/UK), Prototype (UK), Laurie Anderson (USA). This list is by no means exhaustive – these are a fraction of the artists currently working with technology – but it does serve as a good body of practice within which Kobi’s and my collaboration could sit. Perhaps a more relevant and recent example of live coding (and, arguably, ‘computer generated performance’) can be seen in the recent work of Medea Electronique. Their piece Echo and Narcissus is a digital opera, with a libretto generated through live coding and sung by a live performer. This intersection of live coding and human delivery resonates with the motivations and context for the collaboration between Kobi and myself.


Echo and Narcissus from Medea Electronique on Vimeo: https://vimeo.com/266724941

The idea of using computers to create a live performance is difficult for some, as it challenges the very idea and definition of ‘live’. Early on, when speaking to others about this project, many people would ask “but how can it be live, if it’s made by computers?”. It was these questions, and many similar ones, which led me to the work of Philip Auslander. Auslander (2002) wrote about chatbots in performance, arguing that their presence changed our perceptions of what is defined as live. As Auslander states:


chatterbots are not playback devices. Whereas audio and video players allow us to access performances carried out by other entities (i.e., the human beings on the recordings) at an earlier time, chatterbots are themselves performing entities that construct their performances at the same time as we witness them. (2002, p.20)

According to Auslander, chatbots do perform live, but their lack of corporeality means they are not alive in the same sense as a human performer. I wanted to exploit and emphasise these different forms of ‘live’ by having both a live performer and a live text generator on stage at the same time. I wanted the audience to see the liveness of the computer programming – which is why we made sure the process of copy/paste was shown live. Above all, it was this article by Auslander that made me want to work with text. Since Auslander wrote his article there has been an incredible increase in the use of smartphones and tablets. With an abundance of online messaging services, we are now much more accustomed to reading text from a screen, albeit text usually written from one person to another. With this in mind, combined with much more sophisticated AI technology (making chatbots seem ever more realistic), it is potentially more difficult to distinguish between a live human and a live machine when looking at text. It was this grey area of liveness, and this convergence of human and machine, that I wanted to explore in this collaboration.

In my lecture demo I provided a quote from Steve Dixon (2007) about the embryonic level of technology in the arts. Of course, things have developed since 2007, but many still hold a similar view. Artist and writer James Bridle identifies ‘a weak technological literacy in the arts’ and goes on to argue that this is ‘representative of a far wider critical and popular failure to engage fully with technology in its construction, operation and affect’ (Bridle, 2013). Indeed, Bridle’s work as an artist was particularly influential for this project, but his views on technological illiteracy are what confirmed that this project had to be an interdisciplinary collaboration (with someone more technically literate than myself). Even since Bridle’s article in 2013, there have been significant developments in the presence of technology in the arts; I hope the examples I’ve mentioned in this post illustrate some of these, and start to argue the antithesis to Bridle’s position. However, I don’t wholeheartedly disagree with him: Bridle’s argument isn’t as pessimistic as it appears, and he actually works to promote and celebrate the use of technology in the arts. Perhaps most pertinent is where Bridle (2013) mentions the lack of ‘construction’ and ‘operation’ of technology in the arts. It was this argument which formed the basis for this collaboration. I wanted to be sure that this wasn’t simply an artist asking for the services of a computer programmer. It was set to be a collaboration, where both parties would input into the creation of the work, and in turn would learn about each other’s discipline.

For the most part, Kobi’s motivations, and the context in which he agreed to this collaboration, are very similar to my own – he was even at the Aphex Twin concert. However, Kobi’s participation in this project is largely a response to attitudes towards interdisciplinary work. Moran (2010) discusses some of the benefits and limitations of interdisciplinary research, and how negative attitudes towards interdisciplinary work have arisen. On his course at Lancaster, there is no creative or artistic module. As you can see from his initial email, Kobi wanted to explore ‘elements of computer science that I don’t get to do within the bounds of my degree’. If we are going to combat this technological illiteracy in the arts (per Bridle, 2013), we have to accept and encourage interdisciplinary collaborations. This project was far from perfect, and it has a long way to go before a practical iteration is ready for public showing. As it currently stands, our program for creating text is clunky and void of fancy design. However, we set out to research and develop, to explore and experiment, and we have learnt an awful lot in the process. As you’ll be able to read in my ‘Context of Future Work’ blog, some of the most important learning points from this project concern our process, and how it can be used for similar collaborations in the future.

Even if the work we’ve created so far is a little on the messy side, there is still huge value in what we have achieved, which is perhaps best summarised in Moran’s conclusion to ‘Interdisciplinarity’:

‘It could be argued that, because they are relatively new and exploratory, interdisciplinary ways of thinking have a tendency to be more disorganized and fragmentary than established forms of knowledge. But if a certain messiness goes with the territory of interdisciplinarity, this is also what makes that territory worth occupying.’ (Moran, 2010, p.180)

Aesthetic Choices

It felt like the right time to address some aesthetic choices. As our rehearsals began to develop into a performance, it was frustrating not knowing what it would look like. So I thought I’d have a look at some basic scenographic decisions.

To provide context, we are using computers to generate text for a live performance. At the moment, we are using speeches and statements from world political leaders on some widely debated topics (Climate Change, LGBT Rights, Nuclear Weapons, Immigration etc.). These political speeches were initially used just to test with, but as the computer began to generate text based on these speeches, they became an interesting enquiry. So we carried on.

So far, our workshops have used these political speeches as a focus, so I thought it made sense to follow this scenographically as well. Initially I thought of a political debate/press conference; however, the political vibe alone wasn’t enough. The presence of technology is something I want to acknowledge through the set as well, so I think a ‘tech launch’ press conference is another good influence for the set. Think of when Apple launch a new iPhone and hold a press conference: that’s what I’m going for. After trying out a few ideas, the provisional set ideas are below:

[Provisional set sketches: a press-conference style layout. X = performer]

As you can see, lighting choices are very important. Technology firms seem to favour the coolest colours on the colour spectrum; it comes as no surprise that when we google the word ‘digital’ we are presented with steel and cool blue colours. I’d like to use this colour scheme to light our piece. I also think microphones should be used, not least because they are always used in political debates and at press conferences/product launches. Microphones also add an extra layer of mediation to our voices: just as the text is put through a computer before the performers speak it, the voices of the performers are going to travel through the amp.

These are just ideas at the moment, and will no doubt change along with the process.

Technical Wizardry

While my practical explorations are still taking form, Kobi has been busy creating a Markov chain (that’s our text generator). Not only has he created one, he has also made it accessible. This means I can use it anywhere, and he doesn’t have to be present. I’m hoping this is going to speed up the practical side of things!

Here’s what it looks like:
[Screenshot: the Markov chain interface]

The first option selects the ‘N Grams’ value. This determines how random/jumbled the output will be: 1 is almost entirely nonsensical, while 5 stays very similar to the original input.

‘Number of sentences’ determines how many sentences will be output.

The large box is where we put our ‘input text’.
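For anyone curious about what’s happening under the hood, here’s a minimal sketch of the kind of word-level Markov chain involved. Kobi’s actual implementation may well differ, and the filename is illustrative:

```python
# Sketch of a word-level Markov chain text generator (n = the 'N Grams' setting).
import random
from collections import defaultdict

def build_chain(text, n=2):
    """Map each n-word state to the words that follow it in the input text."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - n):
        chain[tuple(words[i:i + n])].append(words[i + n])
    return chain

def generate(chain, n=2, length=50):
    """Walk the chain from a random starting state, choosing followers by frequency."""
    state = random.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length - n):
        followers = chain.get(tuple(out[-n:]))
        if not followers:
            break  # dead end: this state never recurs in the input
        out.append(random.choice(followers))
    return " ".join(out)

speeches = open("climate_speeches.txt").read()  # e.g. merged Merkel/May/Trump text
print(generate(build_chain(speeches, n=2), n=2))
```

A low n makes each next-word choice depend on very little context, hence the jumbled output; a high n reproduces long runs of the original.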

So let’s have a look at some outputs. For this example, I picked the topic of climate change. I collected online responses from speeches and tweets by Merkel, May and Trump; the idea is that the Markov chain will merge all of these together. Here are some results:
[Screenshots: three generated responses on climate change]

As you can see, these responses are contradictory, and amusing. As expected, the sentences are slightly difficult to read, with some odd punctuation. I quite like this aspect of imperfection; somehow it makes the machine seem more human. I also think the contradictory nature of these responses is fairly amusing. The fickle responses, which seem to agree and disagree with both sides of the argument, are my favourite; I think they really bring to light the ‘bullshit’ a lot of people feel from their politicians. This contradiction is also pretty representative of our media, particularly newspapers, which often back many different sides of an argument but seldom offer us a resolution.

It’s getting difficult…

So we encounter our first problem with technology. Algorithms are very useful, as are machines in general, if you tell them exactly what you want them to do. The Markov chain generates text exactly as instructed, based on probability – in a way I am still learning about.

This week we took responses from world leaders (Trump, May, Merkel etc.) and used the Markov chain to merge them together. We looked at these world leaders’ ideas, and their responses to questions they’ve been asked throughout their careers. I wanted to get a sense of what the world thinks about something.


The idea was to ask a performer a question on a topic, let’s say climate change. We’d look at all the material we’d gathered from the world leaders on climate change, put it through the Markov chain, and get the performer to read the responses. The result was a conflicted, contradictory, nonsensical response, though one which did adhere to some basic rules of language. The problem was, it just wasn’t that interesting. After a while the nonsensical response became boring and tired. Even when we tried to change it up and move on to another topic, the nonsensical response was just very flat.

This could be for many reasons. Just reading text in performance, without animation, without movement, without set design, can be boring. Perhaps it’s now my job to add movement, to think about what could complement this text in performance, but I have a feeling it may be worth using our clever algorithm for something else.

What is also apparent here is the difficulty of collaboration in general. Kobi has a great knowledge of algorithms; I do not. So when I say ‘let’s use the computer to generate text’, there are infinite ways of doing so, and exactly what sort of text we want determines which algorithm, language and format we use. This is a steep learning curve, and ultimately the technology will be more responsive the more I know about it. Between Kobi and myself we are just about managing, so we’ll keep working on it and see what we come up with.

I’ll post some results from the Markov chain in my next post, while I continue to work through some more interesting ways of using this technology in performance (or at least try to).

Next Workshop Plan

Following on from last week, we’ll be looking at how we can use questions and answers, in the form of a debate/press conference. We’ll be using a live performer (me) and the computer-generated text (thanks to Kobi).

Scenario 1 – Computer generated responses

Pick a few frequently debated topics: anything that lots of people have responded to. These could include Climate Change, Education policy, LGBTQ rights, Brexit, Immigration, the NHS etc.

Collect responses from politicians, TV debates and newspaper articles. These should be archived in folders relating to the topic, e.g. a folder for climate change responses.


Collect questions from newsreaders, live TV debates and radio debates, and save these in the same folder.


Merge the responses together using the Markov chain. For now, only merge responses on the same topic. For example, if the topic was climate change, you should have gathered responses on climate change from many people: Theresa May, Donald Trump, local councils, the Green Party, UKIP etc. These all get merged into a single response.

Ask the performer the questions.

The performer is to read the merged responses sight-unseen, for the first time in practice. The idea of randomness and chance is important. The performer should read with as much conviction and sincerity as possible, even if the response is nonsensical.

Development ideas: Play with the arrangement of the responses, the speed and delivery. Could it start slowly and get faster? Do the questions get more difficult, do the responses get more nonsensical? Does the performer get more emotional about it?

Scenario 2 – Computer generated questions

Use the questions collected in Scenario 1. Run these through the Markov chain, again only merging questions relating to the same topic.
The performer is to improvise and answer these questions live, in the moment. These should be the performer’s own responses, not the computer-generated responses from before. This can be done with many performers, or even audience members.

Other areas for exploration

Reddit AMA – using questions and answers on the controversial topics explored within Reddit’s AMA (Ask Me Anything) pages. The Reddit API is free, so it is possible to collect this material pretty quickly, and this could also be an area where we use bots in performance?

Fake Question Time

DISCLAIMER: I had recorded our rehearsal, with plans to include some footage on this blog. Sadly I neglected to switch the camera on, so this blog post will have to serve as documentation for this week.

So this weekend marked our first trip into a studio space to experiment with some of the practical ideas from last week. We were specifically looking at tweets and Twitter bots. Some of the questions we explored:

What happens when we put someone’s tweets through the algorithm?
Can we recreate a person’s tweets?
What can we do with a Twitter bot?
Can we merge two people’s tweets?
How can we use this text in performance?

We started to feed people’s tweets through the algorithm, and did get some funny, nonsensical results. The most amusing came when we entered Donald Trump’s tweets into the algorithm along with those of British comedian Adam Hess. The algorithm merged these two very different tweet styles together. Even just reading these tweets was hilarious, but we felt it lacked something by way of performance. Simply reading the tweets out didn’t seem enough!

It was only when we started using bots that the process got interesting. If you’re unsure what a bot is, it’s short for ‘web robot’. Many types of bot exist, but generally the term describes a program which runs automated tasks over the internet. So a Twitter bot can autonomously tweet, retweet, like, follow and direct message – without human intervention. Philip Auslander (2002) writes about bots in performance and how they present a new type of liveness.
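To make that concrete, here’s a hypothetical sketch of a Twitter bot posting our generated text via the tweepy library. The credentials are placeholders and the generated line is illustrative; this is not the exact setup we used:

```python
# Hypothetical Twitter bot: posts Markov-chain output with no human intervention.
import tweepy

# Placeholder credentials; a real bot needs keys from a registered Twitter app.
auth = tweepy.OAuthHandler("API_KEY", "API_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

generated_tweet = "I have the best words, strong and stable words..."  # from the chain
api.update_status(generated_tweet)  # the bot tweets autonomously
```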

We began to think about what the bots could do, and thought about getting them to ask the performer questions, in a similar way to Forced Entertainment’s Quizoola (click here for more on Quizoola). The popular site Reddit often hosts ‘Ask Me Anything’ sessions (or AMAs), where a user ‘hosts’ an AMA and other users from around the world can log on and ask questions. The format is very popular and has included celebrities and even presidents! What if we did a live AMA, but instead of people asking the questions, it was bots?

[Screenshot: a Reddit AMA page]
Or similarly, what if people did ask the questions, but the answers were written by bots? Sadly our rehearsal time came to an end before we got to try this out, but next week we will be exploring it in more detail. Think of BBC Question Time, where the responses to the questions are a mash-up of previous responses – this is what we’ll be using the Markov chain for next week.

Kobi will be working on an ‘accessible’ format of the Markov chain, to enable me to use it when he isn’t available (currently we can only access the text-generating algorithm on his computer).

Ref:
Auslander, P. (2002). Live from cyberspace: or, I was sitting at my computer this guy appeared he thought I was a bot. PAJ: A Journal of Performance and Art, (70), pp. 16–21.

Practical Experimentation

Creative Interactions – Workshop Ideas

From our mind-mapping session, and from looking at how the algorithm generates text, we came up with the following initial ideas. Over the next 2–3 weeks, we’ll be trying these ideas out with some studio experimentation. The space will be booked at Salford, and we’ll be video recording the sessions for documentation. Each session will last between one and two hours. The idea here is to experiment with some initial ideas: one will be selected to explore in more depth, or something else may arise from these workshops that we choose to focus on.

As both parties are interested in exploring and learning, this is an extremely important part of the process. We will be documenting the session, in case either of us want to revisit the ideas in the future.

In all cases we are experimenting with text. Text can be read live, recorded, projected, sung or signed. I’ll be trying to explore the different ways of delivering the text in each of the examples below. I’ll be using some of Boal’s Newspaper Theatre techniques (click here for a summary of Boal’s techniques) to explore this text in workshop.

  1. Twitter Bots
    Can we use Twitter bots in performance? A few ideas to look at:
    – Using the algorithm to predict someone’s future tweets: we could load somebody’s tweets (a celebrity’s or an audience member’s?) through the algorithm and see what it generates. These could be read aloud, or we could get Twitter bots to tweet them out, even at each other. This raises questions about privacy, dataveillance and what we put online. It also gives us some idea of how accurate the algorithm is at posing as a human.
  2. Conversations with our lost loved ones
    This may prove too emotionally difficult. The idea is to feed the algorithm my late father’s emails, texts and any other digital text I have from him. After losing my father last year, I inherited his computer collection and his calendar appointments (he still has appointments I get reminders for today). Could I feed his emails etc. into the algorithm, and receive a posthumous email from him? Is this going to be uncanny, or just plain ridiculous? There’s only one way to find out.
  3. Rewriting Shakespeare (and other famed playwrights)
    The idea here is to feed existing play texts into the computer, to see if it can recreate something that could be considered just as celebrated as the original. Heightened language is often difficult for most people to understand anyway, so I imagine the nonsensical aspect of this may not be as strong. We could also try contemporary texts: Tennessee Williams, Arthur Miller, Martin Crimp, Simon Stephens. The postdramatic playwrights here could produce interesting results, as they already write in non-traditional formats. The delivery of this will be particularly interesting: can the computer read the text?
  4. Politics Formulas
    It’s often thought that, regardless of political position, politicians’ speeches follow a formula. Can the algorithm identify these formulas and produce some inspiring speeches? Does a computer-generated speech actually differ from a democratic one? What about past political leaders, the controversial ones (Hitler, Churchill, JFK, Stalin)?