The Stay at Home Residency – part 3

From 1st – 29th July I was happy to be selected as an artist in residence for The New Art Gallery Walsall’s Stay at Home Residencies.

In the second blog post I looked at how I approached filming. In this third and final blog post I’ll be detailing my sound making process and sharing the finished film.

The next stage in making this film was working on the sound. As you can hear in a couple of the clips in the previous blog post, the area that I live in is really, really quiet! Everyone in the local area was using the summer to sit outside and bathe in the sunlight. It was very relaxing, for sure, but recordings of the ambient background noise didn’t make for an interesting soundtrack. There was once the sound of a wood chipper, but otherwise it was mostly silent. At times my own music playing was the loudest sound!

Instead I took to making recordings from within the home. This process made me very aware of the variety, and at times lack thereof, of sounds in my home environment. There’s lots of shuffling, tapping, television and dampened thud sounds. With the exception of the television, the place with the most variety of sounds is most definitely the kitchen, and so most of the sounds I used came from there. There are sounds of glass, metal, wood, and water – even from inside the fridge!

If you’ve been following my work for a while you’ll know that I’ve done a lot of live coding performances over the last two years. I like the liveness of that way of working and so chose to incorporate it into my sound making. I took the samples that I recorded into TidalCycles and got coding! Here are some of the recordings along with variations on the code that created them.

setcps(50/60/4) -- roughly 50 bpm, assuming 4 beats per cycle

-- bowl and blinds samples, looped at varying rates and stuttered
d1
$ sometimes (fast 2)
$ whenmod 8 6 (# speed 0.5)
$ slow "4 2? 1"
$ sometimes (# accelerate "-0.05 0 0.02")
$ loopAt "1 0.25?"
$ stutWith 4 (1/8) (# speed 1.25)
$ sound "bowl*<1.5 2 1> blinds*<1 2>"
# n (irand 3)

-- droplet sample, granulated with striate and stuttered
d2
$ sometimes (fast 1.35)
$ striate "2 4 8"
$ stutWith "8 2 1" (1/16) (# speed (irand 3 - 1))
$ sound "droplet*4"

-- stacking plates and whack samples; soak is a custom helper (not core Tidal)
-- that layers repeated applications of a function
d3
$ every 7 (# speed "0.5")
$ slow 4
$ sometimes (striate "8")
$ stutWith 8 (1/8) (soak 4 (|+ speed 0.15))
$ juxBy (slow 3 $ sine) ((# speed 2) . (# accelerate "-1"))
$ sound "stackingplates*2 [whack(3,9)]"
# n "1 2"
# pan (perlin)

-- whack sample hurried at varying rates
d4
$ hurry "2 1 4 8"
$ sound "whack*4"

Although they’re not the same as the drone soundscapes that Rodell Warner creates, I thought they provided a lot of texture and would work well as an accompaniment to a drone soundscape. For that I loaded up Ardour and the Helm synthesiser.

The process of making and putting together all of these separate parts was in no way linear. The tutorials I followed all recommended writing a script or having a plan and I certainly didn’t have either. For this exploratory stage of my journey into film making I think that was mostly ok but for anything in the future I would at least consider what kind of atmosphere, emotions, or general message I wanted to convey.

The actual editing process was a big chore. Open source video editing software on Linux still leaves a lot to be desired. Despite there being a number of video editors available, nearly all of them have one failing in common: instability. With just a few HD resolution clips and no effects or transitions I was experiencing a lot of stuttering during seeking and playback, and crashes when rendering. This, of course, caused a lot of frustration and definitely resulted in me spending less time editing than I would have liked to. For recent videos I’ve used Olive, which has worked really well – seeking on the timeline is fast and there are few crashes – but at the time of editing version 0.2 was still too unstable to be usable.

After that last hurdle I feel I have produced a film that demonstrates a lot of what I’ve learnt.

The film, titled Windows Explorer, represents my desire to be out in the world again. Like pretty much everyone my world has shrunk and my engagement with the world comes from looking out of and into various windows, whether that be out of my office window or into a Zoom, Skype, Teams, Jitsi or whatever window.

With Thanks

This residency was certainly a big learning experience. In a conversation with the curators at the gallery I expressed concern that I wasn’t making enough, or that everything that I was making was, well, crap in comparison to the digital art portfolio that I’ve built up over the last decade. They reassured me that I was trying something new and so couldn’t be expected to be immediately great at it. Even if I had been in a situation with access to a team and equipment, a month isn’t really a long time to fully learn a new skill and make a complete piece of work using it. This really helped to put into context that this residency was time for me to reflect on my practice and to learn at my own pace.

From this residency I feel a lot more prepared to make narrative film, even if it’s a 1-minute film. I’ve already upgraded my equipment in preparation for future projects and have more knowledge of the multi-level process that goes into making a film.

Many thanks to The New Art Gallery Walsall for this opportunity 🙂

The Stay at Home Residency – part 2

From 1st – 29th July I was happy to be selected as an artist in residence for The New Art Gallery Walsall’s Stay at Home Residencies.

In the first blog post I looked at my influences and research carried out before I started making work. In this second blog post I’ll be showing some of the filming I did.

With the research conducted and the panic now over, I started filming again. I began by filming various things in my home. I tried to focus on shots that would have some movement in them, even if it was only background movement. Because of this most of my shots look out of a window. Although the background is blurred, whatever movement there is – be it the trees, people, or lights turning on/off – makes the still shot that little bit more interesting.

Next, I decided to bring out my projector and see what I could do with it. By now my projector is at least seven years old (I originally purchased it for a BYOB event in 2013), so not only is the projection quality quite poor, there are glitchy lines running through the projection.

I had thought about making animations to project onto various objects, but I didn’t want to turn this into an animation project. I’ve long used my Glass video when experimenting with projections and I liked how it made any surface it landed on just way more interesting. To replicate this saturation of glitchy colour and movement I installed a copy of waaave_pool onto a Raspberry Pi, connected a webcam to it and pointed the webcam at random surfaces in the room.

waaave_pool itself is a bit like a video synthesiser, working primarily with webcam video input. With that installed I made some things like this:

I liked these projection experiments most when they were really subtle. I didn’t want the projection to overpower the surface and render it invisible or irrelevant. For example, in one experiment I projected onto cushions; the projections looked really great, but the cushions got lost behind them.

I also played with a strip of LED lights I had from a previous project. They can be programmed to flash quickly, but they seemed to work best when pulsating slowly, which very much matched the pace of the shots I had filmed so far.

In the next blog post I’ll be detailing how I made sounds for the film and sharing the finished film.

The Stay at Home Residency – part 1

From 1st – 29th July I was happy to be selected as an artist in residence for The New Art Gallery Walsall’s Stay at Home Residencies.

The New Art Gallery Walsall has adapted its Studio residency programme in the wake of the Coronavirus pandemic to support three artists based in the West Midlands to produce work from their homes between May and July this year.

Following an open-call to artists based in the West Midlands, the Gallery received 60 varied proposals from a diverse range of artists working across the region. The many challenges that artists are facing during lockdown were well articulated. In selecting, we were keen to see opportunities for artistic and professional development during these challenging times, to support creative approaches to practice amid imposed restrictions and to explore the benefits and possibilities of sharing with an online audience.

It’s been some months since the residency ended and I really learned a lot. In this three-part blog post series I’ll be talking a bit about the month of learning and creating, the struggles I had, what I did to overcome them, and some of my thoughts on the final outcome. In this first blog post I’ll be going over my research and influences.

My reason for doing the residency was to explore ways of making work without a computer. Quoting from my application:

Creating my digital video works is a very stationary process, requiring me to spend long hours sat in my home office at my desk on the computer. I have long had a desire to step away from the desk and learn film and sound production techniques. I already own much of the required equipment including a DSLR camera, microphone and tripod. I have mainly used these to document events or exhibitions.

This residency would grant me the opportunity to step into learning film production techniques. I will study available materials (digital books and tutorial videos) and implement what I learn when creating the films.

Looking back over the last 10 years of my practice I have noticed that most of my work has been computer generated videos and animation.

Loud Tate: Code

Most of these works are generative and, much like animated gifs, they don’t have an extensive narrative and are best viewed on repeat. This isn’t a downside to the works, but making something with a narrative using filmed footage was definitely of interest to me for this residency.

I began the residency exploring the technical processes involved in film making. I have used cameras for a long time but often I don’t explore their full capabilities. I usually just leave the settings on Auto and most of the time it works out fine! The same goes for lenses. The camera I owned at the time of the residency was an Olympus Pen F, together with 45mm and 17mm lenses. I only ever really understood that the former is good for portraits and the latter for landscapes/outdoor shots, but still didn’t understand why.

I wanted to understand this and more so spent a lot of time watching videos and reading tutorials. Two really interesting videos were The Changing Shape of Cinema: The History of Aspect Ratio and The Properties of Camera Lenses from Filmmaker IQ.

These two videos, and the many others I watched late one evening, went into far more detail than I needed about film, the history of cinema, and equipment. I also didn’t own 99% of the equipment and resources the videos mention, but it was really interesting to know how all those things go into making a film and achieving a certain cinematic look.

The next set of videos that was really insightful was the Crash Course Film Production series. The Filmmaker IQ videos focused on specific details about film making, whereas these videos were perhaps more relevant to me as they were produced from the viewpoint of someone with no knowledge wanting to know what goes into making a film. The third video, The Filmmaker’s Army, is particularly enlightening as it explains a lot of the roles in a film production and how they work together to make a finished film.

One of the main things I took from watching this series of videos is that a lot of planning goes into a film. Depending on the scale of the project, the time between writing a script and filming can be years! And on a film set, many of the roles exist to ensure each person is doing the correct things at the right time.

Although all of this was really exciting and inspiring to learn at the beginning of the residency, there was one big problem: almost all of it would not be applicable to me at this time. Quoting my application:

Using tools and materials I have in my home – which include programmable lights, a projector, screens, and other electronics – I want to create a series of short abstract films that explore the use of digital art, light, and projection to illuminate my home and immediate surroundings. The everyday objects in the home, the grass outside, the brickwork and more will act as both creative material and canvas for abstract projections.

I was strict in my desire to create a film only within the home. This meant that I couldn’t acquire stage lights, microphones or other equipment. I had to use whatever I had in whatever filming conditions I was given. Still, these restrictions could hopefully provide inspiration.

Early on I struggled to make anything interesting. I filmed whatever I could find in my home but it was all very static and at times boring. It was then that I realised that the domestic environment, especially during lockdown, is a pretty boring place! In my household there are only two people and the environment doesn’t change that much. It’s not like the outdoors, where the environment changes, or like a gallery space, which can be reconfigured and has access to lots of equipment. In short, everything is just static. I was very worried that whatever I made would be very boring to watch.

I started to look to other films and artists for inspiration. I was browsing Mubi one day and saw a movie called Villa Empain by Katharina Kastner. I had no idea what it was about at the time but it was short and gave me a distraction from the panicking!

It turned out to be exactly the kind of film I needed to see. To me it was a series of animated portraits of the Villa Empain building. A lot of the shots in the film were static, featuring minimal movement from the pool water, trees, or sun shining through the stained glass windows. It was quite a meditative film. It helped to show me that a film didn’t need to be action-packed to be interesting.

I also remembered the work of Rodell Warner (having first seen his work in 2019 at Bcc:). In his Augmented Archive series he takes an archive picture, adds a drone soundtrack to it and animates it using a flickering effect (plus his own 3D sculptures). Of course there is a much deeper concept than my very technical description (and you should see more of his work to understand), but seeing his work showed me that there are ways to add depth and movement to static imagery.

In the next blog post I’ll be detailing the process of filming shots.

Addictions and Habits

Bcc:, Decoy Magazine’s monthly e-mail subscription programme, ended in 2019. I had made an exclusive artwork for it back in 2018 that was only available to subscribers, and which was then shown in September 2019 at the IRL exhibition at Vivid Projects. If y’all didn’t catch that show, here’s my work below:

When you identify something toxic in your life you recoil from it, only to be drawn back in again and again. Addictions and Habits is inspired by how technologies built on the idea of enriching our lives have only amplified our anxieties and made us more physically and emotionally vulnerable.

Here’s the really nice essay from Lauren Marsden which accompanied the release of the artwork:

This month, we are very honoured to be featuring UK-based artist and curator Antonio Roberts. With an extensive body of work that entangles glitch, appropriation, sculpture, screens, digitalia, and interaction, he is well suited for the task of questioning and confronting the limitations of copyright law and the corporate appropriation of cultural aesthetics and technologies. Here, with Addictions and Habits, we can imagine either side of the issue. For one, the hand of the creator that opens itself freely to the gesture of sharing, remixing, re-circulating (ad infinitum), and then, perhaps, the other hand – the one that closes the deal, signs the cheque, gives a comforting pat on the back, or plucks an idea out of the ether to secure its containment and regulation. Within this paradox, we enjoy the exuberance of Antonio’s work and see a space for liberation among his many fragments and shatters.

Thanks to Lauren Marsden for including me in Bcc: 🙂

Installing Bcc: at Vivid Projects part 1

I took a bit of a break from writing the Development Updates. September was pretty busy with Bcc: (more on that below) and then I was completing a commission for Will’s Kitchen/The Shakespeare Birthplace Trust and preparing for my solo exhibition, We Are Your Friends.

With all of that now completed I’m writing a few posts about one project in particular: Bcc:

The Bcc: exhibition opened at Vivid Projects on Friday 6th September. It was a collaboration between Vancouver-based Decoy Magazine and Birmingham-based Vivid Projects. The exhibition featured a curated selection of works from Decoy Magazine’s online art subscription service called Bcc:. The basic premise is that each month you’d get specially commissioned art in your e-mail inbox.


After being part of Bcc: in 2018 I suggested to Lauren Marsden, the Curator and Editor of Decoy Magazine, that it could become an IRL exhibition at Vivid Projects. At the time I was still employed there, so I put most things in place to get the exhibition going. Then I left in 2019. Because of my prior involvement in Bcc: and the massive technical challenge involved in installing the work (more on that later) I was asked to produce the exhibition.

Depending on how you look at it, the technical aspect of installing the exhibition could be very simple. Most of the works in Bcc: were short movies and animations/gifs, and Vivid Projects has long used the Adafruit Raspberry Pi Video Looper to handle playing videos.
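For anyone wanting to try it, setting the looper up is pleasingly simple. From memory it goes roughly like this (check Adafruit’s own guide for the current procedure, as details may have changed):

git clone https://github.com/adafruit/pi_video_looper.git
cd pi_video_looper
sudo ./install.sh

Once installed it reads its settings from /boot/video_looper.ini and, by default, loops whatever videos it finds on an inserted USB drive.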

Some works, however, required more attention. There were some works that were interactive websites, some that were animated gifs and some that required additional hardware. Prior to the exhibition this probably didn’t present any problems, as each work was most likely viewed by one person on their personal phone or computer. The challenge comes when it’s on a shared computer in a public environment. Additionally, operating the works needed to be as hands-off as possible. That is, I didn’t want it to be the case that I or another technician had to be on hand every day to go through complicated procedures to turn on all of the work. They needed to be automatic. With 17 works each needing their own computer/Raspberry Pi there was a lot to prepare. Over the next few posts I’ll take you through some of the works and their technical challenges:

Playing gifs on a Raspberry Pi

Of the 17 works on show in the exhibition 10 were animated gifs. To stay true to the small nature of animated gifs (don’t get me started on the concept of HD gifs) we decided to display the gifs on the Official Raspberry Pi 7″ Touchscreen Display. This proved to be a really good decision overall. It required that visitors get really close to the works and spend time with a format that can sometimes be a bit throwaway.


As mentioned before, Vivid Projects has long used the Adafruit Raspberry Pi Video Looper software to play videos. It works (mostly) great, with the exception that it doesn’t play animated gifs. The main underlying software, omxplayer, only supports video files. Even the supplied alternative player, hello_video, only plays video files.

Your immediate thought might be to just convert the animated gifs to video files. Whilst this “works”, there is always the danger that in converting a file you reduce its quality. For an artist like Nicolas Sassoon, who makes pixel-perfect animations that match a specific screen size, this would be unacceptable. So I went on a journey to find a way to play gifs.

The requirements for the software were that it should operate in a similar way to the Adafruit software and play a gif on loop with little or no pause between loops. It should play in the framebuffer (i.e. without needing to load the desktop) and it should make use of the GPU (which helps prevent screen tearing). And as a bonus it should be able to play a series of gifs one after the other. Simple, right?

TL;DR: There isn’t a reliable way, I had to convert to a video.

Some of the solutions I saw suggested using ImageMagick to play the gifs. This wouldn’t work, as I would need to launch the desktop. I’d then need to script it to go full screen, centre the gif, change the background to black, etc.

FBI and FIM don’t support animated gifs, although they are useful if you ever want to play a slideshow of static images.

feh is another lightweight image viewer. However, it also doesn’t support animated gifs and, according to this response from the author, that’s by design.

This suggested solution of converting the gif to individual images kinda works, but it doesn’t take into account that each frame of an animation can have a different duration (see this GIMP tutorial for an example of how that’s used). With that in mind, for this to work I would need to get the duration of each frame in each of the 10 gifs, separate the gifs into their individual frames, and then tell feh to display each frame for its specified duration. So, this method could work but it would require a lot of work!
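To give an idea of what that would have involved, here’s a rough sketch of the first two steps using ImageMagick (paths are placeholders):

mkdir -p frames
# flatten each frame of the gif into a standalone image
convert input.gif -coalesce frames/frame_%03d.png
# print each frame's delay (%T is in centiseconds)
identify -format "frame %p: %T\n" input.gif

From there you’d still need a wrapper script to feed those per-frame delays to feh, which is where I drew the line.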

This thread on the Raspberry Pi forum did provide a possible solution which I didn’t try, but it also pointed me to FBpyGIF, which was certainly the most promising of the solutions. However, a couple of problems prevented me from using it. Still very promising though!

Finally, I tried one of the various GIF photo-frame projects that play a folder of animated gifs on loop. It sounded like it would work, but there was screen tearing on some fast-moving gifs. I’m guessing this is because it doesn’t have hardware acceleration and/or because it uses Chromium to play the gifs.

Soooooo, after all of this I felt a bit defeated and decided to just convert the animated gifs to videos. I used Handbrake and noticed no loss of quality in the conversion. Even if there was, on a 7-inch screen it’d be quite hard to see. Using the Adafruit player/omxplayer I was initially having some issues with aspect ratio. Even with --aspect-mode set to fill, stretch or letterbox, the videos were being stretched to fill the screen. To illustrate, take the following video, which is 1024×768 (4:3).


(fyi it was made using Natron and this script to add in a timecode)

When played on the screen it was stretched to fill it.

The Raspberry Pi touch screen has a resolution of 800×480, which is a 5:3 aspect ratio. Most of the videos and animated gifs were HD/16:9 so would be letterboxed by default.

So I had the bright idea of padding each video so that it was exactly 800×480.
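One way to do this (not necessarily how I did it at the time) is with ffmpeg’s scale and pad filters; filenames here are placeholders:

ffmpeg -i input.mp4 \
  -vf "scale=800:480:force_original_aspect_ratio=decrease,pad=800:480:(ow-iw)/2:(oh-ih)/2" \
  -c:v libx264 -pix_fmt yuv420p output.mp4

The scale filter shrinks the video to fit within 800×480 while keeping its aspect ratio, and pad then fills the remainder of the frame with black.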

Now, the Adafruit player/omxplayer says it can play any video which is H.264 encoded, but I’ve had some troubles in the past, so whenever I’m given a video I usually convert it using Handbrake with the Fast 1080p30 preset. These settings have always worked for me, but for some reason on this occasion the video was stuttering a lot! What was strange was that the original videos (the animated gifs converted to videos without resizing) played fine, even after they were run through Handbrake. Why did they stutter when they were converted to 800×480?

It was two days before the exhibition opening that I remembered that some time in 2016 I had an issue with omxplayer where it didn’t play videos if the video didn’t have an audio track. Why? I don’t know. Maybe audio was the problem in this scenario too? It was worth a try, so I decided to disable the audio track using the -n -1 option. This doesn’t just turn the audio down, it disables decoding of it entirely. And guess what. IT WORKED!

I have absolutely no idea why this worked or why the error occurred in the first place. Here are the extra arguments that I included on line 107 of video_looper.ini.

extra_args = --no-osd --audio_fifo 0.01 --video_fifo 0.01 -n -1 --aspect-mode stretch
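For anyone copying that line, here’s my understanding of what each flag does (my notes, not official documentation):

# --no-osd               hide omxplayer's on-screen display
# --audio_fifo 0.01      shrink the audio buffer, reducing the pause between loops
# --video_fifo 0.01      the same for the video buffer
# -n -1                  select audio stream -1, i.e. none, so no audio is decoded
# --aspect-mode stretch  fill the 800×480 screen (safe here, as the videos are pre-padded)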

All of that just to play animated gifs! Now that I had the configuration perfected, copying it to all of the other Raspberry Pis was simple. If the aforementioned software had supported animated gif playback by default this would’ve been solved much quicker, but for now it seems the most reliable way to play animated gifs on a loop on a Raspberry Pi is to convert them to video.

Hybrid Landscapes

https://www.digitalcatapultcentre.org.uk/hybrid-landscapes-artists/

Hybrid Landscapes is an exhibition of recent work by eleven pioneering artists whose projects use, respond to and subvert digital technologies in surprising and unexpected ways. As lived experience plays out simultaneously across natural, built and networked worlds, new perceptions and perspectives are created.

The eleven artists work in a range of artistic mediums – from photography and sculpture to software and code – and each has their own area of research. They are unified by an approach that offers new ways to imagine, inhabit and locate citizens within emerging hybrid terrains. Together their works consider some of the key social and cultural questions we might ask ourselves about emerging digital cultures, products and applications, offering complementary and alternative views.

Blood Sport – Live at Cafe Oto video

On 5th May Blood Sport released their latest LP, Live at Cafe Oto, which, as the name suggests, is a live recording of a 40-minute set they did as part of their residency at Cafe Oto.

To coincide with its release Blood Sport asked me to create a one-take video. The video below shows track two from the LP, Melts Into.

The full 40-minute video will be made available at a later date. In the meantime you should buy their LP. They will be performing alongside Heavy Lifting at Supersonic Festival on June 16th.


TRANSFORMERS: A Code and Data-Driven Animation Screening, 6th February

On 6th February I’ll be part of the TRANSFORMERS screening happening at the College Art Association Conference in Washington DC.


Computer programming is an often invisible force that affects many aspects of our contemporary lives. From how we gather our news, maintain our libraries, or navigate our built environment, code shapes the interfaces and information they connect to. Artists who work with these languages as material can critically excavate code and its effects. The works included in this screening include animation and video that are produced through the use and manipulation of code and/or data.

The selected works will be screened during CAA on Saturday, February 6th from 9:00am – 10:30am in the Media Lounge, and will simultaneously be available online through the New Media Caucus Vimeo Channel.

The screening is organised by Darren Douglas Floyd, Artist/Filmmaker; Mat Rappaport, Artist, Columbia College Chicago; and A. Bill Miller, Artist, University of Wisconsin-Whitewater. My contribution is a shorter live performance of the Sonification Studies performance I did at glitChicago in 2014. I’ll update this post with the new video once the event is over. Video below:

Internalised

Back in April Film Division took part in the 2011 Sci-Fi-London 48-hour film challenge:

We’ll give you a randomly generated film title, some dialogue and a prop list. You’ll then have 48 hours to write, shoot and edit a complete five minute film… hard work but fun!

I provided some glitches – using the What Glitch? scripts – and other graphics for this film, which you can watch below. My glitches are at about 3:16.

Unfortunately the film didn’t win but it did go on to be shown at Cannes in a Van. Go team Film Division!

Despite the film having been shown elsewhere, I’ve only just watched it and I must say I’m rather impressed with what a highly dedicated – and possibly crazy (we stayed up ’til 5am editing) – team can come up with in 48 hours. I’m told that a director’s edit, which will include many of the effects that didn’t make it in time, will be out in the near future. Anyway, enough reading, go watch the film!