Design Yourself

Throughout 2019 and the early part of 2020 I led a programme for Barbican’s Young Creatives called Design Yourself.

What does it mean to be human?
Can technology be used to replicate the pheromone communication of ant colonies?
Can we use technology to mimic the camouflage abilities of chameleons?
Can movement be used as a language, similar to the waggle dance of honey bees?

Inspired by Life Rewired, a collection of young creatives from our Barbican Young Creatives and BA Performance and Creative Enterprise will respond to these questions to explore what it means to be human when technology is changing everything.

Mentored by visual artist Antonio Roberts and in collaboration with four guest artists, the group will create new digital work that explores how scientific and technological advances could allow artists to become ‘more human’ by heightening our natural and creative instincts. As a group they will explore technological impact on sound, movement, language and aesthetics and share their findings through new imaginative works.

The eight participants from Barbican’s Young Creatives were Tice Cin, Zack Haplin, Cosima Cobley Carr, Pietro Bardini, Nayla Chouaib, Evangelos Trichias, Hector Dyer, and Cleo Thomas.

I had the pleasure of inviting some of my favourite artists/art groups to deliver workshops to the participants, exploring lots of issues surrounding our relationship with technology and the future of humanity. Invited artists were: Laurie Ramsell, Matthew DF Evans, Yoke Collective, New Movement Collective, Erica Scourti.

Over the next few days I’ll be sharing the videos we made over the year and some photos from each session.

Congrats to all of the participants on creating such great work, thanks to the invited artists for delivering engaging workshops, and thanks to Chris Webb for inviting me to Barbican again to work with their Young Creatives 🙂

Motion Interpolation for Glitch Aesthetics using FFmpeg part 0

As you may have seen in this blog post I made use of FFmpeg's minterpolate motion interpolation options to make all of the faces morph. There are quite a few options for minterpolate and many different combinations of options that can be used. I had to consult Wikipedia to figure out exactly what the different motion estimation algorithms were, but even with that information I couldn't visualise how it would change the output. To add to this, how I'm using minterpolate isn't a typical use case.

To make things easier for those wishing to use FFmpeg’s minterpolate to create glitch aesthetics I have compiled 36 videos each showing a different combination of processing options. The source video can be seen below and features two of my favourite things: cats (obtained from here) and rainbows.

I’ve slowed it down so that you can see exactly what’s in the video, but the original can be downloaded here.

The base script used for each video is below, with placeholders for the three options that were varied:

ffmpeg -i cat_rainbow_original.mp4 -filter:v "setpts=62.5*PTS,minterpolate='fps=25:mb_size=16:search_param=400:vsbmc=0:scd=none:mc_mode=<mc_mode>:me_mode=<me_mode>:me=<me>'" -y <output>.mp4

In part two of March's Development Update I explained why I set scd to none and search_param to 400. I could, of course, have documented what happens when all of the processing options are changed, but that would result in me having to make hundreds of videos! The options that were changed were the mc_mode (motion compensation mode), me_mode (motion estimation mode), and me (motion estimation algorithm).
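As a rough sketch, a loop like the one below could generate all 36 combinations. The option values are the ones listed in FFmpeg's minterpolate documentation; the numbering and filename pattern are only illustrative and may not match the ordering of the videos I'm sharing.

#!/bin/bash
# Sketch: render every combination of mc_mode, me_mode and me (2 x 2 x 9 = 36 videos).
i=0
for mc_mode in obmc aobmc ; do
  for me_mode in bidir bilat ; do
    for me in esa tss tdls ntss fss ds hexbs epzs umh ; do
      i=$((i+1))
      ffmpeg -i cat_rainbow_original.mp4 -filter:v \
        "setpts=62.5*PTS,minterpolate='fps=25:mb_size=16:search_param=400:vsbmc=0:scd=none:mc_mode=${mc_mode}:me_mode=${me_mode}:me=${me}'" \
        -y "$(printf '%03d' $i)_mc_mode=${mc_mode}_me_mode=${me_mode}_me=${me}.mp4"
    done
  done
done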

Test conditions

These videos were created using FFmpeg 7:4.1.4-1build2, installed from the Ubuntu repositories, on a Dell XPS 15 (2017 edition) with 16GB memory, an i7 processor and an Nvidia GeForce GTX 1050 graphics card, all running on Ubuntu 19.10 using proprietary drivers.

I don't have a Windows or Mac machine, and haven't used other Linux distributions, so I can't test these scripts in those conditions. If there are any problems getting FFmpeg running on your machine it's best that you contact the developers for assistance.

Observations

My first observation is that the esa me_mode takes frikkin ages to complete! Each video using this me_mode took about four hours to process. I did consider killing the script but for completeness I let it run.

Using bilat me_mode produces the most chaotic results by far. Just compare 026_mc_mode=obmc_me_mode=bilat_me=epzs.mp4 to 008_mc_mode=obmc_me_mode=bidir_me=epzs.mp4 and you’ll see what I mean.

For a video of this length nearly all of the scripts (except for those using esa) took between 30 seconds and 1 minute to complete, and that’s on machines with and without a GPU. This is good news if you don’t want to have to carry around a powerhouse laptop all the time.

All of this reminds me a bit of datamoshing. It’s more predictable and controllable, but the noise and melty movement it creates, especially some of the ones using bilat me_mode, remind me of the bloom effect in datamoshing. This could be down to the source material, and I’d be interested to see experiments involving datamoshed videos.

Let’s a go!

With that all said, let's jump into sharing the results. As there are 36 videos I'll be splitting them over nine blog posts over nine days, with the last being posted on 28th March 2020. Each will contain the script I used as well as the output video. Links to each part can be found below:

(mis)Using FFmpeg’s Motion Interpolation Options

Towards the end of the Let’s Never Meet video the robotic faces slowly morph into something a little bit more human-like.

These faces continue to morph between lots of different faces, suggesting that when getting to know people you can never really settle on who they are. To make this happen I used motion interpolation to morph between each face. Here's what Wikipedia has to say about motion interpolation:

Motion interpolation or motion-compensated frame interpolation (MCFI) is a form of video processing in which intermediate animation frames are generated between existing ones by means of interpolation, in an attempt to make animation more fluid, to compensate for display motion blur, and for fake slow motion effects.

For those that use proprietary software there are a few programs that can do this, including Twixtor and After Effects.

If, like me, you only use open source software there are a few options but they’re not integrated within a general post processing or video editing GUI.

slowmoVideo

slowmoVideo is an open source application which allows you to vary the speed of a video clip over time. I used this previously for the background images in the Visually Similar Artwork.

For Let's Never Meet I did consider using slowmoVideo again. What I like about it is being able to vary the speed, and that it has a GUI. However, development on it seems kinda slow and, most importantly, it requires a GPU. Occasionally I find myself working on a machine that only has integrated graphics (i.e. no GPU), which makes using slowmoVideo impractical. So, I needed something that would reliably work on a CPU and produce similar, if not the same, visual results as slowmoVideo.

Butterflow

Butterflow is another piece of motion interpolation software. It doesn't have a native GUI but it has a nice set of command line options. Sadly it seems impossible to install on Linux. Many have tried, many have failed.

FFmpeg

Finally I tried FFmpeg. Pretty much all my artworks use FFmpeg at some point, whether as the final stage in compiling a Blender render or as the backend to a video editor or video converter. I'm already very familiar with how FFmpeg works and feel it can be relied upon to work and be developed in the future.

I actually first came across FFmpeg’s motion interpolation options sometime in late 2018, but only really cemented my understanding of how to use it in making Let’s Never Meet.

Going through FFmpeg's minterpolate options was quite daunting at first. There are lots of options with descriptions of how they work, but I didn't really understand what results they would produce. Nonetheless I mixed and matched settings until I produced something close to my liking.

The first step in making the morphed video was making the original-speed video.

I've slowed the above video down so you can see each frame, but if you want the original video you can download it here. This consisted of 47 faces/images, played one image per frame. In total it lasted 1.88 seconds and I needed to slow it down to at least x minutes, which is the length of the video.

Here is the code that I used:

ffmpeg -i lnm_faces_original.mp4 -filter:v "setpts=40*PTS,minterpolate='fps=25:scd=none:me_mode=bidir:vsbmc=1:search_param=400'" -y output.mp4

I’ll explain three of the important parts of this code.

setpts

The FFmpeg wiki has a good explanation of what setpts does:

To double the speed of the video, you can use:

ffmpeg -i input.mkv -filter:v "setpts=0.5*PTS" output.mkv

The filter works by changing the presentation timestamp (PTS) of each video frame. For example, if there are two successive frames shown at timestamps 1 and 2, and you want to speed up the video, those timestamps need to become 0.5 and 1, respectively. Thus, we have to multiply them by 0.5.

So, by using setpts=40*PTS I'm essentially slowing the video down by a factor of 40. For this video I took a guess at how much I'd need to multiply the video of the faces by to make it match the length of the video. If I wanted to be exact I'd need to do some maths: take the frame count of the video (5268), divide it by the frame count of the face video (47), and use the result (112.085106383) as the PTS multiplier.
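If you wanted to calculate this automatically, here's a hedged sketch using ffprobe to count the frames in both videos; target_video.mp4 is just a placeholder for the video whose length needs matching.

# Sketch: work out an exact PTS multiplier from the two frame counts.
target_frames=$(ffprobe -v error -select_streams v:0 -count_frames \
  -show_entries stream=nb_read_frames -of csv=p=0 target_video.mp4)
face_frames=$(ffprobe -v error -select_streams v:0 -count_frames \
  -show_entries stream=nb_read_frames -of csv=p=0 lnm_faces_original.mp4)
multiplier=$(echo "scale=9; $target_frames / $face_frames" | bc)  # e.g. 5268 / 47 ≈ 112.085
echo "setpts=${multiplier}*PTS"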

scd

scd is probably the most important part of this code. It attempts to detect if there are any scene changes and then not perform any motion interpolation on those frames. In this scenario, however, I want to interpolate between every frame, regardless of whether they appear to be part of the same "scene". If you leave scd at the default of fdiff and scd_threshold at 5.0, ffmpeg tries to decide if there's enough difference between frames to count as a scene change. Here's what that would've looked like:


ffmpeg -i faces.mp4 -filter:v "setpts=40*PTS,minterpolate='fps=25:me_mode=bidir:vsbmc=1:search_param=400'" -y lnm_faces_scd.mp4
(without setting scd the defaults are assumed)

Not ideal, so I disabled it by setting it to none.

search_param

This one I don't quite understand internally, but I do understand how it affects the video. If I were to leave the setting at the default value of 32 then you can see that when it interpolates there isn't much movement:


ffmpeg -i faces.mp4 -filter:v "setpts=40*PTS,minterpolate='fps=25:scd=none:me_mode=bidir:vsbmc=1:search_param=32'" -y search_param_32.mp4

With the value of 400 which I used:


ffmpeg -i faces.mp4 -filter:v "setpts=40*PTS,minterpolate='fps=25:scd=none:me_mode=bidir:vsbmc=1:search_param=400'" -y search_param_400.mp4

And with the slightly ridiculous value of 2000:


ffmpeg -i faces.mp4 -filter:v "setpts=40*PTS,minterpolate='fps=25:scd=none:me_mode=bidir:vsbmc=1:search_param=2000'" -y search_param_2000.mp4

The biggest difference is clearly between setting the search_param from 32 to 400. At 2000 there’s only minor differences, though this may change depending on your source input.

It’s morphin’ time!

With all the settings of minterpolate now set I created the final video:


(I reduced the quality of the video a little bit to save on bandwidth)

I quite like the end results. It doesn't look the same as the output of slowmoVideo: the morphing happens in blocks rather than the dust-grain look that slowmoVideo produces. However, in using FFmpeg I can now use a familiar program that works on the CPU, even if it does take a long time!

Producing audio for Let’s Never Meet

For the majority of my career in art I’ve been primarily known for my visual artwork. I’ve dabbled in making noises with my Sonification Studies performances (which may make a comeback at some point) but it’s only since my 2018 performance at databit.me that I’ve regularly made and performed music.

On the performance side I’ve mostly used TidalCycles. You may have seen that I have been doing live streams of my rehearsals.

Outside of live coding I've spent most of my time getting to grips with software-based synthesisers and DAWs. When asking for advice on this most people told me to use software like Ableton. What these well-meaning people may not realise is that I exclusively use (Ubuntu) Linux and only open source software. This gives me all the freedom that open source grants, but boy does it sometimes cause headaches! Plenty of people use the open source options available to them, but this approach is still the road less travelled, and so I've found myself sometimes asking lots of questions and either not getting a response or getting the response that what I'm trying to achieve is not possible.

And so for the last year or so I've been creating workflows that work for me. For this I've been using Ardour, which is a pretty good cross-platform DAW. So far I've produced soundtracks to two of my artworks, We Are Your Friends and Let's Never Meet. In this Development Update I'll go over a little trick I learnt whilst making the soundtrack for Let's Never Meet.

In short, Let's Never Meet is about meeting people over the internet. The soundtrack is actually a remix of an Android alarm ringtone.

It’s not an alarm tone that I use myself but it was ambient enough to work in an outdoor setting for an extended period without getting annoying. Plus using a sample from my phone just somehow felt appropriate, if you know what I mean. After many many many hours of producing my remix sounded a bit like this:

I was really happy with the results but it felt like there was something missing. It was pretty samey throughout and I think there needed to be some kind of buildup or change in pace. To address this I decided to add some percussion. I turned to the glitch sample set that is downloaded when you install TidalCycles. It has a nice percussive quality and definitely sounds glitchy and electronic, again fitting with the digital theme of the piece.

As for playing these samples, I did consider manipulating them in TidalCycles and importing the whole recording into Ardour, but I also wanted to get better with Ardour so I sought a solution within that software. The glitch pack contains eight samples and I needed to be able to load them into Ardour to trigger/play at will. The drumkv1 plugin is the perfect solution to this.

It’s a sampler where you assign samples to midi notes. To play the notes you could use a midi keyboard, send the notes from Pure Data, or basically any software that can send midi. I decided to use the x42 step sequencer to input the midi notes. It’s a very simple step sequencer originally built for the MOD platform but, because it’s an lv2 plugin, it can run in any host that supports it.

Using this sequencer I could easily create an eight-step loop that starts simple and builds up with more drums over time.

With the samples assigned to midi notes I just needed a way to press the pads in the step sequencer. I have two physical controllers, a Launchpad X and a MPK Mini. The latter only has two rows of four drum pads. The former is an 8×8 grid but I can’t yet program it properly to work with the software I use (more on that another time). In any case, in looking into how to use the Launchpad X with x42 the plugin’s author, Robin Gareus, told me that it’d never be possible because x42 doesn’t accept midi input 🙁

I accepted that using a software or hardware midi controller was a no go. I would have to use a mouse, which wasn’t ideal but it would work. The plugin’s author did recommend that I look into BSequencer. It appears to accept midi input but with a deadline looming I didn’t want to spend more time on this by learning yet another software.

Using my mouse in Ardour I started to record the input of me playing the step sequencer, but I noticed the midi notes from x42 weren't being recorded.

I found this very strange. drumkv1 was blinking to show it was receiving midi but nothing was being recorded. After some research I discovered that this was because Ardour only records external midi. When I loaded x42 as a plugin within Ardour it was sending midi internally. To get around this there are two solutions:

The first was to use Carla as a plugin host to load x42 and then send the midi output to the correct track in Ardour.

Carla showing x42 being connected to Ardour

This worked but I was getting a lot of latency with the input and the notes didn't align properly. This is probably easy to solve by tuning my system to reduce latency (I already use the realtime kernel), or maybe it was something I was doing wrong, but again, with a looming deadline I didn't want to do anything drastic and time consuming.

The second option was to send the output of x42 out into another application and then have that external application send its midi input into Ardour. To do this I loaded a2jmidid, connected the track’s midi output into it, and then connected the output of a2jmidid into the track in Ardour.

Screenshot showing ardour connecting to a2jmidid and back again
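Here's a rough sketch of that routing from the command line, assuming a JACK session is already running. The port names are illustrative only; on a real system you'd check them with jack_lsp and most likely make the connections in a patchbay instead.

# Sketch: bridge ALSA midi into JACK and route the x42 track out and back in.
a2jmidid -e &                # expose ALSA midi ports as JACK midi ports
jack_lsp -t | grep -i midi   # list the available midi ports to find their names
# Then connect (via a patchbay or jack_connect):
#   Ardour track midi out  ->  a2j bridge (playback side)
#   a2j bridge (capture side)  ->  Ardour track midi in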

When I started up x42 again in Ardour and started clicking on its pads it all worked as expected!

After all of that effort I recorded myself building up the percussion. Here’s the finished track 🙂

I’ve been having a lot of fun making music, so expect more of it from me in the future.

Seamlessly loop Wave Modifier in Blender

Seamless animation

For the Improviz gifs, one of the requirements that Rumblesan set was that the gifs loop seamlessly. That is, one should not be able to tell where the gif begins and ends. In Blender making an animation seamless is pretty easy. There are lots of examples out there, but for completeness here's my simple take on it.

With the default cube selected press I and then click on Location. This inserts a keyframe for the location (this menu can also be accessed in Object > Animation > Insert Keyframe). On the Timeline at the bottom move forward 20 frames. Then, move the cube somewhere else.

Now press I to insert a keyframe for the location. Ta da! You now have an animation! To make it loop we need to repeat the first keyframe. On the Timeline go forward another 20 frames (so you’re now on frame 40). In the Timeline select the first keyframe. Press Shift + D to duplicate it and then move it to frame 40.

Set the end of your animation to be frame 40. Now when you press play (space bar) the animation loops seamlessly!

As an aside if you’re interested in animation check out Eadweard Muybridge. And if you’re into Pure Data check out this tutorial I made in 2017.

Seamlessly loop Wave Modifier

So, that's one easy way to make a seamless looping animation. However, what Rumblesan was more interested in are gifs that warp and morph. This is one example he sent me.

via GIPHY

In Blender one really useful modifier for making these animations is the Wave modifier. In fact, looking through all of the gifs in 2020 by that artist (Vince McKelvie) it looks like he makes extensive use of this modifier. I love how simple it is to get distorted objects without much effort.

The one thing I’ve always found difficult is making the looping of the waves seamless. I haven’t seen many tutorials on achieving this, and those that I have found rely a bit on guesswork, which isn’t ideal. So, I set out to understand this modifier. After a lot of trial and error and “maths” I finally consulted the documentation and started to figure it out! The documentation on this modifier is quite good but here’s my alternative explanation which may help those who think in a similar way to me.

To get your wave lasting a specific duration, first you need to know how long you want your animation to last. For this example I set mine to 50 frames.

You then need to decide on the Width of the waves. The smaller the number the more ripples you’ll have on your object. This value is relative to the object. So, if you set it to 0.10 you’ll have 10 ripples through your object. If you set it to 1 you’ll have one ripple. I’ve set mine to 0.25.

For the Speed you need to do a bit of maths. Copy the value of Width (0.25) and in the Speed argument enter: (0.25*2)/50. Replace 0.25 with whatever value you set for Width and 50 with however long your wave animation lasts before it loops. Another way to represent this would be:

Speed = ($width*2)/$animationlength

The animation loops however the waves don’t affect the whole object. This is because we need to add a negative offset so that the wave starts before the animation is visible. This is where we need more maths! Enter this into the Offset value:

((1/0.25)*50)*-1

The first part, 1/0.25, works out how many times we'd need to repeat the Width before the whole object has ripples throughout it. We multiply by 50 as that is the animation duration. Then, we multiply by -1 to get the inverse, which becomes the offset. Another way to represent this would be:

Offset = ((1/$width)*$animationlength)*-1
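As a quick sanity check, here are both formulas worked through with the values used above (a Width of 0.25 and a 50 frame loop); the snippet below is just a convenience for checking the numbers, not something Blender needs.

# Sketch: plug the example values into the Speed and Offset formulas.
width=0.25
frames=50
speed=$(echo "($width * 2) / $frames" | bc -l)        # 0.01
offset=$(echo "(1 / $width) * $frames * -1" | bc -l)  # -200
echo "Speed: $speed  Offset: $offset"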

And now the whole object has waves through it and loops seamlessly!

Ta da!

Since I originally made the gifs I have found that there are alternative methods for achieving a wavy object which rely on the displacement node and Displacement modifier or the Lattice or Cast modifier. These solutions have much more documentation but I’m glad I spent the time figuring out the Wave modifier.

Overlaying multiple textures in Blender

In 2019 I made an internet artwork for Fermynwood’s programme Toggler.

For this work I decided to use a similar aesthetic and process to Visually Similar. I talked a little bit about the process behind Visually Similar in June's Development Update. The node tree to overlay each of the transparent textures looked a bit like this.

Click to embiggen

When trying to do the same with the Toggler artwork I came across something weird that meant some textures just weren’t showing. So I decided to ask on Stack Exchange and Reddit why this might be the case.

Click to embiggen

It looks like I wasn't using the alpha channels properly and didn't need to use the Add math node, or at least needed to use it properly. If I were to apply the same process retrospectively to Visually Similar the artwork would look like this.

Curiously, several of the textures didn't show up. I suspect that doing it this "proper" way revealed that I had the order of the nodes incorrect. If I show the work again I might edit it so it looks "right", but in this case the mistakes yielded a preferable result.

</2019>

Back at it again with the Year in Review blog post! As suggested in 2018, this year I have been focusing more on my own independent projects and building my practice. Here are some of the highlights.

January

I started January with my first visit to ICLC, which was held in Madrid. Olivia Jack and I led a meetup and discussion for Visualists. I compiled some of the notes from that meetup here, and then I joined Class Compliant Audio Interfaces for a performance.

In January I also publicly announced my departure from Vivid Projects. I had been in this role in one fashion or another since 2010. I enjoyed so much of it and learned a tonne but decided I needed to focus on my own work.

I made some animations for Plasma Bears, which is a “collectible crafting and questing game”.

data.set, which was originally commissioned by the Open Data Institute for the Thinking Out Loud exhibition, was part of the Forward exhibition at Ikon Gallery/Medicine Bakery. I did an interview with Ikon Gallery about my thoughts on being an artist in Birmingham.

Forward: New Art from Birmingham

I also started mentoring for Random String again. You can read about the progress of one of my mentees here.

February

Despite being relatively new to live coding music I took part in the Toplap 15th Anniversary [Live] Stream.

It was scary performing to the whole internet but it was fun!

I started working with Barbican again on a series of workshops as part of their Life Rewired season. To help launch their new season I performed with Emma Winston/Deerful at their launch night.

Life Rewired Launch – Young Barbican Nights

Also in February, an Algorave documentary produced by Edited Arts was published on Resident Advisor.

March

The biggest event of March was most definitely going to SXSW in Texas to present Algorave!

Lush Presents Algorave: Live Coding Party

The Algorave featured ALGOBABEZ, Alexandra Cardenas, Belisha Beacon, co34pt, Coral Manton, hellocatfood (that’s meee), Scorpion Mouse and Renick Bell.

Many thanks to Joanne Armitage who took the lead on planning this and to British Underground and Lush for the support.

We did an interview with the SXSW magazine to promote our events there.

Quite soon after landing back in the UK I created new work for the V&A’s Friday Late event Copy / Paste.

Friday Late - Copy / Paste - March 2019

The two video works, called Visually Similar, look at how a false narrative can be created through images found on the internet. I wrote a bit about the process of making this work in June’s Development Update.

I also became an Artist Adviser for Jerwood. I was already familiar with the gallery as I had exhibited with them in 2016 as part of Common Property. It was an honour to be invited back to have a role in shaping how they fund the arts.

Also at the beginning of the month I curated the opening of Black Hole Club, which was my final event for Vivid Projects/Black Hole Club.

April

As if SXSW wasn’t exciting enough in April I performed at an Algorave at the British Library.

It was one of the more unconventional places I’ve played but still highly enjoyable.

Later that month I performed at the Afrotech Fest opening party and the artwork I made for Fermynwood’s programme Toggler went online. I was also interviewed by Lynae Cook whilst at SXSW for her podcast BTS. In April the interview went online.

May

In early May the Time Portals exhibition at Furtherfield opened its doors. I collaborated with Studio Hyte to create a billboard which could be scanned to reveal an augmented reality artwork.

Time Portals: Antonio Roberts

There’s an overview video featuring all of the artists including me and Arjun Harrison-Mann from Studio Hyte.

AlgoMech was back for 2019 and I was present to perform with CCAI, do a solo music performance, and also to exhibit in their exhibition Patterns of Movement.

Patterns of Movement

I exhibited a video and print work called A Perfect Circle in which I captured the movement of trying to draw shapes. Quite a departure from my usual work but I liked the performative nature of it. I wrote a bit about the technical challenges of making it here.

Elsewhere two articles about Algorave were published, one in Riffs and another in The Times.

June

One of my biggest exhibitions of the year was the group show Wonder curated by Rachel Marsden

Wonder

Wonder

For my work in this exhibition I continued with my critique of Disney as a company that has negatively impacted copyright laws. I also created a slightly sinister wonderland (video will be online some time in 2020). You can take a virtual tour of the exhibition through this youtube video or page on the Google Arts and Culture website.

I was back in Manchester to do a presentation and some mentoring for Manchester International Festival’s Creative Lab programme


(that video features some of my very early live coding music!)

I also returned to regular blogging with a series called Development Updates. Through this series I want to demystify the “magic” of creating digital art and show that there’s still a lot of problem solving, hacking, and messiness that go into creating a “finished” artwork or exhibition. Follow the development-update tag to see all of them.

Elsewhere I revamped the Proxy Pavilions artworks for the Vague but Exciting exhibition at Vivid Projects and was on Matthew Evans’ podcast sharing some of the songs that influence me and talking about being an artist in Birmingham. I also played a huuuuge Algorave at Corsica Studios. To prepare for this I started live streaming my rehearsals.

July

I headed out to the city of Nevers, which is not far from Paris, to take part in NØ SCHOOL NEVERS as one of their teachers. It was definitely a school, but kinda like one without textbooks or lesson plans. We all learnt from each other and explored some really experimental stuff, like Daniel Temkin's esoteric programming languages which use folders as their input!

NØ SCHOOL NEVERS

After a busy first half of the year it was really nice to spend a week with like-minded people learning about art and tech, eating great food and occasionally relaxing on a beach 🙂 I'm feeling a strong eight to light nine on this experience.

In July it was also announced that I had joined a-n Artists Council.

Click to embiggen. Photo by Joel Chester Fildes

You may remember that I had run one of their Assembly events in June 2018. I'm really happy to be part of this group and hope to bring my perspective as a digital artist based in the West Midlands.

August

As part of the Wonder exhibition I organised an Algorave at The Herbert, which also happened to be Coventry’s first Algorave! I invited Lucy aka Heavy Lifting, Innocent, Carol Breen, and newcomer Maria Witek who I collaborated with on music.

Algorave Coventry

It was for sure one of my favourite Algoraves! The staff at The Herbert were lovely and prepared the venue and equipment perfectly, the performers were ace and the crowd brought great energy.

Fellow Visualist Rumblesan released his live coding software Improviz in July. Think of it as a bit like LiveCodeLab, but on the desktop and with support for your own image and gif textures. For the occasion he commissioned me to make some gifs that would come preloaded with the software.

In August I made the Blender files available to the public for y’all to experiment with. I definitely think you should try Improviz out!

September

September started with me being featured in a BBC Radio 4 documentary about copyright and the relationship between artists and brands/corporations.

Art of Now – Sell Out featured myself and artists including Nan Goldin and Gary Hume, each giving our thoughts on brands and art. It's still online so go listen.

Also at the beginning of the month the Bcc: exhibition opened at Vivid Projects. I’d previously taken part in the online version of this in 2018 and for the IRL exhibition I acted as Producer. It was a technically challenging exhibition to install which I wrote about in three Development Updates in December.

Bcc:

Bcc:

It made me really happy to see so much digital art being exhibited in Birmingham and was great to meet the Editor of Decoy Magazine, Lauren Marsden, IRL.

Elsewhere I was a judge for the Digital Art category for Koestler Arts’ exhibition Another Me which took place at Southbank. I sadly didn’t get to see the exhibition in person but it was inspiring to see the work coming from people in prisons.

I organised an Algorave for Llawn in Llandudno and then I exhibited a rather odd artwork for the Odds exhibition at TOMA in Southend-on-Sea.

Odds

I exhibited a video showing me attempting to compile Blender, as a way to show that sometimes making digital art involves a lot of waiting!

Aaaaaand Ian Davies photographed myself and Emily Jones as part of his Brum Creatives project.

October

October was quiet-ish. I did two Algoraves in two countries in 18 hours! The first was at OHM in Berlin and was organised by Renick Bell. I then made my way to Walthamstow to do an “Algowave“, which was basically a more ambient rave. Radical Art Review did a feature on the event.

I made a rerecording of the performance and put it on Soundcloud:

Following on from the Assembly events in 2018, the organisers (myself, Thomas Goddard, who organised the Cardiff event, and Joanna Helfer, who organised the Dundee event) embarked on a week-long journey to each of our respective cities to check out the art scene and reflect on how arts organisations were responding to the challenges they faced. It was a tiring but very inspiring week.

a-n bursary - Birmingham, Cardiff, Glasgow

Also in October I was commissioned to make some work for the Shakespeare Birthplace Trust. It’s on view in Stratford until October 2020 so go see it!

Will's Kitchen Artistic Commissions - Abundant Antiques

Finally, I was on the selection panel for the Collaborate exhibition at Jerwood Arts and in October the exhibition opened.

November

By far the biggest event of the November was my solo exhibition, We Are Your Friends, which took place at Czurles Nelson Gallery in Buffalo, NY.

We Are Your Friends

We Are Your Friends

We Are Your Friends

It's my second solo exhibition and the first time I made a multichannel video. I had a really great time, which included a trip to Niagara Falls. Many thanks to Brent Patterson for working so hard to make it happen.

Not even two days after landing back in the UK I was in Berlin to take part in Right the Right at Haus der Kulturen der Welt. The festival explored “Ideas for Music, Copyright and Access”. My video Unauthorised Copy was on show throughout the exhibition, I performed at an Algorave and I was in conversation with Beijing-based musician Howie Lee. You can listen to our conversation below and watch the video recording here.

Also in November I Am Sitting in a Room was exhibited at Gamerz festival in Aix-en-Provence in France. That piece is nearly ten years old!

December

As usual December was very quiet, and it was much needed after being away from home for nearly a month in November. I didn’t exhibit anything but I did use this month to prepare for stuff happening in 2020. It’s been my first year being completely freelance and I think it’s gone really well! My plans for next year are to do much of the same but also look into working with/for a gallery, and maybe even a slight career change. More on that as it happens. 2019 was ace. Thanks to everyone who helped make it great!

Installing Bcc: at Vivid Projects part 3

In this final part of this three-part series I’ll be going over installing Xuan Ye‘s work in the Bcc exhibition. This work posed a similar challenge to Scott Benesiinaabandan’s work. I needed to automatically load a web page except this time I needed to allow for user interaction via the mouse and keyboard.

The artwork isn’t online so I’ll again go over the basic premise. A web page is loaded that features a tiled graphic with faded captcha text on top of it. The user is asked to input the text and upon doing so is presented with a new tiled background image and new captcha. This process is repeated until the user decides to stop.

Bcc:

I could have installed this artwork on a Raspberry Pi but thankfully I had access to a spare Lenovo ThinkPad T420 laptop, which negated the need for me to buy a keyboard and screen (#win). The laptop is a refurbished model from 2011 and was running Windows 7 when I got it. It is possibly powerful enough to handle a full installation of Ubuntu, but I didn't want to risk it running slowly, so instead I installed Lubuntu, which is basically a lightweight version of Ubuntu.

As I had installed Scott's work I already knew how to automate the loading of a webpage and how to reopen it should it be closed. The main problem was how to restrict interaction and keep the user from deviating from the artwork. Figuring this out became a cat and mouse game and was never 100% solved.

Whilst in kiosk mode in Chromium, pretty much all of the keyboard shortcuts can still be used. This means that a moderately tech-savvy user could press Ctrl + T to open a new tab, Ctrl + O to open a file, Ctrl + W to close the browser tab, Alt + F4/Ctrl + Q to quit the browser, or basically any other shortcut to deviate from the artwork. Not ideal!

Bcc:

My first thought was to try and disable these shortcuts within Chromium. As far as I could tell at the time there wasn't any option to change keyboard shortcuts. There must be usability or security reasons for this, but in this situation it sucks. After a bit of searching I found the Shortkeys extension, which allows for remapping of commands from a nice gui 🙂 Only one problem. I tried to remap/disable the Ctrl + T command and got this error.


More information here.

Drats! I tried its suggestion and it still didn't work. Double drats! Eventually I realised that even if I did disable some Chromium-specific shortcuts there were still system-wide ones which would still work. Depending on your operating system, Ctrl + Q/W will always close a window or quit a program, as will Alt + F4; Super/Windows + D will show the desktop; and Super/Windows + E/Shift + E will open the Home folder. I needed to disable these system-wide.

LXQT has a gui for editing keyboard shortcuts. Whilst it doesn’t allow for completely removing a shortcut, it does allow a user to remap them.

As you can see from the screenshot above I "disabled" some common shortcuts by making them execute, well, nothing! Actually it runs ";", but that still has the effect of disabling them. Huzzah! But what about the other keyboard shortcuts, I hear you ask. Well, this is where I rely on the ignorance of the users. Y'see, as much as it is used within Android phones and on most web servers, Linux/Ubuntu is still used by a relatively small number of people. Even smaller is the number of people using Lubuntu or another LXQT-based Linux distribution. And even smaller is the number that work in the arts, in Birmingham, and would be at Vivid Projects during three weeks in September, and knew how I installed the work, and… I think you get my point.

During the exhibition anyone could have pressed Ctrl + Shift + T to open a terminal, run killall bcc.sh to kill the script that reopens Chromium, undo the shortcut remappings and then played Minecraft. I was just counting on the fact that few would know how to and few would have a reason to. After all there was some really great art on the screens!

After the exhibition was installed, Jessica Rose suggested that one simple solution would have been to disable the Ctrl key. It's extreme, but technically it would have worked to stop users from getting up to mischief. It would, however, have had the negative effect of preventing me, an administrator, from using the computer to, for example, fix any errors. The solution I implemented, whilst not bulletproof, worked.
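For completeness, here's a rough sketch (untested in this setup) of how disabling the Ctrl key could be done under X11 with xmodmap. As noted above, it would also lock out an administrator until the keys are restored.

# Sketch: strip both Ctrl keys from the control modifier (X11 only).
xmodmap -e "remove control = Control_L Control_R"
# And to restore them afterwards:
xmodmap -e "add control = Control_L Control_R"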

That's the end of December's Development Updates. Installing Bcc was frustrating at times but did push me to think more about how people interact with technology in a gallery installation setting. It's never just a case of buying expensive hardware and putting it in front of people. There need to be processes, either hardware or software based, that protect the public and the artwork. It doesn't help that lots of technology is built to be experienced/used by one user at a time (it's called a PC (personal computer) for a reason, y'all). Change is no doubt coming to make it more about groups and collaboration but, y'know, it'll take time.

Installing Bcc: at Vivid Projects part 2

The next artwork that was challenging to install was Monuments: Psychic Landscapes by Scott Benesiinaabandan.

Bcc:

I won’t be showing the full artwork as all of the artworks were exclusive to Bcc: and it’s up to the artists whether they show it or not. On a visual level the basic premise of the artwork is that the viewer visits a web page which loads an artwork in the form of a Processing sketch. There is a statue in the centre which becomes obscured by lots of abstract shapes over time whilst an ambient soundtrack plays in the background. At whatever point the viewer chooses they can refresh the screen to clear all of the shapes, once again revealing the statue.

On a technical level the artwork isn’t actually that difficult to install. All that needs doing is opening the web page. The difficult part is controlling user interaction.

If you've ever been to an exhibition with digital screen-based artworks which allow user interaction via a mouse, keyboard or even touch screen, then you've probably seen those same screens not functioning as intended. People always find a way to exit the installation and reveal the desktop or, worse yet, launch a different program or website. So, the choice was made very early on to automate the user interaction in this artwork. After all, aside from loading the artwork, the only user interaction needed was to press F5 to refresh the page. How hard could it be?

Well, it’s very hard to do. Displaying the artwork required two main steps:

  • Launch the web page
  • Refresh the artwork after x seconds

Launch a web page

Launching a specific web page on startup is a relatively easy task. Raspbian by default comes bundled with Chromium so I decided to use this browser (more on that later). The Chromium Man Page says that in order to launch a webpage you just need to run chromium-browser http://example.com. Simple! There’s lots of ways to run a command automatically once a Raspberry Pi is turned on but I settled on this answer and placed a script on the Desktop, made it executable (chmod +x script.sh), and in ~/.config/lxsession/LXDE-pi/autostart I added the line @sh /home/pi/Desktop/script_1.sh. At this stage the script simply was:

#!/bin/bash

while true ; do chromium-browser --noerrdialogs --kiosk --app=http://example.com ; done

I’ll break it down in reverse order. --kiosk launches the browser but in full screen and without the address bar and other decorations. A user can still open/close tabs but since there’s no keyboard interaction this doesn’t matter. --noerrdialogs prevents error dialogs from appearing. In my case the one that kept appearing was the Restore Pages dialog that appears if you don’t shut down Chrome properly. Useful in many cases, but since there’s no keyboard I don’t want this appearing.

I wrapped all of this in a while true loop to safeguard against mischievous people who somehow manage to hack their way into the Raspberry Pi (ssh was disabled), or against Chromium shutting down for some reason. Whenever Chromium closes, the loop simply launches it again. This will become very important for the next step.
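For reference, the startup wiring described earlier boils down to something like this, assuming the script is saved as /home/pi/Desktop/script_1.sh (adjust the path and filename to your own setup):

# Make the startup script executable
chmod +x /home/pi/Desktop/script_1.sh
# Have the LXDE session run it on login
echo '@sh /home/pi/Desktop/script_1.sh' >> ~/.config/lxsession/LXDE-pi/autostart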

Refresh a web page

This is surprisingly difficult to achieve! As mentioned before, this piece requires a user to refresh the page at whatever point they desire. As we were automating this we decided that we wanted a refresh every five minutes.

Unfortunately Chromium doesn't have any options for automatically refreshing a web page. There are lots of free plugins that offer automatic refreshing. However, at the time that I tried them they all needed to be manually activated. I couldn't just set it and forget it. It could be argued that asking a gallery assistant to press a button to activate the auto refreshing isn't too taxing a task. However, automating it ensures that it will always definitely be done.

At this point I looked at other browsers. Midori is lightweight enough to be installed on a Raspberry Pi. It has options to launch a web page from the command line and, according to this Stackexchange answer it has had the option since at least 2014 to refresh a web page using the -i or --inactivity-reset= option. However, I tried this and it just wasn’t working. I don’t know why and couldn’t find any bug reports about it.

It was at this point that I unleashed the most inelegant, hacky, don't-judge-me-on-my-code-judge-me-on-my-results, horrible solution ever. What if, instead of refreshing the browser tab, I refreshed the browser itself, i.e. closed and reopened it? I already had a while true loop to reopen it if it closed, so all I needed was another command or script that would send the killall command to Chromium after a specific amount of time (five minutes). I created another script with this as its contents:

#!/bin/bash

while true ; do sleep 300 ; killall chromium-browser ; done

The sleep command makes the script wait 300 seconds (five minutes) before proceeding to the next part, which is to kill (close) chromium-browser. And, by wrapping it in a while true loop, it'll do this until the end of eternity (well, the exhibition). Since implementing this I noticed a similar answer on the Stackoverflow site which puts both commands in a single file.
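A combined version might look something like the sketch below (untested as a single file, but it should behave the same way):

#!/bin/bash
# Sketch: relaunch Chromium in kiosk mode and kill it every 300 seconds,
# which has the effect of "refreshing" the page.
while true ; do
    chromium-browser --noerrdialogs --kiosk --app=http://example.com &
    sleep 300
    killall chromium-browser
done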

And there you have it. To refresh a web page I basically have to kill it every 300 seconds. More violent than it needs to be!