Back at it again with the Year in Review blog post! As suggested in 2018, this year I focused more on my own independent projects and on building my practice. Here are some of the highlights.

January

Started January with my first visit to ICLC, which was held in Madrid. Olivia Jack and I led a meetup and discussion for visualists. I compiled some of the notes from that meetup here, and then I joined Class Compliant Audio Interfaces for a performance.

In January I also publicly announced my departure from Vivid Projects. I had been in this role in one fashion or another since 2010. I enjoyed so much of it and learned a tonne but decided I needed to focus on my own work.

I made some animations for Plasma Bears, which is a “collectible crafting and questing game”.

data.set, which was originally commissioned by Open Data Institute for the Thinking Out Loud exhibition, was part of the Forward exhibition at Ikon Gallery/Medicine Bakery. I did an interview with Ikon Gallery about my thoughts on being an artist in Birmingham.

Forward: New Art from Birmingham

I also started mentoring for Random String again. You can read about the progress of one of my mentees here.

February

Despite being relatively new to live coding music I took part in the Toplap 15th Anniversary [Live] Stream.

It was scary performing to the whole internet but it was fun!

I started working with Barbican again on a series of workshops as part of their Life Rewired season. To help launch their new season I performed with Emma Winston/Deerful at their launch night.

Life Rewired Launch – Young Barbican Nights

Also in February an Algorave documentary produced by Edited Arts was published on Resident Advisor.

March

The biggest event of March was most definitely going to SXSW in Texas to present Algorave!

Lush Presents Algorave: Live Coding Party

The Algorave featured ALGOBABEZ, Alexandra Cardenas, Belisha Beacon, co34pt, Coral Manton, hellocatfood (that’s meee), Scorpion Mouse and Renick Bell.

Many thanks to Joanne Armitage who took the lead on planning this and to British Underground and Lush for the support.

We did an interview with the SXSW magazine to promote our events there.

Quite soon after landing back in the UK I created new work for the V&A’s Friday Late event Copy / Paste.

Friday Late - Copy / Paste - March 2019

The two video works, called Visually Similar, look at how a false narrative can be created through images found on the internet. I wrote a bit about the process of making this work in June’s Development Update.

I also became an Artist Adviser for Jerwood. I was already familiar with the gallery as I had exhibited with them in 2016 as part of Common Property. It was an honour to be invited back to have a role in shaping how they fund the arts.

Also at the beginning of the month I curated the opening of Black Hole Club, which was my final event for Vivid Projects/Black Hole Club.

April

As if SXSW wasn't exciting enough, in April I performed at an Algorave at the British Library.

It was one of the more unconventional places I’ve played but still highly enjoyable.

Later that month I performed at the Afrotech Fest opening party, and the artwork I made for Fermynwood's programme Toggler went online. I had also been interviewed by Lynae Cook for her podcast BTS whilst at SXSW; that interview went online in April too.

May

In early May the Time Portals exhibition at Furtherfield opened its doors. I collaborated with Studio Hyte to create a billboard which could be scanned to reveal an augmented reality artwork.

Time Portals: Antonio Roberts

There’s an overview video featuring all of the artists including me and Arjun Harrison-Mann from Studio Hyte.

AlgoMech was back for 2019 and I was present to perform with CCAI, do a solo music performance, and also to exhibit in their exhibition Patterns of Movement.

Patterns of Movement

I exhibited a video and print work called A Perfect Circle in which I captured the movement of trying to draw shapes. Quite a departure from my usual work but I liked the performative nature of it. I wrote a bit about the technical challenges of making it here.

Elsewhere two articles about Algorave were published, one in Riffs and another in The Times.

June

One of my biggest exhibitions of the year was the group show Wonder, curated by Rachel Marsden.

Wonder

Wonder

For my work in this exhibition I continued my critique of Disney as a company that has negatively impacted copyright law. I also created a slightly sinister wonderland (video will be online some time in 2020). You can take a virtual tour of the exhibition through this YouTube video or a page on the Google Arts and Culture website.

I was back in Manchester to do a presentation and some mentoring for Manchester International Festival's Creative Lab programme.


(that video features some of my very early live coding music!)

I also returned to regular blogging with a series called Development Updates. Through this series I want to demystify the "magic" of creating digital art and show that there's still a lot of problem solving, hacking, and messiness that goes into creating a "finished" artwork or exhibition. Follow the development-update tag to see all of them.

Elsewhere I revamped the Proxy Pavilions artworks for the Vague but Exciting exhibition at Vivid Projects and was on Matthew Evans’ podcast sharing some of the songs that influence me and talking about being an artist in Birmingham. I also played a huuuuge Algorave at Corsica Studios. To prepare for this I started live streaming my rehearsals.

July

I headed out to the city of Nevers, which is not far from Paris, to take part in NØ SCHOOL NEVERS as one of their teachers. It was definitely a school, but kinda like one without textbooks or lesson plans. We all learnt from each other and explored some really experimental stuff, like Daniel Temkin's esoteric programming languages, which use folders as their input!

NØ SCHOOL NEVERS

After a busy first half of the year it was really nice to spend a week with like-minded people learning about art and tech, eating great food and occasionally relaxing on a beach 🙂 I'm feeling a strong eight to light nine on this experience.

In July it was also announced that I had joined a-n Artists Council.

Click to embiggen. Photo by Joel Chester Fildes

You may remember that I ran one of their Assembly events in June 2018. I'm really happy to be part of this group and hope to bring my perspective as a digital artist based in the West Midlands.

August

As part of the Wonder exhibition I organised an Algorave at The Herbert, which also happened to be Coventry’s first Algorave! I invited Lucy aka Heavy Lifting, Innocent, Carol Breen, and newcomer Maria Witek who I collaborated with on music.

Algorave Coventry

It was for sure one of my favourite Algoraves! The staff at The Herbert were lovely and prepared the venue and equipment perfectly, the performers were ace and the crowd brought great energy.

Fellow visualist Rumblesan released his live coding software Improviz in July. Think of it a bit like LiveCodeLab, but it runs on the desktop and you can use your own image and gif textures. For the occasion he commissioned me to make some gifs that would come preloaded with the software.

In August I made the Blender files available to the public for y’all to experiment with. I definitely think you should try Improviz out!

September

September started with me being featured in a BBC Radio 4 documentary about copyright and the relationship between artists and brands/corporations.

Art of Now – Sell Out featured me alongside artists including Nan Goldin and Gary Hume, each of us giving our thoughts on brands and art. It's still online so go listen.

Also at the beginning of the month the Bcc: exhibition opened at Vivid Projects. I’d previously taken part in the online version of this in 2018 and for the IRL exhibition I acted as Producer. It was a technically challenging exhibition to install which I wrote about in three Development Updates in December.

Bcc:

Bcc:

It made me really happy to see so much digital art being exhibited in Birmingham, and it was great to meet the Editor of Decoy Magazine, Lauren Marsden, IRL.

Elsewhere I was a judge for the Digital Art category for Koestler Arts’ exhibition Another Me which took place at Southbank. I sadly didn’t get to see the exhibition in person but it was inspiring to see the work coming from people in prisons.

I organised an Algorave for Llawn in Llandudno and then I exhibited a rather odd artwork for the Odds exhibition at TOMA in Southend-on-Sea.

Odds

I exhibited a video showing me attempting to compile Blender, as a way to show that sometimes making digital art involves a lot of waiting!

Aaaaaand Ian Davies photographed myself and Emily Jones as part of his Brum Creatives project.

October

October was quiet-ish. I did two Algoraves in two countries in 18 hours! The first was at OHM in Berlin and was organised by Renick Bell. I then made my way to Walthamstow to do an “Algowave“, which was basically a more ambient rave. Radical Art Review did a feature on the event.

I made a rerecording of the performance and put it on Soundcloud:

Following on from the Assembly events in 2018, the event organisers (myself, Thomas Goddard, who organised the Cardiff event, and Joanna Helfer, who organised the Dundee event) embarked on a week-long journey to each of our respective cities to check out the art scenes and reflect on how arts organisations were responding to the challenges they faced. It was a tiring but very inspiring week.

a-n bursary - Birmingham, Cardiff, Glasgow

Also in October I was commissioned to make some work for the Shakespeare Birthplace Trust. It’s on view in Stratford until October 2020 so go see it!

Will's Kitchen Artistic Commissions - Abundant Antiques

Finally, I was on the selection panel for the Collaborate exhibition at Jerwood Arts and in October the exhibition opened.

November

By far the biggest event of November was my solo exhibition, We Are Your Friends, which took place at Czurles Nelson Gallery in Buffalo, NY.

We Are Your Friends

We Are Your Friends

We Are Your Friends

It's my second solo exhibition and the first time I've made a multichannel video. I had a really great time, which included a trip to Niagara Falls. Many thanks to Brent Patterson for working so hard to make it happen.

Not even two days after landing back in the UK I was in Berlin to take part in Right the Right at Haus der Kulturen der Welt. The festival explored “Ideas for Music, Copyright and Access”. My video Unauthorised Copy was on show throughout the exhibition, I performed at an Algorave and I was in conversation with Beijing-based musician Howie Lee. You can listen to our conversation below.

Also in November I Am Sitting in a Room was exhibited at Gamerz festival in Aix-en-Provence in France. That piece is nearly ten years old!

December

As usual December was very quiet, and it was much needed after being away from home for nearly a month in November. I didn’t exhibit anything but I did use this month to prepare for stuff happening in 2020. It’s been my first year being completely freelance and I think it’s gone really well! My plans for next year are to do much of the same but also look into working with/for a gallery, and maybe even a slight career change. More on that as it happens. 2019 was ace. Thanks to everyone who helped make it great!

Addictions and Habits

Bcc:, Decoy Magazine's monthly e-mail subscription programme, ended in 2019. Back in 2018 I had made an exclusive artwork for it that was only available to subscribers, and it was then shown in September 2019 at the IRL exhibition at Vivid Projects. If y'all didn't catch that show, here's my work below:

When you identify something toxic in your life you recoil from it, only to be drawn back in again and again. Addictions and Habits is inspired by how technologies built on the idea of enriching our lives have only amplified our anxieties and made us more physically and emotionally vulnerable.

Here’s the really nice essay from Lauren Marsden which accompanied the release of the artwork:

This month, we are very honoured to be featuring UK-based artist and curator Antonio Roberts. With an extensive body of work that entangles glitch, appropriation, sculpture, screens, digitalia, and interaction, he is well suited for the task of questioning and confronting the limitations of copyright law and the corporate appropriation of cultural aesthetics and technologies. Here, with Addictions and Habits, we can imagine either side of the issue. For one, the hand of the creator that opens itself freely to the gesture of sharing, remixing, re-circulating (ad infinitum), and then, perhaps, the other hand—the one that closes the deal, signs the cheque, gives a comforting pat on the back, or plucks an idea out of the ether to secure its containment and regulation. Within this paradox, we enjoy the exuberance of Antonio’s work and see a space for liberation among his many fragments and shatters.

Thanks to Lauren Marsden for including me in Bcc: 🙂

Development Update – December 2019 part 3

In the final part of this three-part Development Update I'll be going over installing Xuan Ye's work in the Bcc: exhibition. This work posed a similar challenge to Scott Benesiinaabandan's work: I needed to automatically load a web page, except this time I also needed to allow for user interaction via the mouse and keyboard.

The artwork isn’t online so I’ll again go over the basic premise. A web page is loaded that features a tiled graphic with faded captcha text on top of it. The user is asked to input the text and upon doing so is presented with a new tiled background image and new captcha. This process is repeated until the user decides to stop.

Bcc:

I could have installed this artwork on a Raspberry Pi but thankfully I had access to a spare Lenovo ThinkPad T420 laptop, which negated the need for me to buy a keyboard and screen (#win). The laptop is a refurbished model from 2011 and was running Windows 7 when I got it. It is possibly powerful enough to handle a full installation of Ubuntu but I didn't want to risk it running slowly, so instead I installed Lubuntu, which is basically a lightweight version of Ubuntu.

As I had already installed Scott's work I knew how to automate the loading of a web page and how to reopen it should it be closed. The main problem was how to restrict interaction and keep users from deviating from the artwork. Figuring this out became a cat and mouse game and was never 100% solved.
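
For reference, the relaunch script for this work (the bcc.sh mentioned further down) followed the same pattern as the script described in part 2. Here's a minimal sketch of it, with a placeholder URL standing in for the real one:

#!/bin/bash

# bcc.sh - relaunch Chromium in kiosk mode whenever it closes
while true ; do chromium-browser --noerrdialogs --kiosk --app=http://example.com ; done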

Whilst Chromium is in kiosk mode pretty much all of the keyboard shortcuts still work. This means that a moderately tech-savvy user could press Ctrl + T to open a new tab, Ctrl + O to open a file, Ctrl + W to close the browser tab, Alt + F4/Ctrl + Q to quit the browser, or basically any other shortcut to deviate from the artwork. Not ideal!

Bcc:

My first thought was to try and disable these shortcuts within Chromium. As far as I could tell at the time there wasn't any option to change keyboard shortcuts. There must be usability or security reasons for this, but in this situation it sucks. After a bit of searching I found the Shortkeys extension, which allows for remapping of commands from a nice gui 🙂 Only one problem: I tried to remap/disable the Ctrl + T command and got an error.


More information here.

Drats! I tried its suggestion and it still didn't work. Double drats! Eventually I realised that even if I did disable some Chromium-specific shortcuts there were still system-wide ones which would still work. Depending on your operating system, Ctrl + Q/W will always close a window or quit a program, as will Alt + F4; Super/Windows + D will show the desktop; and Super/Windows + E/Shift + E will open the Home folder. I needed to disable these system-wide.

LXQt has a gui for editing keyboard shortcuts. Whilst it doesn't allow for completely removing a shortcut, it does allow a user to remap them.

As you can see from the screenshot above I "disabled" some common shortcuts by making them execute, well, nothing! (Actually each one runs ";", but that still has the effect of disabling it.) Huzzah! But what about the other keyboard shortcuts, I hear you ask. Well, this is where I rely on the ignorance of the users. Y'see, as much as it is used within Android phones and most web servers, Linux/Ubuntu is still used by a relatively small number of people. Even smaller is the number of people using Lubuntu or another LXQt-based Linux distribution. And even smaller is the number who work in the arts, in Birmingham, would be at Vivid Projects during three weeks in September, knew how I installed the work, and… I think you get my point.

During the exhibition anyone could have pressed Ctrl + Shift + T to open a terminal, run killall bcc.sh to kill the script that reopens Chromium, undone the shortcut remappings and then played Minecraft. I was just counting on the fact that few would know how to and fewer would have a reason to. After all, there was some really great art on the screens!

After the exhibition was installed, Jessica Rose suggested that one simple solution would have been to disable the Ctrl key. It's extreme, but technically it would have worked at stopping users from getting up to mischief. It would also have had the negative effect of preventing me, an administrator, from using the computer to, for example, fix any errors. The solution I implemented, whilst not bulletproof, worked.
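
For the curious, on an X11 setup like this one Jessica's suggestion could be sketched with xmodmap. This isn't what I used, and the restore line is there because you'd lock yourself out of Ctrl shortcuts too:

#!/bin/bash

# Stop both Ctrl keys acting as the Control modifier (extreme, as noted above)
xmodmap -e "remove control = Control_L Control_R"
# To restore them later for admin work:
# xmodmap -e "add control = Control_L Control_R"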

That's the end of December's Development Updates. Installing Bcc: was frustrating at times but did push me to think more about how people interact with technology in a gallery installation setting. It's never just a case of buying expensive hardware and putting it in front of people. There need to be processes, either hardware or software based, that protect the public and the artwork. It doesn't help that lots of technology is built to be experienced/used by one user at a time (it's called a PC (personal computer) for a reason, y'all). Change to make it more about groups and collaboration is no doubt coming but, y'know, it'll take time.

Development Update – December 2019 part 2

The next artwork that was challenging to install was Monuments: Psychic Landscapes by Scott Benesiinaabandan.

Bcc:

I won’t be showing the full artwork as all of the artworks were exclusive to Bcc: and it’s up to the artists whether they show it or not. On a visual level the basic premise of the artwork is that the viewer visits a web page which loads an artwork in the form of a Processing sketch. There is a statue in the centre which becomes obscured by lots of abstract shapes over time whilst an ambient soundtrack plays in the background. At whatever point the viewer chooses they can refresh the screen to clear all of the shapes, once again revealing the statue.

On a technical level the artwork isn’t actually that difficult to install. All that needs doing is opening the web page. The difficult part is controlling user interaction.

If you've ever been to an exhibition with digital screen-based artworks which allow user interaction via a mouse, keyboard or even touch screen then you've probably seen those same screens not functioning as intended. People always find a way to exit the installation and reveal the desktop or, worse yet, launch a different program or website. So, the choice was made very early on to automate the user interaction in this artwork. After all, aside from loading the artwork, the only user interaction needed was to press F5 to refresh the page. How hard could it be?

Well, it’s very hard to do. Displaying the artwork required two main steps:

  • Launch the web page
  • Refresh the artwork after x seconds

Launch a web page

Launching a specific web page on startup is a relatively easy task. Raspbian by default comes bundled with Chromium so I decided to use this browser (more on that later). The Chromium man page says that in order to launch a web page you just need to run chromium-browser http://example.com. Simple! There are lots of ways to run a command automatically once a Raspberry Pi is turned on, but I settled on this answer and placed a script on the Desktop, made it executable (chmod +x script.sh), and in ~/.config/lxsession/LXDE-pi/autostart added the line @sh /home/pi/Desktop/script_1.sh. At this stage the script was simply:

#!/bin/bash

while true ; do chromium-browser --noerrdialogs --kiosk --app=http://example.com ; done

I’ll break it down in reverse order. --kiosk launches the browser but in full screen and without the address bar and other decorations. A user can still open/close tabs but since there’s no keyboard interaction this doesn’t matter. --noerrdialogs prevents error dialogs from appearing. In my case the one that kept appearing was the Restore Pages dialog that appears if you don’t shut down Chrome properly. Useful in many cases, but since there’s no keyboard I don’t want this appearing.

I wrapped all of this in a while true loop to safeguard against mischievous people who somehow manage to hack their way into the Raspberry Pi (ssh was disabled), or against Chromium shutting down for some reason. It's basically checking to see if Chromium is open and, if it isn't, launching it. This will become very important for the next step.

Refresh a web page

This is surprisingly difficult to achieve! As mentioned before, this piece requires a user to refresh the page at whatever point they desire. As we were automating this we decided that we wanted a refresh every five minutes.

Unfortunately Chromium doesn't have any options for automatically refreshing a web page. There are lots of free plugins that offer automatic refreshing; however, at the time that I tried them they all needed to be manually activated. I couldn't just set it and forget it. It could be argued that asking a gallery assistant to press a button to activate the auto refreshing isn't too taxing a task. However, automating it ensures that it will always definitely be done.

At this point I looked at other browsers. Midori is lightweight enough to be installed on a Raspberry Pi. It has options to launch a web page from the command line and, according to this Stackexchange answer, it has had an option since at least 2014 to refresh a web page using the -i or --inactivity-reset= option. However, I tried this and it just wasn't working. I don't know why and couldn't find any bug reports about it.
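
For reference, based on that answer the invocation I was trying looked something like this (a sketch; the URL is a placeholder):

# Launch Midori fullscreen in app mode, reloading the page after 300 idle seconds
midori -e Fullscreen -a http://example.com -i 300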

It was at this point that I unleashed the most inelegant, hacky, don't-judge-me-on-my-code-judge-me-on-my-results, horrible solution ever. What if, instead of refreshing the browser tab, I refreshed the browser itself, i.e. closed and reopened it? I already had a while true loop to reopen it if it closed, so all I needed was another command or script that would send the killall command to Chromium after a specific amount of time (five minutes). I created another script with this as its contents:

#!/bin/bash

while true ; do sleep 300 ; killall chromium-browser ; done

The sleep command makes the script wait 300 seconds (five minutes) before proceeding to the next part, which is to kill (close) chromium-browser. And, by wrapping it in a while true loop, it'll do this until the end of eternity, er, the exhibition. Since implementing this I noticed a similar answer on Stackoverflow which puts both commands in a single file.
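
For completeness, here's a sketch of that single-file approach, combining the relaunch loop with the timed kill (again with a placeholder URL):

#!/bin/bash

# Launch Chromium in the background, wait five minutes, then kill it.
# The loop immediately relaunches it, effectively refreshing the page.
while true ; do
    chromium-browser --noerrdialogs --kiosk --app=http://example.com &
    sleep 300
    killall chromium-browser
done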

And there you have it. To refresh a web page I basically have to kill it every 300 seconds. More violent than it needs to be!

Development Update – December 2019 part 1

I took a bit of a break from writing the Development Updates. September was pretty busy with Bcc: (more on that below) and then I was completing a commission for Will’s Kitchen/The Shakespeare Birthplace Trust and preparing for my solo exhibition, We Are Your Friends.

With all of that now completed I’m writing a few posts about one project in particular: Bcc:

The Bcc: exhibition opened at Vivid Projects on Friday 6th September. It was a collaboration between Vancouver-based Decoy Magazine and Birmingham-based Vivid Projects. The exhibition featured a curated selection of works from Decoy Magazine’s online art subscription service called Bcc:. The basic premise is that each month you’d get specially commissioned art in your e-mail inbox.

Bcc:

Bcc:

After being part of Bcc: in 2018 I suggested to Lauren Marsden, the Curator and Editor of Decoy Magazine, that it could possibly become an IRL exhibition at Vivid Projects. At the time I was still working there so I worked on getting most things in place to get the exhibition going. Then I left in 2019. Because of my prior involvement in Bcc: and the massive technical challenge involved in installing the work (more on that later) I was asked to produce the exhibition.

Depending on how you look at it the technical aspect of installing the exhibition could be very simple. Most of the works in Bcc: were short movies and animations/gifs, and Vivid Projects has long used the Adafruit Raspberry Pi Video Looper to handle playing videos.

Some works, however, required more attention. There were some works that were interactive websites, some that were animated gifs and some that required additional hardware. Prior to the exhibition this probably didn't present any problems, as each work was most likely viewed by one person on their personal phone or computer. The challenge comes when it's on a shared computer in a public environment. Additionally, operating the works needed to be as hands-off as possible. That is, I didn't want it to be the case that myself or another technician had to be on hand every day to go through complicated procedures to turn on all of the work. They needed to be automatic. With 17 works each needing their own computer/Raspberry Pi there was a lot to prepare. Over the next few posts I'll take you through some of the works and their technical challenges:

Playing gifs on a Raspberry Pi

Of the 17 works on show in the exhibition 10 were animated gifs. To stay true to the small nature of animated gifs (don’t get me started on the concept of HD gifs) we decided to display the gifs on the Official Raspberry Pi 7″ Touchscreen Display. This proved to be a really good decision overall. It required that visitors get really close to the works and spend time with a format that can sometimes be a bit throwaway.

Bcc:

As mentioned before, for a long time Vivid Projects has used the Adafruit Raspberry Pi Video Looper software to play videos. It works (mostly) great with the exception that it doesn't play animated gifs. The main underlying software, omxplayer, only supports video files. Even the supplied alternative player, hello_video, also only plays video files.

Your immediate thought might be to just convert the animated gifs to video files. Whilst this "works" there is always the danger that in converting a file you reduce its quality. For an artist like Nicolas Sassoon, who makes pixel-perfect animations that match a specific screen size, this would be unacceptable. So I went on a journey to find a way to play gifs.

The requirements for the software were that it should operate in a similar way to the Adafruit software and play a gif on loop with little or no pause between loops. It should play in the framebuffer (i.e. without needing to load the desktop) and it should make use of the GPU (which helps prevent screen tearing). And, as a bonus, it should be able to play a series of gifs one after the other. Simple, right?

TL;DR: There isn’t a reliable way, I had to convert to a video.

Some of the suggested solutions used ImageMagick to play the gifs. This wouldn't work as I would need to launch the desktop. I'd then need to script it to go full screen, centre the gif, change the background to black etc.

FBI and FIM don’t support animated gifs, although they are useful if you ever want to play a slideshow of static images.

feh is another image viewer that uses the framebuffer. However, it also doesn’t support animated gifs and, according to this response from the author, this is by design.

This suggested solution of converting to images kinda works but doesn't take into account that each animation frame can have a different duration (see this GIMP tutorial for an example). With that in mind, for this to work I would need to get the duration of each frame in each of the 10 gifs, separate the gifs into their individual frames, and then tell feh to play each frame for its specified duration. So, this method could work but it would require a lot of work!
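
To give an idea of what that would have involved, here's a sketch of the first two steps using ImageMagick (filenames are placeholders):

# Split the gif into its individual frames
convert input.gif -coalesce frame_%03d.png

# Print each frame's delay in centiseconds, one per line
identify -format "%T\n" input.gif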

This thread on the Raspberry Pi forum did provide a possible solution which I didn't try, but it also pointed me to FBpyGIF, which was certainly the most promising of the solutions. However, a couple of problems prevented me from using it. Still very promising though!

Finally, I tried one of the various GIF Frames that play a folder of animated gifs on loop. Sounds like it works but there’s screen tearing on some fast-moving gifs. I’m guessing this is because it doesn’t have hardware acceleration and/or because it uses Chromium to play the gifs.

Soooooo after all of this I felt a bit defeated and decided to just convert the animated gifs to videos. I used Handbrake and noticed no loss of quality in the conversion. Even if there was, on a 7-inch screen it'd be quite hard to see. Using the Adafruit player/omxplayer I was initially having some issues with aspect ratio. Even with --aspect-mode set to fill, stretch or letterbox, the videos were being stretched to fill the screen. To illustrate, take the following video, which is 1024×768 (4:3).


(fyi it was made using Natron and this script to add in a timecode)

When played on the screen it is stretched to fill it.

The Raspberry Pi touch screen has a resolution of 800 x 480, which is a 5:3 aspect ratio. Most of the videos and animated gifs were HD/16:9 so would be letterboxed by default.

So I had the bright idea of padding each video so that it was exactly 800×480.
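
I did the conversions with Handbrake, but for anyone reproducing this, ffmpeg can do the scale-and-pad in one pass. A sketch, with placeholder filenames:

# Scale to fit within 800x480, then pad with black to exactly 800x480
ffmpeg -i input.mp4 \
  -vf "scale=800:480:force_original_aspect_ratio=decrease,pad=800:480:(ow-iw)/2:(oh-ih)/2" \
  -pix_fmt yuv420p -c:v libx264 output.mp4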

Now, the Adafruit player/omxplayer says it can play any video which is H.264 encoded, but I've had some troubles in the past, so whenever I'm given a video I usually convert it using Handbrake with the Fast 1080p30 preset. These settings have always worked for me but for some reason on this occasion the video was stuttering a lot! What was strange was that the original videos (the animated gifs converted to videos without resizing) played fine, even after they were run through Handbrake. Why did they stutter when they were converted to 800×480?

It was two days before the exhibition opening when I remembered that some time in 2016 I had an issue with omxplayer in that it didn't play videos if the video didn't have an audio track. Why? I don't know. Maybe audio was the problem in this scenario too? It was worth a try, so I decided to disable the audio track using the -n -1 option. This doesn't just turn the audio down, it disables decoding of the audio track entirely. And guess what. IT WORKED!

I have absolutely no idea why this worked or why the error occurred in the first place. Here are the extra arguments that I included on line 107 of video_looper.ini.

extra_args = --no-osd --audio_fifo 0.01 --video_fifo 0.01 -n -1 --aspect-mode stretch

All of that just to play animated gifs! Now that I had the code perfected, copying it to all of the other Raspberry Pis was simple. If the aforementioned programs supported animated gif playback by default this would've been solved much more quickly, but for now it seems the most reliable way to play animated gifs on a loop on a Raspberry Pi is to convert them to video.

We Are Your Friends, 11th – 27th November

Happy to announce that my second solo exhibition, We Are Your Friends, will be taking place from 11th – 27th November at the Czurles Nelson Gallery at Buffalo State University, NY.

There’s a reception on 14th November 17:00 – 19:00 and I’ll be present in Buffalo from 11th – 17th. Come say hi! Many thanks to Brent Patterson for making this happen 🙂

Improviz gifs

Earlier this year fellow visualist and live coder Rumblesan commissioned me to make some gifs for his new live coding software, Improviz. In July he unleashed it into the world!

Looking at the above videos you could easily be forgiven for thinking that it looks a bit like LiveCodeLab. He is, after all, one of the developers of LiveCodeLab. However, Improviz differs in a few ways. As Rumblesan himself explains in the Toplap chat:

the language in Improviz has a lot in common with live code lab, and the basic functionality for shapes, styles, transformations and loops is all pretty much the same. but in terms of implementation and usage they’re very different

lcl is using three.js as an intermediary, whilst improviz is entirely haskell and uses opengl directly (which I think long term is going to cause me grief but we’ll see haha)

the major difference is that improviz lets you use images and gifs as textures, which is something I’d like to back port to lcl, but wouldn’t be a small task unfortunately

That’s right, you can load textures! As mentioned before Rumblesan commissioned me to make a set of gifs to go along with the initial public release. They’re all released under a Creative Commons Attribution licence so you’re free to use them as you wish as long as you attribute me.

As an added bonus I’m also releasing the .blend file that was used to make each one.

Click here to download the Blender files.

These were made using a beta version of Blender 2.80. I’ve tested them in the stable release and they appear to work fine but they definitely will not work in 2.79 or earlier versions. I’m providing these for you to explore and won’t be doing a writeup/tutorial on how they work. If you remix them please share what you make 🙂

Definitely give Improviz a try! Thanks to Rumblesan for commissioning me to make the gifs 🙂

Development Update – August 2019

What’s happening on Twitter

The following is compiled from a bunch of tweets that I made in December 2018. After reading you'll see why I have to write it here! While it is not directly related to programming or making art, it does help with Getting Things Done, so I decided to include it here.

Like many people I've started to remove myself from a lot of social media websites. First was Facebook in 2017. The reason for this is that I was really annoyed that it was using nostalgia to manipulate me into staying on the website. In shoving 10-year-old photos into my view through the On This Day feature it was giving me little hits of dopamine by reminding me of the good ol' times, even if they were 10 years ago with people that, for whatever reason, are no longer part of my life.

One solution to this was to make sure that Facebook only had recent information about me. I started manually deleting anything that was more than two years old. I eventually found a Chrome plugin (use at your own risk) that made it easier to do, but this process was a chore that ultimately didn't solve the fact that Facebook itself was the problem. After about a year I left unannounced. After deleting my account, of course.

My “relationship” with Twitter is a bit different. I’ve always preferred it over Facebook as it isn’t as intrusive, at least not directly. It doesn’t constantly ask you to share who you’re dating, identify your family, upload photos from your night out or tag your friends in everything. Instead it felt like it was more concerned with what was happening at that moment.

Like Facebook, though, I became a bit concerned with how much data about me it was storing. I started using the website in 2008 (Facebook in 2007) and have used it almost daily since then. Over that time I have grown and changed as a person many times over. I don't want this history to be fully documented and, more importantly, available for anyone to browse through. Whilst the majority of the 40k tweets I accumulated over that period probably consists mostly of cat gifs, memes and the word "lol", maybe there are events that I'd rather not have documented, like tweets showing friendships and relationships falling apart, embarrassing photos of myself or others on nights out, or even just me saying something that was totally out of order.

I’m glad that I have friends (and enemies) that have called me out on my bullshit and hope that they continue to point out times when I do something wrong. However, I’d rather that the trail of data I leave on these sites that I use every day reflected me as I am now, not who I was 10 or even 20 years ago.

So, I went on a mission to find a way to keep my Tweets current. I needed a tool, or tools, that would automatically delete Tweets older than a certain time period.

A lot has been written about Tweetdelete. However, I don't want to rely on a third-party service. Many people do trust the service, but there are always risks in using third-party services, especially when they have access to a lot of your information. Then there's the risk that it could one day shut down, so I decided that I wanted something that I could deploy myself.

Deploying your own script requires that you register a developer account on Twitter.

Delete tweets is a Python script that lets you delete tweets and specify a cut-off date. However, to run it you need to download your Twitter archive. At the time of writing this can only be done once a month and has to be done manually. So, you could automate the running of the script but there's still manual intervention required.

This Python script is similar but it lets you specify the cutoff as a number of days rather than a date. Still, it requires downloading your Twitter archive manually.

This Ruby script works perfectly! You specify a cutoff point in days and then, when it is run, it deletes any tweets older than that cutoff point. It even has the option to put in the IDs of tweets that you want to save. It only requires a developer account and you don't need to download your archive.

There's even a companion script that removes Likes. This doesn't have any options for a date cutoff but in my case it doesn't matter. Just because I've liked something once it doesn't mean that I like it (or anything else that person has posted) forever, so I'm not sure why I need to have my likes recorded and archived.

I decided to install both scripts on an always-on Raspberry Pi. Installing them took a bit of time due to the need to install a bunch of Ruby gems. Once they were installed I set up a cron job to run the scripts at regular intervals. I have mine set to run twice a day and to only keep the last two weeks of tweets. I feel that that is enough time for the tweets/memes to have whatever impact they're going to have. After two weeks they're gone.
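
For reference, the crontab entries look something like this (a sketch; the paths and script names are placeholders for wherever you've cloned the scripts):

# Run the tweet and likes deleters at 09:00 and 21:00 every day, logging output
0 9,21 * * * cd /home/pi/delete-tweets && ruby delete_tweets.rb >> /home/pi/cron.log 2>&1
0 9,21 * * * cd /home/pi/delete-likes && ruby delete_likes.rb >> /home/pi/cron.log 2>&1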

All of this effort to manage my experience of using Twitter might not be a solution and instead might be more of a distraction from the fact that the problem is Twitter, and maybe even social media in general. There have been many efforts from individuals to make social media better. On Facebook there is F.B. Purity which helps remove things like adverts, the On This Day feature and other things.

One of my favourite tools that I still use is the Facebook and Twitter Demetricator from Ben Grosser. These desktop-only tools remove mentions of the number of Likes, replies and retweets a post gets so that you can focus on the important things (the cat memes). These plugins have been getting a lot of attention recently. See Ben's Instagram for more.

This of course doesn’t solve social media’s problems but just makes my experience of it just that little bit less stressful.

Development Update – July 2019

Select objects of similar size in Inkscape

For the AlgoMech 2019 festival in June I created a new performative drawing piece, A Perfect Circle. The piece is about how we interface with computers that analyse our activities. It consists of a video and accompanying plotter drawings.

Making A Perfect Circle presented me with a few challenges. To make the video element I hacked together a couple of Processing scripts that did basic motion tracking by following a user-specified colour. It would draw these movements as lines, creating a new line (instead of adding to the existing one) at each major turn and giving each a unique colour.

The next stage was to export those drawn lines as SVGs (or PDFs) so that I could import them into Inkscape and then send them to a plotter. Fortunately Processing already has functions for exporting to SVG. Unfortunately for me, if I implemented this as suggested in the help file it would export both the drawn line and the background video as a still frame. I produced a very hacky workaround (with help from Ben Neal) which "works" but produces a few unwanted artefacts.

Before I go on I should probably explain what a plotter is as the unwanted artefacts relate to it. For this I will copy from the Wikipedia article on plotters:

The plotter is a computer printer for printing vector graphics. Plotters draw pictures on paper using a pen. In the past, plotters were used in applications such as computer-aided design, as they were able to produce line drawings much faster and of a higher quality than contemporary conventional printers, and small desktop plotters were often used for business graphics.

At home I have a Silhouette Cameo 2 vinyl cutter. When using this great Inkscape plugin I can bypass Silhouette’s proprietary software and send artwork directly to the cutter from Inkscape. Thanks to a pen holder adaptor I can replace the vinyl cutting blades with a pen and turn the vinyl cutter into a plotter 🙂

Back to the Processing sketch. The hacky code that I made produced the desired lines, but it also had lots of additional single-node paths/dots at the start of each line.

Removing these wouldn't be very easy. Using Edit > Select Same > Fill and Stroke or Fill Color or any of the other options wouldn't work as it would also end up selecting the lines. I then had the bright idea of selecting objects based on their size. All of the dots had a dimension of 4.057×4.000px, so in theory there could be an option like Edit > Select Same > Size. However, this is not so.

After a discussion on the Inkscape forum I opened a feature request on the Inkscape bug tracker to select objects of similar size. One thing I added to this was the idea of a threshold: using this you could select objects that were within n% of the size of the selected object. If you've ever used GIMP you will have seen a similar function in its fuzzy selection tool. This could definitely be useful if you trace bitmaps and the trace produces a lot of speckles. I also added a mockup to show how it could be applied to other options in the Edit > Select Same menu.

Anyway, at the moment this exists as a feature request. I think Inkscape is concentrating on delivering version 1.0 of the software so I don’t expect to see this implemented any time soon. As with anything in the land of open source, if you’ve got the skills to do this please contribute!

In the end I used fablabnbg’s Inkscape extension to chain all (or most) of the paths into one big path. This made selecting the dots easier as I could just hide the big path(s) once they were chained together.

After that it was a simple case of sending it to the plotter!