Improviz gifs

Earlier this year fellow visualist and live coder Rumblesan commissioned me to make some gifs for his new live coding software, Improviz. In July he unleashed it into the world!

Looking at the above videos you could easily be forgiven for thinking that it looks a bit like LiveCodeLab. He is, after all, one of the developers of LiveCodeLab. However, Improviz differs in a few ways. As Rumblesan himself explains in the Toplap chat:

the language in Improviz has a lot in common with live code lab, and the basic functionality for shapes, styles, transformations and loops is all pretty much the same. but in terms of implementation and usage they’re very different

lcl is using three.js as an intermediary, whilst improviz is entirely haskell and uses opengl directly (which I think long term is going to cause me grief but we’ll see haha)

the major difference is that improviz lets you use images and gifs as textures, which is something I’d like to back port to lcl, but wouldn’t be a small task unfortunately

That’s right, you can load textures! As mentioned before Rumblesan commissioned me to make a set of gifs to go along with the initial public release. They’re all released under a Creative Commons Attribution licence so you’re free to use them as you wish as long as you attribute me.

As an added bonus I’m also releasing the .blend file that was used to make each one.

Click here to download the Blender files.

These were made using a beta version of Blender 2.80. I’ve tested them in the stable release and they appear to work fine, but they definitely will not work in 2.79 or earlier versions. I’m providing these for you to explore and won’t be doing a writeup/tutorial on how they work. If you remix them please share what you make 🙂

Definitely give Improviz a try! Thanks to Rumblesan for commissioning me to make the gifs 🙂

Development Update – August 2019

What’s happening on Twitter

The following is compiled from a bunch of Tweets that I made in December 2018. After reading you’ll see why I have to write it here! While it is not directly related to programming or making art, it does help with Getting Things Done, so I decided to include it here.

Like many people I’ve started to remove myself from a lot of social media websites. First was Facebook in 2017. The reason for this is that I was really annoyed that it was using nostalgia to manipulate me into staying on the website. By shoving 10-year-old photos into my view through the On This Day feature it was giving me little hits of dopamine by reminding me of the good ol’ times, even if they were 10 years ago with people that, for whatever reason, are no longer part of my life.

One solution to this was to make sure that Facebook only had recent information about me. I started manually deleting anything that was more than two years old. I eventually found a Chrome plugin (use at your own risk) that made it easier to do, but this process was a chore that ultimately didn’t solve the fact that Facebook was the problem. After about a year I left unannounced. After deleting my account, of course.

My “relationship” with Twitter is a bit different. I’ve always preferred it over Facebook as it isn’t as intrusive, at least not directly. It doesn’t constantly ask you to share who you’re dating, identify your family, upload photos from your night out or tag your friends in everything. Instead it felt like it was more concerned with what was happening at that moment.

Like Facebook, though, I became a bit concerned with how much data about me it was storing. I started using the website in 2008 (Facebook in 2007) and have used it almost daily since then. Over that time I have grown and changed as a person many times over. I don’t want this history to be fully documented and, more importantly, available for anyone to browse through. Whilst the 40k tweets I accumulated over that period probably consist mostly of cat gifs, memes and the word “lol”, maybe there are events that I’d rather not have documented, like Tweets showing friendships and relationships falling apart, embarrassing photos of myself or others on nights out, or even just me saying something that was totally out of order.

I’m glad that I have friends (and enemies) that have called me out on my bullshit and hope that they continue to point out times when I do something wrong. However, I’d rather that the trail of data I leave on these sites that I use every day reflected me as I am now, not who I was 10 or even 20 years ago.

So, I went on a mission to find a way to keep my Tweets current. I needed a tool, or tools, that would automatically delete Tweets older than a certain time period.

A lot has been written about Tweetdelete. However, I don’t want to rely on a third-party service. Many people do trust the service, but there are always risks in using third-party services, especially when they have access to a lot of your information. Then there’s the risk that it could one day shut down, so I decided that I wanted something that I could deploy myself.

Deploying your own script requires that you register a developer account on Twitter.

Delete tweets is a Python script that lets you delete tweets older than a specified cut-off date. However, to run it you need to download your Twitter archive. At the time of writing this can only be done once a month and has to be done manually. So, you could automate the running of the script but there’s still manual intervention required.

This Python script is similar but it lets you specify the cutoff as a number of days rather than a date. Still, it requires downloading your Twitter archive manually.

This Ruby script works perfectly! You specify a cutoff point in days and, when it is run, it deletes any tweets older than that cutoff point. It even has the option to put in the IDs of Tweets that you want to save. It only requires a developer account and you don’t need to download your archive.

There’s even a companion script that removes Likes. This doesn’t have any options for a date cutoff but in my case it doesn’t matter. Liking something once doesn’t mean that I like it (or anything else that person has posted) forever, so I’m not sure why I need to have my likes recorded and archived.
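If you’re curious about the bones of how these scripts work, the core logic is only a handful of lines. Below is a minimal Python sketch of the same idea using the tweepy library; the credentials, the 14-day cutoff and the saved-tweet ID are all placeholders of mine, not anything taken from the scripts above.

```python
# A sketch of tweet/like expiry using tweepy (3.x-era API) - not the
# Ruby scripts mentioned above, just the same idea in Python.
from datetime import datetime, timedelta

import tweepy

# Placeholder credentials from your Twitter developer account.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

cutoff = datetime.utcnow() - timedelta(days=14)  # keep two weeks of tweets
saved = {1234567890}  # hypothetical IDs of tweets to keep forever

# Delete anything older than the cutoff, skipping the saved tweets.
for status in tweepy.Cursor(api.user_timeline).items():
    if status.created_at < cutoff and status.id not in saved:
        api.destroy_status(status.id)

# And, like the companion script, remove likes regardless of age.
for favourite in tweepy.Cursor(api.favorites).items():
    api.destroy_favorite(favourite.id)
```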

I decided to install both scripts on an always-on Raspberry Pi. Installing them took a bit of time because a bunch of Ruby gems needed to be installed first. Once that was done I set up a cron job to run the scripts at regular intervals. I have mine set to run twice a day and to only keep the last two weeks of tweets. I feel that that is enough time for the tweets/memes to have whatever impact they’re going to have. After two weeks they’re gone.
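For reference, the crontab entries look something like this; the paths, script names and times are placeholders for wherever you installed them, not the scripts’ actual file names:

```
# m h dom mon dow  command - run the cleanup twice a day at 08:00 and 20:00
0 8,20 * * * cd /home/pi/delete-tweets && ruby deletetweets.rb
5 8,20 * * * cd /home/pi/delete-likes && ruby deletelikes.rb
```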

All of this effort to manage my experience of using Twitter might not be a solution and instead might be more of a distraction from the fact that the problem is Twitter, and maybe even social media in general. There have been many efforts from individuals to make social media better. On Facebook there is F.B. Purity, which helps remove adverts, the On This Day feature and other clutter.

One of my favourite tools that I still use is the Facebook and Twitter Demetricator from Ben Grosser. These desktop-only tools remove mentions of the number of Likes, replies and retweets a post gets so that you can focus on the important things. These plugins have been getting a lot of attention recently. See Ben’s Instagram for more.

This of course doesn’t solve social media’s problems but it does make my experience of it that little bit less stressful.

Development Update – July 2019

Select objects of similar size in Inkscape

For the AlgoMech 2019 festival in June I created a new performative drawing piece, A Perfect Circle. The piece is about how we interface with computers that analyse our activities. It consists of a video and accompanying plotter drawings.

Making A Perfect Circle presented me with a few challenges. To make the video element I hacked together a couple of Processing scripts that did basic motion tracking by following a user-specified colour. The sketch would draw lines tracing that movement, creating a new line (instead of adding to an existing line) at each major turn and giving each one a unique colour.

The next stage was to export those drawn lines as SVGs (or PDFs) so that I could bring them into Inkscape and then send them to a plotter. Fortunately Processing already has functions for exporting to SVG. Unfortunately for me, implementing this as suggested in the help file would export both the drawn line and the background video as a still frame. I produced a very hacky workaround (with help from Ben Neal) which “works” but produces a few unwanted artefacts.

Before I go on I should probably explain what a plotter is as the unwanted artefacts relate to it. For this I will copy from the Wikipedia article on plotters:

The plotter is a computer printer for printing vector graphics. Plotters draw pictures on paper using a pen. In the past, plotters were used in applications such as computer-aided design, as they were able to produce line drawings much faster and of a higher quality than contemporary conventional printers, and small desktop plotters were often used for business graphics.

At home I have a Silhouette Cameo 2 vinyl cutter. When using this great Inkscape plugin I can bypass Silhouette’s proprietary software and send artwork directly to the cutter from Inkscape. Thanks to a pen holder adaptor I can replace the vinyl cutting blades with a pen and turn the vinyl cutter into a plotter 🙂

Back to the Processing sketch. The hacky code that I made produced the desired lines, but it also produced lots of additional single-node paths/dots at the start of each line.

Removing these by hand wouldn’t be very easy. Using Edit > Select Same > Fill and Stroke or Fill Color or any of the other options wouldn’t work as it would also end up selecting the lines. I then had the bright idea to select objects based on their size. All of the dots had a dimension of 4.057×4.000px, so in theory there could be an option like Edit > Select Same > Size. However, no such option exists.

After a discussion on the Inkscape forum I opened a feature request on the Inkscape bug tracker to select objects of similar size. One thing I added to this was the idea of a threshold. Using this you could select objects that were within n% of the size of the selected object. If you’ve ever used GIMP you will have seen a similar function in its fuzzy selection tool. This could definitely be useful if you trace bitmaps and the trace produces a lot of speckles. I also added a mockup to show how it could be applied to other options in the Edit > Select Same menu.
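To make the threshold idea concrete, here’s a small Python sketch of the matching rule I had in mind; the function and the example dimensions are just illustrative, not anything from Inkscape itself:

```python
# "Select same size with threshold": a candidate matches if both of its
# dimensions are within n% of the selected object's dimensions.
def similar_size(reference, candidate, threshold=0.05):
    """reference and candidate are (width, height); threshold is n as a fraction."""
    rw, rh = reference
    cw, ch = candidate
    return abs(cw - rw) <= rw * threshold and abs(ch - rh) <= rh * threshold

# All of the 4.057x4.000px dots match each other...
print(similar_size((4.057, 4.000), (4.000, 4.057)))   # True
# ...but the long drawn lines don't.
print(similar_size((4.057, 4.000), (251.3, 78.9)))    # False
```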

Anyway, at the moment this exists as a feature request. I think Inkscape is concentrating on delivering version 1.0 of the software so I don’t expect to see this implemented any time soon. As with anything in the land of open source, if you’ve got the skills to do this please contribute!

In the end I used fablabnbg’s Inkscape extension to chain all (or most) of the paths into one big path. This made selecting the dots easier as I could just hide the big path(s) once they were chained together.

After that it was a simple case of sending it to the plotter!

Development Update – June 2019

Making digital art is quite a lengthy process, even more so if you’re using non-standard processes or making your own software. For a while I’ve wanted to write about my processes and how I’ve overcome the bugs and problems. In what will hopefully be a regular series of blog posts I’m going to give a bit of insight into this process. Let’s go!

Convert Object texture coordinates to UV in Blender

For Visually Similar I wanted to texture each 3D model using lots of images found on the internet. Rather than create one single material containing a texture with all of the found images I instead decided I would add a material for each image texture and, using their alpha channels, composite them over each other.

If you’ve ever had to position something accurately on a UV map you’ll know how much of a pain it can be. Fortunately, the Texture Coordinate node has an Object outlet that lets you use another object (usually an empty) as the source of its texture coordinates. This uses the reference object’s local Z direction as its up direction.
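This setup can also be scripted. Here’s a minimal bpy sketch of the node arrangement, assuming an empty named “Target” and an image already loaded as “found.png” (both names are placeholders):

```python
import bpy

# Build a material whose image texture is positioned by an empty's
# local coordinates rather than by a UV map.
mat = bpy.data.materials.new("ObjectMapped")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

tex_coord = nodes.new("ShaderNodeTexCoord")
tex_coord.object = bpy.data.objects["Target"]  # the empty (placeholder name)

image = nodes.new("ShaderNodeTexImage")
image.image = bpy.data.images["found.png"]     # placeholder image name

# Drive the texture's Vector input from the empty's object space.
links.new(tex_coord.outputs["Object"], image.inputs["Vector"])
links.new(image.outputs["Color"],
          nodes["Principled BSDF"].inputs["Base Color"])
```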

So far, so good, except it did not yet work in Blender’s new EEVEE rendering engine. Yes, yes, I know EEVEE is still under development and shouldn’t be used in production etc. Still, after doing a bit of research it looks like this is going to be implemented.

So, I had a rather smart idea for a workaround. Could I take the UV coordinates generated by the Object outlet whilst using Cycles and paste those into the UV texture options using a Mapping node? Short answer: no. To do this I would need some sort of viewer or analyser node that would show me the data being output from a node. So, I suggested this idea on the Right-Click Select ideas website. A healthy discussion followed and hopefully something will come of it.

In the end I had to resort to baking the texture and then applying that to the 3D model. In doing this I learnt that baking a UV texture on a complex model takes a lifetime, so I had to do it on a decimated model and then put the result on the original, complex model. This, of course, created some unwanted artefacts. *sadface*
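For anyone attempting the same workaround, the decimation step is a single modifier. A sketch, with the object name and ratio as placeholders (I did the bake itself through Blender’s normal bake settings):

```python
import bpy

# Add a Decimate modifier so the texture bake finishes in a reasonable
# time. The ratio here is a guess - tune it per model.
obj = bpy.data.objects["Model"]  # placeholder object name
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 0.1                  # keep roughly 10% of the faces
```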

Since I originally encountered this problem it has actually been addressed in a Blender update! However, it only works at render time. But it’s progress! 🙂

The search for a GrabCut GUI

Another big part in creating the Visually Similar artwork was the image textures themselves. The idea for the piece is that the textures would be related in some way to the 3D model. I decided from the beginning that I wanted to have some control over this and so I gathered the images through keyword searches and reverse image searches.

But then I needed to cut out certain parts of them. I wanted it to look like a rough collage, as if the images were pages in a magazine that had been ripped out, leaving behind tears and occasionally ripping through the important bits.

For a while one of my Twitter friends, _xs, has had a bot on their feed that generates random collages. I haven’t studied the source code extensively but I’m guessing it does a keyword search and makes a collage out of the returned images.

What I was really interested in was how the images were cut out. It’s as if a sort of automatic feature extraction was used but wasn’t very accurate and so it left behind jagged edges that were almost reminiscent of the kind of ripped magazine aesthetic that I mentioned earlier.

Through a conversation with them I learned that they used a combination of automated object detection (to select the region of interest) and GrabCut to perform this automatic foreground extraction. GrabCut has been part of OpenCV for quite some time. Give it a region of interest (ROI) and it will attempt to extract the foreground.
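The OpenCV call itself is compact. A minimal Python sketch, with the image path and ROI rectangle as placeholders:

```python
import cv2
import numpy as np

# Extract the foreground inside a rectangular region of interest.
img = cv2.imread("input.png")   # placeholder path
rect = (50, 50, 450, 290)       # ROI as (x, y, width, height)

mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked definite or probable foreground make up the cut-out.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
cv2.imwrite("output.png", img * fg[:, :, np.newaxis])
```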

_xs used this via the command line and automated the whole process. I needed a bit more control over defining the region of interest and so I needed a GUI where I could use a bounding box to select this. This is where the long hunt began.

OpenCV has its own GrabCut GUI example but it has an annoying flaw: to select the ROI it displays the source image at full size, meaning that if your source image is 4000 pixels wide it won’t fit on your screen (unless you have a fancy-pants 4K screen). Not ideal when trying to select an ROI. What I needed was a way to scale the window to fit on my screen but still process the full-resolution image.

If you search GitHub you’ll see a number of people have created GUIs for GrabCut, possibly for study assignments. However, each has its own problems. Some won’t compile, some resize the input and some have been abandoned. According to this 2006 article there was even once a GUI for GrabCut in GIMP. However, despite my best efforts I can’t seem to find it.

One night at OpenCode I learnt that OpenCV has a method for selecting an ROI! It even auto-resizes the window but not the input image. Yay! So, I hacked it together with GrabCut and released my own very hacky GrabCut GUI. It appends the coordinates and dimensions of the ROI to the file name should you want to run it again via the command line.
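The gist of it is roughly this: pick the ROI in a window scaled to your screen, then run GrabCut on the untouched full-resolution image. A sketch (the file name and window size are arbitrary):

```python
import cv2

img = cv2.imread("input.png")  # full-resolution source, placeholder path

# A resizable window means a 4000px-wide image still fits on screen;
# selectROI returns coordinates in the image's own pixel space.
cv2.namedWindow("select", cv2.WINDOW_NORMAL)
cv2.resizeWindow("select", 1280, 720)
x, y, w, h = cv2.selectROI("select", img, showCrosshair=False)
cv2.destroyWindow("select")

print(x, y, w, h)  # feed this rect into cv2.grabCut as in the sketch above
```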

All this done with a mere seven days until the artwork had to be finished!

Typewriter text

For the Algorave at the British Library in April I was asked to make a promotional video, which proved difficult, but for a very specific reason. I wanted to emphasise the liveness of live coding and show code being typed. For this I used the code supplied with Alex McLean aka Yaxu’s excellent Peak Cuts EP.

The effect of having the text appear word-by-word or letter-by-letter is often called the typewriter text effect. I’ve previously written about how to do this in Pure Data/GEM. This time I needed a bit more control than I got in PD, and I needed to export transparent PNGs, so that solution wouldn’t work.

Kdenlive once had such an effect built into its title editor. Other Kdenlive-based solutions use a mask to reveal the text, which produces more of a fading-in effect that wasn’t ideal. It was also a lot of manual work! I had several hundred lines of text, so doing this was going to add a lot of time.

Natron was the next contender. Since 2017 it has had a plugin for doing typewriter text but it’s a bit broken. In theory it gives me the most flexibility in how I create the effect but in practice I still can’t get it to render!

I also considered using ImageMagick and was even provided with a solution (that was written for Windows). As much as I like automation and command line software, for this very visual task I needed to see what I was working on.

Finally, I turned to Blender, which gave me a few options, including rendering the text as 3D objects within the Blender project itself. After failing to get this Blender addon to work I tried using Animation Nodes. Following a tutorial I was able to set up a typewriter effect quite quickly. However, this is where I encountered a bug. After around 10 frames of the text were rendered, the remaining frames would take forever to render. Even in EEVEE each frame was taking about 10 minutes. I have no idea why. Perhaps it’s because 2.8 is in beta. Maybe because Animation Nodes for 2.8 is also in beta. Beta beta beta. Either way it wasn’t working.

So I thought maybe I could “bake” the animation, which would remove the Animation Nodes dependency and maybe speed up the render. Sadly this was also not to be. Text objects can’t be baked 🙁
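For what it’s worth, one workaround that avoids both Animation Nodes and baking is a frame-change handler that rewrites the text body every frame. A hedged bpy sketch, assuming a text object named “Text”; the string and typing speed are placeholders:

```python
import bpy

FULL_TEXT = 'd1 $ sound "bd sn"'  # placeholder line of code to type out
CHARS_PER_FRAME = 2               # typing speed, tune to taste

def typewriter(scene):
    # Reveal a little more of the string on every frame.
    count = scene.frame_current * CHARS_PER_FRAME
    bpy.data.objects["Text"].data.body = FULL_TEXT[:count]

bpy.app.handlers.frame_change_pre.append(typewriter)
```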

In the end I had to do an OpenGL render of the animation to PNGs with a transparent background. This differs from a normal render in that it renders the viewport as-is. So if you have your gizmos turned on it’ll render them out as well. Not ideal but it worked.
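If it helps anyone, the settings for that amount to something like this (a sketch; film transparency lives in the render settings in 2.8, and the OpenGL render needs an open viewport):

```python
import bpy

scene = bpy.context.scene
scene.render.film_transparent = True              # transparent background
scene.render.image_settings.file_format = 'PNG'
scene.render.image_settings.color_mode = 'RGBA'   # keep the alpha channel
scene.render.filepath = "//frames/"               # placeholder output path

# Renders the viewport as-is - gizmos and all - for every frame.
bpy.ops.render.opengl(animation=True)
```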

I would like to think it all stopped there but it did not.

Blender can use a video or a series of images as a texture. However, at the time this was not possible in 2.8 using EEVEE. To my joy, however, this was implemented only a couple of days after I needed it!

So that is some insight into how I make some of my art. There’s a lot of problem solving, lots of showstopping bugs and lots of workarounds. Somewhere in that process art is made! I’m hoping to do these every month but we’ll see how that goes.

V&A Friday Late: Copy / Paste – 29th March

On Friday 29th March, 18:30 – 22:00, I’ll be showing two new videos at Copy / Paste at the V&A.

Gif by Erin Aniker

Human culture is built on a history of replication. We copy to learn, to assimilate, to preserve and to magnify. How is this behaviour being transformed by advances in technology and what is the value of the authentic or the original today? This Friday Late, watch dance pieces to examine how human error impacts repetition and examine the role of copying in preserving cultural heritage. From architecture to online identities, explore duplication in the digital age.

I’ll be showing two videos titled Visually Similar:

Visually Similar is a video work that examines how images and videos posted online can be used to preserve history, but can also be remixed to create new narratives. In sharing our work online we make a permanent record of a point in time, which can then be used out of context.

I’ll be there IRL if y’all have questions. Check out the rest of the awesome programme too!

Algorave at SXSW – 10th – 12th March 2019

Algorave is heading to South By Southwest! From 10th – 12th we’ll be doing two showcases at the festival:

Dancing to Algorithms: How to Algorave

This session will discuss Algorave: a global movement focussed on creating dance music through the writing and editing of algorithms. At an Algorave, performers live code and project their screens for the audience to see the creative process unfolding. Since it emerged in the UK in 2012, Algoraves have taken place around the world, with large communities developing in North America, Japan, Europe and Latin America. To understand how the scene has expanded, we will bring together leading performers from the community to discuss the systems they use and the sounds that they make.
In a world where algorithmic processes are becoming so embedded in our daily lives and increasingly opaque, we hope to uncover why live coding is so exciting!

Please bring a laptop and some headphones.

This session takes place 15:30 – 17:30 on Sunday 10th March and will be led by me, Joanne, Shelly Knotts and Alexandra Cardenas

Lush Presents Algorave: Live Coding Party

Since emerging in the UK in 2012, Algorave has subsequently become a global movement with parties happening across the world. At an Algorave, performers live code and project their screens for the audience to see the creative process unfolding. It’s a truly audiovisual experience where sound and visuals merge together. This showcase will bring together an international host of leading performers from the Algorave scene. From minimal techno to bursts of noise all sounds and visuals will be generated through algorithms for your pleasure.

Part of Future Art and Culture produced by British Underground and supported by Arts Council England.

This showcase takes place 22:00 – 02:00 on Tuesday 12th March and features ALGOBABEZ, Alexandra Cardenas, Belisha Beacon, Byrke Lou, co34pt, Coral Manton, hellocatfood, Scorpion Mouse, Renick Bell.

Come and say hi!

Departure from Vivid Projects

It was recently announced that after four years of leading Black Hole Club and seven years total of working with Vivid Projects (and previously VIVID) I’ve decided to leave to focus on my own artistic and curatorial practices.

I’ve really enjoyed curating exhibitions there and working with the Black Hole Club artists to develop their practices. I could never have guessed from my first interactions with Vivid Projects in 2009/2010, with the fizzPOP Howduino and GLI.TC/H 2011, that I would go on to become a core part of the team.

My heartfelt thanks go to everyone at Vivid Projects past and present who has welcomed me with open arms and helped me grow as an artist and curator. They’ve always been excited by the digital arts and have provided vital support to me in curating exciting exhibitions in this developing field. This has helped me to exhibit the work of over 100 national and international artists over seven years. I’m proud of everything that I’ve achieved with Black Hole Club over four years and it’s been truly inspiring seeing the artists involved develop their careers and go on to exhibit nationally. However, at this point in my own career I feel it’s time to focus on my own independent artistic and curatorial practices. I wish everyone at Vivid Projects the best of luck and want to say thanks again to Yasmeen Baig-Clifford for her support, encouragement and dedication. Without her work the digital and media arts scene in the West Midlands wouldn’t be as lively as it is now.

Black Hole Club Producer opportunity

Singing Litter

With my departure Vivid Projects is now looking for a Producer to lead the Black Hole Club. From the Vivid Projects website:

The Producer will develop and deliver Black Hole Club artists’ projects, exhibitions and events, supporting approximately 20 artists per year to develop their creative practice, present work to public audiences, and widen their professional networks. The core programme runs 1 March-31 December each year; each cohort is selected in January and launched on the first Friday of March.

The Black Hole Club Producer should be excited by collaboration and risk taking, with experience drawn from areas including digital art, live performance, experimental audio, film and video, animation and computer-generated art.

If this sounds like your kinda job go download the application forms. Deadline for applications is 18:00 30th January.

<2019>

</2018>

What a busy year! I think that, compared to previous years, 2018 was filled with more (Algorave) performances and projects and fewer exhibitions and gifs. 2018 was also the year that Vivid Projects became one of Arts Council England’s National Portfolio Organisations, which basically means that the gallery has funding for the next four years. Because of this my workload there increased and so, unlike in previous year-in-review blog posts, I’ll be including an overview of my work there 🙂

January

This month started off really busy with the opening of two exhibitions in London. The first was Basquiat’s Brain at the Barbican. The exhibition in the foyer (near the exit of the Curve gallery) was the culmination of the work I’d been doing with the Barbican’s youth group, imagining what Basquiat’s take on art could’ve been if he were alive today and working digitally.

Basquiat's Brain

Basquiat's Brain

It was only supposed to be on display for a weekend but went on to be exhibited for a few months!

Only a week later Transformative Use had another showing at the Granular exhibition at the University of Greenwich (which, btw, is like really far from many things).

Granular: The Material Properties of Noise

My first Algorave performance of the year took place at the National Video Game Arcade’s All Your Bass Algorave event.

Elsewhere I started an Instagram account just for my art. I always felt a bit weird forcing my friends to see promotional posts about my art and exhibitions alongside personal family/friends stuff, so this kinda solves that.

February

For February I was mostly ill and preparing for the launch of Black Hole Club in March. Elsewhere stills from Basquiat’s Brain went on display on the Shoreditch Digital Canvas. Cue lots of friends sending messages asking if my big face is on a billboard!

Basquiat's Brain on Shoreditch Digital Canvas

March

At the beginning of the year I started doing workshops with the Barber Collective, which is the Barber Institute of Fine Arts’ youth group. Over a few sessions we made animations by remixing images from the Barber’s collection. For the University of Birmingham’s Arts and Science Festival we projected the finished animations on the Old Joe clock tower for one night.

Re-Animation

I then did a day of LiveCodeLab workshops for the Imagine If event at Tate Britain.

Imagine If

Black Hole Club was supposed to have its launch exhibition on 2nd March but the snow, cold weather, and the absence of insulation and heating in warehouse spaces that art galleries tend to occupy forced us to reschedule. On 30th March the cohort finally had their first exhibition.

Black Hole Club 2018 launch

Black Hole Club 2018 launch

April

A few days later a couple of my videos were on display at Late at Tate Britain: Echoes.

Later in the month my commission for Spon Spun’s 2017 Art Trail was on show in the CET Building.

Spon Spun 2017: Commissions and Prize Winners

The dark industrial building was certainly a much more effective venue for the LED infinity mirrors.

I spent a little over 48 hours in Karlsruhe for an Algorave at ZKM.

Algorave Karlsruhe

I got back on a plane, this time to Seville, Spain, to deliver a presentation about No Copyright Infringement Intended at Libre Graphics Meeting (LGM). I’d previously presented about glitch at LGM in Toronto in 2015, so it was good to be back around my peers and see how the libre graphics community has developed over the years. Y’all can watch my presentation below.

Being in the room with like-minded people allowed me to go into more of the nuances of the exhibition’s theme and spend less time on educating people about what copyright is. You can hear some of the questions at the end of that video.

To round off this busy month Black Hole Club launched their second exhibition, Stellar. This exhibition, co-curated with Lumen, featured works that responded to celestial events. It was also lit af 🔥🤘😩🤘🔥.

Stellar

Stellar

May

I presented an overview of my Curating the Machine research at Phoenix’s Art-AI Festival.

It’s a good video to watch if you’re curious about my still ongoing research.

The biggest event of this month saw me in Stockholm, Sweden to perform a new piece, Digital Domestic, which was commissioned by Aly Grimes (she previously commissioned me for Short Circuit Project).

The Digital Domestic

The Digital Domestic

June

I was back at Tate Britain, this time IRL, to do a workshop inspired by stained glass for their Late at Tate Britain: Spire event. I, of course, reworked Glass 😉

Late at Tate Britain: Spire

Late at Tate Britain: Spire

Cheltenham Science Festival invited myself, Joanne Armitage, Alex McLean, and Joseph Wilk to do a mini Algorave. Having a team of technicians at hand who could install projectors and move screens at a moment’s notice was a welcome change to the usual DIY warehouse events.

Cheltenham Science Festival Algorave

Still in June myself and Aly Grimes teamed up to bring Living Room Light Exchange (LRLX) to Birmingham. I had first come across it when I was invited to talk at one in Paris by Benjamin Gaulon in 2016. I really liked the relaxed and personal nature of it and so, with their permission, worked with Aly to bring it to Birmingham. For the first LRLX we had presentations from Duncan Poulton and Emily Roderick.

LRLX Birmingham #1

The biggest event of June was Assembly Birmingham. The Assembly events, organised by the (impossible to Google) a-n, aim to “support artists to lead debate on and open up discussion about the things they need for a sustainable career”. I organised the Birmingham event in the newly reopened Eastside Projects, inviting loads of new(ish) organisations to talk about their experiences of being based in Birmingham and the West Midlands and their hopes and fears for the future.

Assembly Birmingham

Assembly Birmingham

Pete Ashton did a darn good writeup of the day for a-n. a-n have also started uploading videos of the presentations.

To round off the month Black Hole Club launched the Another Dimension exhibition, which looked at optical illusions.

Another Dimension

July

This month opened with the second LRLX event which featured Edie Jo Murray and Dinosaur Kilby.

LRLX Birmingham #2

If you needed a condensed version of my New Now research an “Insight Film” was uploaded this month.

August

This month was really quiet for my own work. I did a few small events and workshops for Vivid Projects and then went to Green Man Festival to talk about my artwork. This is the closest I have been to having a month off!

September

I organised the Visualists Meetup for the Livecode festival. We had a couple of visuals workshops, but what was most important was the discussion around the role of visuals at Algoraves. A feeling shared amongst people doing visuals across all music genres is that of being an afterthought, or second best to the musicians. We all shared our experiences but also discussed how we can move towards a more collaborative environment. There will be a fuller discussion at ICLC in Madrid in 2019.

Later that night I performed at a huuuge Algorave at DINA.

Livecode Festival #2 Algorave

Livecode Festival #2 Algorave

A few days later Black Hole Club had its first online exhibition.
blackholeclub.com

In my experience of working with artists and institutions many of them see the internet as a promotional tool. Through this exhibition I wanted to see how the cohort’s practice could be translated to the internet where many of the IRL restrictions of space and time either don’t exist or are transformed. For example, in IRL land there’s a logical way to navigate a space and work can be viewed without distractions. On the internet we’re often fighting for attention from ads, 100 other tabs and, well, each other. This was Black Hole Club’s first online exhibition, so not all these issues were explored but I think we made a good start! View the exhibition here: https://blackholeclub.hotglue.me/

Over in Finland the “Glitch Art” exhibition opened at the Kuntsi Museum of Modern Art. It features my works What is your glitch? 1bitgifavibmpbmpcmykbmprgbjpgmpgpcxpixpngppmsgisvgtgawebp and Unstable Mediums alongside works by Rosa Menkman and JODI. Go see it before it closes on 13th January 2019 (or pay for my flight and I’ll come with you 😉).

Glitch Art - Kunsti Museum of Modern Art

October

I was on planes for what felt like forever to play at an Algorave in Odense, Denmark.
Algorave Odense

I did a public discussion with Eyal Gruss at Near Now in Nottingham. He was one of the folks who heavily influenced my Curating the Machine project. Video will be online soon I hope.

To round off the month I organised the Birmingham Algorave at Vivid Projects, closing their/our Mediafest programme.

Algorave Birmingham

Algorave Birmingham

November

More performances this month, the first being at databit.me in Arles, France. I first performed at databit.me back in 2012 as artist in residence, and then again in 2013. There’s so many things I like about this festival, but above everything I love the people (and the food) and the sense of community (and the food).

For databit.me I did my first ever live coded music performance! This took place in a barn in Tarascon on a horse-drawn carriage:

databit.me 2018

I went on to do another two performances in the festival in similarly weird places. Y’all can listen to a bit here.

databit.me

In a surprise to many, including myself, the next week I took all the flights to perform at the opening event of Piksel in Bergen, Norway. Can I just state the obvious and say that Norway is dope af. Everything’s just so clean and tidy. It’s also cold and, like, there are lots of hills everywhere but whatever idc. I wish I was there for more than two days.

I think I’m now going to make it a requirement that when I perform it needs to be in as unconventional a space as possible. For my Piksel performance I did visuals in a bandstand in the city centre.

Piksel 2018

I also had an updated version of Copyright Atrophy on display as part of their Pikselsavers programme.

Piksel 2018

December

It’s supposed to be a month to wind down but instead in December I was doing lots of preparatory work for things happening in 2019. The only public event was the last Black Hole Club exhibition of the year and the launch of their publication.

End of Year Show

End of Year Show


2018 was certainly one of my favourite years for Black Hole Club. Part of this was due to having funding which allowed me to focus more on building and delivering the programme but also the cohort was 🔥. Y’all can still apply to be part of Black Hole Club in 2019.

And so, 2018 is now over. A big thanks to all those who have helped make things possible 🙂 I feel like I’m at a turning point in my career and so next year I will be exploring some other things. Not a massive departure from my usual artwork or curatorial stuff but perhaps more narrowly focused. Until then, happy new year!

Livecode Festival #2 – Visualists Meetup – 1st September 2018

On Saturday 1st September I’m organising a meetup for visualists as part of Livecode Festival #2 at Access Space in Sheffield:

A session for live coding visualists (at any level) led by Antonio Roberts (aka hellocatfood), to talk about their tools and how they perform, with focus on Algorave visuals.

A core part of the session will be discussion around key questions for live code visualists; how do you pace yourself in a performance? Should we aim to build up slowly or go straight in with loud visuals? How much can you truly respond to the music? Is it important to show the code, and how does it fit with the musician’s projection?

The session will run from 11:00 – 16:00 and will include workshops in Pure Data/GEM (led by me), Hydra (led by Will Humphries) and Livecodelab (led by Guy John).

Get your tickets here! And whilst you’re in the area get a ticket for the Algorave on the same night at 20:00 😉