I Am Sitting In A Room acquired by Government Art Collection

I’m happy to share that my 2010 video, I Am Sitting In A Room, has been acquired by the Government Art Collection as part of Art X-UK, a permanent collection of works by 45 visual artists from across the UK.

In this difficult and unusual year, the Collection invited the networks for nine regions in England, and the networks in Wales, Northern Ireland and Scotland, to nominate artists as part of a special project, Art X-UK. Our collection curators went on a virtual tour of artists’ studios from Penwith to Ballygally, to select works for the Collection to display in government buildings worldwide. Art X-UK, supported by the Advisory Committee on the Government Art Collection, was a unique way of responding to the impact of COVID-19 on the visual arts sector.

This project has enabled the Collection to support 45 artists, acquiring over 90 works and spending £230,000 across the UK. As part of the nomination process, we asked each network to form a group and put forward the artists’ names, providing a statement on their selection process for transparency, including a list of the selectors. Networks were asked to consider diverse representation of artists within each region: 24 of the artists are women, 2 identify as non-binary, 20 as minority ethnic, 6 as LGBTQ+, and 4 have a disclosed disability.

I Am Sitting In A Room is my take on the 1969 artwork of the same name by Alvin Lucier, in which he recorded himself narrating a text, then played the recording back into the room, re-recorded it, played that back, and so on.

In my take I used a process of repeatedly glitching font files. I documented this process in several blog posts at the time, and in a more recent post I tried to recreate the work at 1920×1080 resolution.

I’m really happy that they chose to acquire this piece as it marked a significant moment in my career as a digital artist. It was one of my first works to incorporate automation/programming, and its screening at GLI.TC/H 2010 in Chicago was one of my first significant international screenings. The piece was even screened at Alvin Lucier’s 80th birthday celebrations in 2011!

Really happy that my work is part of this round of acquisitions along with a lot of great artists 🙂

Controlling Improviz Using Midi via OSC

In 2020 I did quite a number of workshops on using the Improviz visuals live coding environment. Improviz can be thought of as a fork of Livecodelab, especially as its developer, Guy John, is one of the developers of Livecodelab. However, it has some key differences that make it stand out as its own unique software:

  • It runs on the desktop rather than in the browser, and I think it’s faster because of it
  • The language is more fully documented
  • You can load your own textures, gifs, 3D models, and shaders

Being able to load your own textures might in itself be a reason for many people to switch from Livecodelab to Improviz. Things can be just that bit more personalised when you’re using your own images and objects rather than only colours, gradients and basic geometric shapes. Another potentially useful difference is that you can interface with Improviz using Open Sound Control (OSC). This opens up the possibility of controlling it from other software or external hardware devices. In this blog post I’ll take you through how you can connect a midi controller to Improviz via OSC and Pure Data.

To get started you first need to define a variable in Improviz that you want to be changed by OSC/midi. The name of this variable can be anything as long as it’s not a name already used as a function or variable in Improviz. Check the reference page for a list of reserved names. In my example I’ve used the variable name size.

size = ext(:size, 1)
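
That single line is all Improviz needs; as far as I understand it, the second argument (1 here) is the default value used until an OSC message arrives. Here’s a minimal sketch of the variable in use, assuming cube() accepts width, height and depth arguments:

size = ext(:size, 1)

cube(size, size, size)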

Next, we need to connect to it via OSC so that we can change its value.

When you launch Improviz via the terminal, one of the messages you’ll see printed is the port its OSC server is listening on.

2021-03-25 20:53:.732595  INFO: Running at 640 by 480
2021-03-25 20:53:.732733  INFO: Framebuffer 640 by 480
2021-03-25 20:53:.390032  INFO: Loaded 3 texture files
2021-03-25 20:53:.437047  INFO: Loaded 8 material files
2021-03-25 20:53:.441641  INFO: Loaded 5 geometry files
2021-03-25 20:53:.441718  INFO: *****************************
2021-03-25 20:53:.441766  INFO: Creating Improviz Environment
2021-03-25 20:53:.466755  INFO: Loading ./stdlib/variables.pz
2021-03-25 20:53:.466846  INFO: Loading ./stdlib/transformations.pz
2021-03-25 20:53:.466890  INFO: Loading ./stdlib/shapes.pz
2021-03-25 20:53:.466930  INFO: Loading ./stdlib/style.pz
2021-03-25 20:53:.466968  INFO: Loading ./stdlib/textures.pz
2021-03-25 20:53:.467004  INFO: Loading ./stdlib/screen.pz
2021-03-25 20:53:.467039  INFO: Loading ./usercode/grid.pz
2021-03-25 20:53:.467078  INFO: Loading ./usercode/seq.pz
2021-03-25 20:53:.467116  INFO: Improviz OSC server listening on port 5510
2021-03-25 20:53:.467297  INFO: Improviz HTTP server listening on port 3000
2021-03-25 20:53:.467405  INFO: Improviz resolution: 640 by 480

Of course you can, at this stage, use any software that can send data over OSC, but for this blog post/tutorial I’ll be using Pure Data. Alternatives exist, but I like using it as it’s lightweight, stable and cross-platform.

To send OSC messages use the [netsend] object to connect to the same IP address as Improviz (usually 127.0.0.1) and the same port (5510). [netsend] will output a 1 from its only outlet to show a successful connection. With the connection established I can now send values from a number box to the variable via OSC!
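
If you want to recreate the patch using only vanilla objects, the rough chain looks something like this — a sketch, not the exact patch from the zip below, and assuming Improviz exposes ext variables at OSC addresses of the form /vars/<name>:

[2(                      <- number box/message with the new value
 |
[oscformat vars size]    <- packs it into a /vars/size OSC message
 |
[list prepend send]
 |
[list trim]
 |
[netsend -u -b]          <- send it a [connect 127.0.0.1 5510( message first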

Right now I’m using a number box whose values are set by me manually clicking and dragging. I could have the numbers generated randomly using the [random] object, or even add some level of audio reactivity using the [adc~] object. If that’s your thing, do it! Keeping to this blog post’s title, I’ll be using a midi controller to change these values. For this next stage you should know that I’m using Ubuntu (20.10) as my operating system. This means that the instructions, especially those concerning connecting a midi controller, may be different for your operating system. Sadly I can’t help with that.

Connecting a midi controller to Pure Data is quite easy. I’m using an Akai MPK Mini MKII, but the instructions are much the same for pretty much any midi controller. First make sure that Pure Data is exposing at least one midi port: change your midi backend to ALSA-MIDI in Media > ALSA-MIDI, then go to Media > MIDI Settings… and make sure you have at least one midi input.

Then, open QjackCtl, click on the Connect button and under the ALSA tab connect the MPK Mini Mk II output port to the input port of Pure Data.

In Pure Data you can now read the Control Change (CC) values of one of the knobs or pads using the [ctlin] object. On my MPK the first dial (K1) is [ctlin 1]. It outputs values from 0 – 127 (128 values). I want it to change the size of a cube over a range of 0 – 4, so I need to map the ranges. I found this very handy mapping abstraction so I’ll be using that. With the ranges mapped I can use the knob on my controller to change the size!
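
For reference, the mapping itself is just linear interpolation:

out = outMin + (in − inMin) × (outMax − outMin) ÷ (inMax − inMin)

so with 0 – 127 in and 0 – 4 out, a half-turn CC value of 64 lands at 64 × 4 ÷ 127 ≈ 2.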


Pure Data patch and Improviz code are here: pd_improviz_4.zip

For my next trick I want one octave, C5 to B5, to alter the shade of grey of the cube. The [notein] object will tell me the midi number of the key currently being pressed. From that I can deduce that the octave spans midi notes 48 – 59. Using the [maxlib/scale] object again I can map that range to 0 – 255 and send those values over OSC to a variable in Improviz that will be used to change the fill function.
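
On the Improviz side this just needs a second external variable feeding the fill function. Here’s a sketch, using a hypothetical variable name shade (any non-reserved name will do):

shade = ext(:shade, 255)

fill(shade, shade, shade)
cube(size, size, size)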


Pure Data patch and Improviz code are here: pd_improviz_5.zip

For my final form I’ll use one of the pads on the midi controller to toggle a random colour generator.


Pure Data patch and Improviz code are here: pd_improviz_6.zip

One of the benefits of using a midi controller to control visuals in this way is that you can control the audio and visuals simultaneously, rather than one being triggered in reaction to the other. In my experience of doing live visuals it has been quite normal for visuals to move or, as is quite often the case, pulsate in reaction to the amplitude of the music. In fact I did this many years ago for a video for My Panda Shall Fly.

What I’ve sometimes noticed is that there’s latency and the reactive visuals often feel like they’re coming in too late after the beat/instrument has hit. Of course the latency can be reduced by adjusting the sensitivity of the audio input device (microphone or line in) but then it’s a fine balancing act of both the musician and visualist adjusting levels. Achievable but a pain!

By having one device/controller triggering both you can, in theory, have both happen simultaneously. Here’s a demonstration of this from October 2020:

As you can see the midi controller is controlling both the visuals and the audio. When I eventually get back to performing live gigs this is definitely something I’m going to explore further. Until then, have fun mixing live coding with midi controllers!

(Algo|Afro) Futures

I’m happy to launch (Algo|Afro) Futures, a mentoring programme for early career Black artists in the West Midlands who want to explore the creative potential of live coding.

Live coding is a performative practice where artists and musicians use code to create live music and live visuals. This is often done at electronic dance music events called Algoraves, but live coding is a technique rather than a genre, and has also been applied to noise music, choreography, live cinema, and many other time-based artforms.

(Algo|Afro) Futures will take place between April and June, online and at Vivid Projects, and will consist of four sessions. Dates will be confirmed in response to lockdown restrictions and participant availability.

Algorave Birmingham

Four participants will receive mentorship from myself and Alex McLean on all things live coding. Each participant will receive a fee of £100 per mentoring session attended plus reasonable travel expenses.

This opportunity is open for Black West Midlands-based artists only. The call is open now until 23:59 GMT on 14th March. Further information about the programme, FAQs and the application form can be found at the (Algo|Afro) Futures website.

Late at the Library: Algorave

(Algo|Afro) Futures is organised with FoAM Kernow and Vivid Projects, in collaboration with and funded by the UKRI research project “Music and the Internet: Towards a Digital Sociology of Music”.

<2021>

</2020>

2020 was definitely a hard year, which feels a pretty repetitive and redundant thing to say at this point. I did try to stay creative, and did create things, but sometimes I just felt like staying still and watching the world crumble around me. At times just getting out of bed before 12:00 felt like enough of an achievement for one day.

When exhibitions or performances did eventually happen, albeit online, it all felt a bit anticlimactic. Sometimes months of work would go into preparing for an online exhibition or performance. After the adrenaline of the event wore off there was no release, no celebration, no friends around to hug or high five. Just a sudden comedown: get your pajamas on and realise that you’ve not travelled more than 50 metres from the kitchen in days. Oh, and the world is still falling apart.

So, with that cheery start here’s most of the things that I got up to in 2020.

January

As usual not much happened in January. Imagine that, an uneventful month. How I’d wish for that right now…

February

In early February I made my way to Limerick to attend ICLC and perform at an Algorave with Maria Witek (mxwx). I feel that the academic side of live coding sometimes passes me by, but what I do like about events like these is the critical reflection on the practice and the gathering of artists from all parts of the world. It helps to remind me that live coding is a global thing, not just a UK/Western one.

I really enjoyed performing with Maria. You can see a bit of our performance from around 03:09:00.

Shortly after that I was in Norwich to share some new work for Love Light Norwich. I shared a new video work, Let’s Never Meet.

I did a couple of blog posts detailing how I made both the audio and some of the visuals.

March

On 5th March I had the honour of performing at the Algorave at Cafe Oto. I was really nervous as I was making music, not visuals. By this stage I had performed music live a handful of times in venues and online. To then perform at this prestigious venue was daunting but in the end it pushed me to learn and practice more. Here’s a recording of the performance.

Little did I know that this would be my last performance in a venue this year.

On 19th March the year of live streams started. The Eulerroom Equinox took place over three days and featured a performance from me and Alex McLean, as well as one of my favourite performances, from me and mxwx:

This event had been in planning since late 2019 but I think it took on new relevance with the whole world now moving online.

Also in this month I did live streams with Echo Juliet and published a lot of blog posts on (mis)using FFmpeg’s motion interpolation commands. To gather all of the findings together I melted a cat:

April

Online group exhibitions and performances dominated my activities from April onwards. One of the first was the Well Now WTF? exhibition, which launched on April 4th. This exhibition featured over 140 gifs and videos that raised the question of what we should/can do now that everything is cancelled. I contributed a gif in the “Wash Your Fucking Hands” room reflecting on the collective loneliness that comes from online parties.

I did a couple more online live coding events, including a performance with Yaxu for Graham Dunning’s Noise Quest series and a performance for the Open Data Institute, where we got cut off halfway through, possibly for copyright violation! Another sign of things to come.

Also in April I did an overview of the Design Yourself project I ran with the Barbican in 2019. Working with a select group of their Young Creatives, we created artwork that asked what it meant to be human in an age of technology. One of the participants, Tice Cin, wrote a really good summary of the programme. Here’s one of my favourite videos:

May

Live streams this month included performances with Yaxu on a Cyberyacht(!) (from 32:00) and a performance for Github (better quality version here).

As part of the Well Now WTF? exhibition I presented Gifhouseparty, a lockdown party for all the gifs stuck at home. The music was all live coded and features music/code from me and mxwx, and also gifs of people you may recognise.

Perhaps the biggest event of this month was the opening of the Copy Paste exhibition on 22nd May at Piksel in Bergen, Norway. As curator I had been planning this exhibition for over a year. I had fully expected it not to go ahead, but the lockdown situation in Bergen at the time still allowed events to take place, so the exhibition opened, just without me there. A carefully curated online component was added so that some of the works could be enjoyed online.

I’m of course thankful to Piksel for their work in allowing the exhibition to go ahead, but I still can’t help but feel sad that I wasn’t able to be there to see it in person!

Other events this month included another performance with Yaxu for the Copy Paste exhibition, a presentation and discussion about copyright/copyleft at the Photographers’ Gallery, and a performance and presentation at Art Meets Radical Openness. The presentation, called Sorry About That, was about the role that copyright plays in online streaming.

You can watch the presentation here (from 01:40:00), or listen to a rebroadcast of the talk that happened on Radio FRO in July (from around 21:20).

June

This month was kinda quiet. The Copy Paste exhibition continued with events including a presentation from Constant and a workshop from Duncan Poulton. With my skills in audio production getting better I decided to revisit the Wonderland video I made for the Wonder exhibition in 2019 and add a soundtrack.

July

I did visuals for a mix from Reprezent Radio for Late at Tate Online on 17th July. The video’s no longer online so have a couple of gifs!

On 18th July I did two performances in one day! The first was for Oxidize Global, and later that day me and mxwx collaborated again for a performance at Network Music Festival. Sadly there are no recordings of either performance, but hopefully there will be re-recordings of the music at some point.

Elsewhere in this month I was interviewed by Thisandthatzine and also did a self portrait for it.


August

The collaboration between me and mxwx finally got a name! We’re now known as Bad Circulation and you can find our music here. At the moment it’s just live recordings and rehearsals. We’re working on an EP. In the meantime here’s one of my favourite recordings.

I was also on the selection panel for Hyperlink from Test Card. Congrats to those that were successful!

September

The online component of Copy Paste was included in Ars Electronica. This included the online exhibition as well as a Curator’s tour, a rebroadcast of Constant’s presentation and the performance from me and Yaxu.

I also published a blog post about it being 10 years since the first GLI.TC/H happened in Chicago. It had quite an impact on me in many ways so I felt it right to mark the occasion somehow.

I was also on the selection panel for the Jerwood Arts / FACT Digital Fellowship. I’m intrigued to see what the three selected artists will create next year!

All the way back in February I was on the selection panel for Ten Acres of Sound, “a festival of noise, sound, sonic art, music, performance, whatever located within Stirchley, Birmingham”. I’m glad it managed to happen as it was postponed from earlier in the year.

October

Back in July I was undertaking a “Stay at Home” residency with New Art Gallery Walsall:

In response to the Coronavirus pandemic, The New Art Gallery Walsall initiated a series of remote residencies to support artists to produce work from their homes. Departing from the Gallery’s usual emphasis on making and sharing work within the context of the Gallery’s purpose-built studio space, artists were encouraged to find creative approaches to developing their practice amid imposed national restrictions and, in particular, to explore the benefits and possibilities of engaging with an online audience.

I challenged myself to learn more about film making and to make a video using only what I already had at home. Here’s my video, called Windows Explorer:

It was a big challenge and I wrote three blog posts detailing each challenge.

I took part in another online group exhibition (this time featuring 50 artists) called The Archive to Come. For this I made a gif/video reflecting on the tearing down of statues and the Black Lives Matter protests. Here’s a lower resolution gif version:

A better quality video can be seen here and you should check out all of the works in the exhibition.

I also (finally) took part in DA Z. This event was cancelled back in March as was a related event in September, and though I wasn’t able to be physically present in Switzerland I was still happy to be part of it.

November

November was unusually busy. Since July I had been working behind the scenes with the Open Data Institute to curate Rules of Engagement, an online programme of artworks that make a case for ethical practices when working with data.

The commissioned artists were Nick Briz, Everest Pipkin, and A.M. Darke. The artworks were launched at ODI’s annual Summit and are still available online to view now. It was a lot of work to get the programme together but it was a pleasure to commission new work from some great artists!

You can hear myself, Nick Briz and ODI’s Hannah Redler-Hawes talk about the programme on the TECHnique podcast.

The next day on 11th November I presented new work as part of the Peer to Peer online exhibition. I was one of the three UK commissioned artists and created a piece called Nodes.

It’s the first time I’ve been commissioned to make a piece of music (I did make the visuals as well though) and I really enjoyed making it.

Sticking with music, in November the Compassion Through Algorithms Vol. II compilation was released. The compilation is raising funds for Young Minds Together and was created in response to the Black Lives Matter protests, and the general recognition that live coding/electronic music is still heavily dominated by White men. I made a track for it called Pulse.

I also did a short blog post about how I made it. It’s still on sale so go buy it!

I provided a screensaver for The Idle Index online exhibition from Phoenix Leicester. It’s delivered via a browser extension which you can install in Chrome.

I also took part in Abuja Art Week’s digital exhibition with two existing videos, Visually Similar and Abundant Antiques.

Back in September I was a judge, for the second year running, for the Koestler Arts Digital Art category. In November their annual exhibition, this time called No Lockdown of the Imagination, launched. Lockdown prevented me from seeing the works in London in person, but they have an app you can use to view all of the works.

In other selection panel/judging activities, I was on the selection panel for the MADE IT graduate exhibition, which features around 50 artists. The selection process took place between September and October but the online exhibition launched in November. Congrats to all those selected!

December

A fairly quiet month. From 7th December I did a week-long takeover of the Minorities in STEM twitter account. Each week on that account a different person talks about their experiences of being, well, a minority working in STEM (Science, Technology, Engineering, Maths). Though my work does use all of those, there’s also the Art side (sometimes called STEAM), so I used some of my 3700 words to talk about how all of these overlap. I also talked about how, in my experiences of learning about digital art, there was never any talk of Black people or anyone other than mainly White men. Things have gotten better since I was in education but there’s still so much work to do to recognise the contributions of Black people in (digital) art. You can read each of the daily threads here:

  • Monday – how I got into working in art+tech, with a focus on setting up the fizzPOP makerspace
  • Tuesday – glitch art and early experiments in making generative art using code
  • Wednesday – realising that the history of Black people working in art + technology is often overlooked
  • Thursday – demonstrated live coding and talked about Algorave
  • Friday – covered a handful of the organisations in the UK that are helping to make art and technology more diverse

I ended the year with a performance at the Eulerroom Winter Solstice. I combined live coding using Tidal Cycles with a couple of Korg Volca synths. No video yet but I’ll update when it becomes available.

end

And so ends a crappy year. That sense of community from being part of group exhibitions and performances definitely helped keep me sane and connected but I really need human contact again. Anything that isn’t a Zoom window… I of course hope that 2021 will be better, but I think we’ll need to fight to keep our galleries, museums, venues and other institutions open. Time and time again our government has shown that they don’t value the arts, and I fear that so many of the places I love will be lost next year. Did I also mention that there’s a pandemic still going on?

Laters 2020!

Making Pulse

On November 6th the Compassion Through Algorithms Vol. II compilation was released, raising money for Young Minds Together. The compilation is still available, and of course you can donate directly to Young Minds Together if you prefer.

In this blog post I’ll be going over how I made my track, Pulse.

I’m two years into making music and I’ve recently become more comfortable and confident in my processes. I’ve gotten over the technological hurdles and, having experimented in making music/sounds of different styles both in private and at Algoraves, I feel I’ve found a range of styles that I like making music in. In the live coding music world some of my biggest influences have been eye measure, Miri Kat, Yaxu, and Heavy Lifting. Their work spans many genres but what I’m drawn to in their music is the more sparse, ambient and even sometimes aggressive sounds. I tried to keep this in mind when making Pulse.

As with most things I make, I started by just experimenting. I can’t fully remember my thought process but at some point I landed on turning a kick drum (“bd” in Tidal) sound from a percussive into a pitched instrument. I achieved this by triggering the sample many times in quick succession and playing with the speed at which it was played back.

setcps (135/60/4)

d1 
$ sound "bd*4"

d7
$ loopAt "0.75 0.25"
$ chunk 4 (slow "2 4")
$ sound "bd*<32 16> kick*32"
# speed (fast "0.25 1" $ range 4 "<0.25 0.75 0.5>" $ saw)
# accelerate "3 2 0"
# gain 0.75
# pan (slow 1.2 $ sine)

I like the piercing buzzing nature of the sound and so decided to focus on building the track around this. Next I had to get the tempo right. By default Tidal runs at 135 bpm (0.5625 cps). Running that code at 135 bpm felt way too fast and so I tried bringing it down to 99 bpm.

It’s no longer at a speed to dance to but makes for better listening. It also meant I could more accurately identify what note the buzzing sound was at. The loopAt command affects the pitch of the samples and is itself affected by the tempo Tidal is running at, so setting the tempo to 99 bpm (setcps (99/60/4)) revealed that the buzzing sound was a G-sharp. It’s probably still a little bit out of tune but it’s close enough!
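
For reference, setcps takes cycles per second rather than beats per minute: with four beats to a cycle, 99 bpm works out as 99 ÷ 60 ÷ 4 = 0.4125 cps, just as Tidal’s 135 bpm default is 135 ÷ 60 ÷ 4 = 0.5625 cps.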

In late August I bought + was given the Volca Bass and the Volca FM synths. Up to this point I had been using bass samples in this track, but I saw this as an opportunity to give the newly acquired synths a try! The Tidal website has instructions on setting up midi, which worked well. One issue was that I was using two of the same usb-to-midi adaptors. On the surface this isn’t an issue but, at least according to Tidal’s midi instructions, devices are added by name and not by any sort of unique ID. Running MIDIClient.init with both adaptors connected gave me this:

MIDIEndPoint("USB MIDI Interface", "USB MIDI Interface MIDI 1")
MIDIEndPoint("USB MIDI Interface", "USB MIDI Interface MIDI 1")

I didn’t know which of the two adaptors Tidal was going to send midi messages to, and so had no idea which synth would be triggered! Fortunately Alex McLean was on hand to provide a (Linux-specific) solution. The dummy Midi Through Port-0 port exists by default, so Alex suggested adding another one. I’ll quote Alex from the Toplap chat:

if you add options snd-seq-dummy ports=2 (or more) to /etc/modprobe.d/alsa-base.conf
you’ll get two of them
the other being
Midi Through Port-1
obvs
then you can tell supercollider/superdirt to connect to them
then go into qjackctl and the alsa tab under ‘connect’ to connect from the midi through ports to the hardware ports you want
then you can make them connect automatically with the qjackctl patchbay or session thingie
I like doing it this way because it means I can just start supercollider+superdirt then change round which midi device I’m using super easily.. plugging/unplugging without having to restart superdirt
I don’t know if this will solve the problem of having two devices with the same name but hopefully..
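
In concrete terms, the fix Alex describes is a single line appended to the ALSA module config (the path is the one he gives; the module needs reloading, or a reboot, for it to take effect):

# /etc/modprobe.d/alsa-base.conf
options snd-seq-dummy ports=2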

With that all fixed I recorded my track! Here’s a live recording of me, um, recording it. It was made using Tidal; the code is just on a screen out of shot.

As you may have noticed there’s some latency on the Volca Bass. I should have adjusted the latency in Jack to account for this, but at the time I didn’t realise that I could, or even how to. However, I was recording the Volca Bass and FM onto separate tracks in Ardour, so I was able to compensate for the latency afterwards.

On reflection I should have recorded each orbit (d1, d2 etc.) onto separate tracks. At the time I didn’t realise I could do this, but it’s pretty simple, with clear instructions on the Tidal website, and there are friendly people on the Toplap chat who helped me. This would have allowed me to do additional mixing once it was recorded (my Tidal stuff is typically way too loud). Aside from those observations I’m really happy with how it sounds! I’ve shared my code below, which may be useful to study, but of course you’ll need Volcas/midi devices to fully reproduce it.

setcps (99/60/4)

d1 -- volca fm
$ off 0.25 ((fast "2") . (|+ note "12 7"))
$ note "gs4'maj'7 ~"
# s "midi1"

d6
$ stack [
sound "kick:12(5,8) kick:12(3,<8 4>)",
sound "sd:2",
stutWith 2 (1/8) ((fast 2) . (# gain 0.75)) $ sound "hh9*4",
sound "bd*16" # speed 2 # vowel "i"
]

d4 -- volca bass
$ fast 2
$ stutWith 2 (1/4) ((|+ note "24") . (slow 2))
$ note "~ ~ ~ gs2*2"
# s "midi2"

d7
$ loopAt "0.75 0.25"
$ chunk 4 (slow "2 4")
$ sound "bd*<32 16> kick*32"
# speed (fast "0.25 1" $ range 4 "<0.25 0.75 0.5>" $ saw)
# accelerate "3 2 0"
# gain 0.75
# pan (slow 1.2 $ sine)

d2 -- transpose volca fm
$ segment 32
$ ccv 50
$ ccv (range 10 (irand 10+60) $ slow "8 3 7 3 1" $ sine )
# ccn "40"
# s "midi1"

If you enjoyed my track or any of the others on the compilation please consider buying the compilation or making a donation to Young Minds Together and help the fight against racial injustice.

Coder Beatz

Happy to be working with Birmingham Open Media to deliver Coder Beatz, a creative digital programme focusing on live coding for young black kids in the West Midlands.

Coder Beatz is a new creative digital programme for young black kids aged between 11 and 15.
We are running 4 monthly Coder Beatz workshops between November 2020 and February 2021. In each session we will be teaching kids how to create digital music and visuals using live coding and algorithms. The sessions will be delivered by Antonio Roberts, who is a renowned digital artist and expert coder. Being a man of colour, Antonio is really passionate about inspiring young black kids to get skilled up on coding music and visuals.

Kids will not need any music or tech experience, and we will provide laptops and headphones for them at BOM’s art center.

Over four sessions I’ll be teaching how to use TidalCycles for making music and Improviz for making visuals. All of the details, including how to sign up, can be found by contacting Birmingham Open Media.

On a personal level I’m really happy to be delivering this programme because, during the six-ish years I’ve been live coding at Algoraves, I’ve noticed that the scene is very good at addressing gender inequalities but, at least in the UK, is still very white (which could probably be said of electronic music more generally).

Through delivering the programme I hope to demonstrate the creative possibilities of programming and, while I don’t expect those who take part to become fully fledged Algoravers, I do hope it encourages them to explore ways of making digital music and art beyond the “standard” ways of using tools like Ableton and Adobe software.

I also recognise that there are other issues that need to be addressed to make live coding more diverse. For example, encouraging more black people to build live coding tools, recognising and celebrating the impact black culture has had on digital art/music… And I hope this is part of that process.

Please get in touch with BOM if you’re interested or know anyone who would be great for this!

The Stay at Home Residency – part 3

From 1st – 29th July I was happy to be selected as an artist in residence for The New Art Gallery Walsall’s Stay at Home Residencies.

In the second blog post I looked at how I approached filming. In this third and final blog post I’ll be detailing my sound making process and sharing the finished film.

The next stage in making this film was working on the sound. As you can hear in a couple of the clips in the previous blog post, the area I live in is really, really quiet! Everyone in the local area was using the Summer to sit outside bathing in the sunlight. Very relaxing for sure, but recordings of the ambient background noise didn’t make for an interesting soundtrack. There was once the sound of a wood chipper, but otherwise it was mostly silent. At times me playing music was the loudest sound!

Instead I took to making recordings from within the home. This process made me very aware of the variety, and at times lack thereof, of sounds in my home environment. There’s lots of shuffling, tapping, television and dampened thud sounds. With the exception of the television, the place with the most variety of sounds is most definitely the kitchen, and so most of the sounds I used came from there. There are sounds of glass, metal, wood and water, and even from inside the fridge!

If you’ve been following any of my work for a while you’ll see that I’ve done a lot of live coding performances over the last two years. I like the liveness of this process and so chose to incorporate it into my sound making process. I took the samples that I recorded into TidalCycles and got coding! Here’s some of the recordings along with variations on the code that created them.

setcps(50/60/4)

d1
$ sometimes (fast 2)
$ whenmod 8 6 (# speed 0.5)
$ slow "4 2? 1"
$ sometimes (# accelerate "-0.05 0 0.02")
$ loopAt "1 0.25?"
$ stutWith 4 (1/8) (# speed 1.25)
$ sound "bowl*<1.5 2 1> blinds*<1 2>"
# n (irand 3)

d2
$ sometimes (fast 1.35)
$ striate "2 4 8"
$ stutWith "8 2 1" (1/16) (# speed (irand 3-1))
$ sound "droplet*4"

d3
$ every 7 (# speed "0.5")
$ slow 4
$ sometimes (striate "8")
$ stutWith 8 (1/8) (soak 4 (|+ speed 0.15))
$ juxBy (slow 3 $ sine) ((# speed 2) . (# accelerate "-1"))
$ sound "stackingplates*2 [whack(3,9)]"
# n "1 2"
# pan (perlin)

d4
$ hurry "2 1 4 8"
$ sound "whack*4"

Although not the same as the drone soundscapes that Rodell Warner creates, I thought they provided a lot of texture and would work well as an accompaniment to a drone soundscape. For that I loaded up Ardour and the Helm synthesiser.

The process of making and putting together all of these separate parts was in no way linear. The tutorials I followed all recommended writing a script or having a plan and I certainly didn’t have either. For this exploratory stage of my journey into film making I think that was mostly ok but for anything in the future I would at least consider what kind of atmosphere, emotions, or general message I wanted to convey.

The actual editing process was a big chore. Open source video editing software on Linux still leaves a lot to be desired. Despite there being a number of video editors available nearly all of them have one failing in common: stability. With just a few HD resolution clips and no effects or transitions I was experiencing a lot of stuttering during seeking and playback and crashes when rendering. This, of course, caused a lot of frustration and definitely resulted in me spending less time editing than I would have liked to. For recent videos I’ve used Olive which has worked really well – seeking on the timeline is fast and there are few crashes – but at the time of editing version 0.2 was still too unstable to be usable.

After that last hurdle I feel I have produced a film that demonstrates a lot of what I’ve learnt.

The film, titled Windows Explorer, represents my desire to be out in the world again. Like pretty much everyone my world has shrunk and my engagement with the world comes from looking out of and into various windows, whether that be out of my office window or into a Zoom, Skype, Teams, Jitsi or whatever window.

With Thanks

This residency was certainly a big learning experience. In a conversation with the curators at the gallery I expressed concern that I wasn’t making enough, or that everything that I was making was, well, crap in comparison to the digital art portfolio that I’ve built up over the last decade. They reassured me that I was trying something new and so couldn’t be expected to be immediately great at it. Even if I was in a situation where I had access to a team and equipment, a month isn’t really a long time to fully learn a new skill and make a complete piece of work using that skill. This really helped to put into context that this residency was time for me to reflect on my practice and to learn at my own pace.

From this residency I feel a lot more prepared to make narrative film, even if it’s a 1-minute film. I’ve already upgraded my equipment in preparation for future projects and have more knowledge of the multi-level process that goes into making a film.

Many thanks to The New Art Gallery Walsall for this opportunity 🙂

The Stay at Home Residency – part 2

From 1st – 29th July I was happy to be selected as an artist in residence for The New Art Gallery Walsall’s Stay at Home Residencies.

In the first blog post I looked at my influences and research carried out before I started making work. In this second blog post I’ll be showing some of the filming I did.

With the research conducted and panic now over I started filming again. I began by filming various things in my home. I tried to focus on shots that would have some movement in them, even if it were only background movement. Because of this most of my shots look out of a window. Although the background is blurred whatever movement there is – be it the trees, people, or lights turning on/off – makes the still shot that little bit more interesting.

Next, I decided to bring out my projector and see what I could do with it. By now my projector is at least seven years old (I originally purchased it for a BYOB event in 2013), so not only is the projection quality quite poor, there are also glitchy lines running through the projection.

I had thought about making animations to project onto various objects, but I didn’t want to turn this into an animation project. I’ve long used my Glass video when experimenting with projections and I liked how it made any surface it landed on just way more interesting. To replicate this saturation of glitchy colour and movement I installed a copy of waaave_pool onto a Raspberry Pi, connected a webcam to it and pointed the webcam at random surfaces in the room.

waaave_pool itself is a bit like a video synthesiser, working primarily with webcam video input. With it installed I made some things like this:

I liked these projection experiments most when they were really subtle. I didn’t want the projection to overpower the surface and render it invisible or irrelevant. For example, in one experiment I projected onto cushions, which looked really great but the cushions got lost behind the projections.

I also played with a strip of LED lights I had from a previous project. They can be programmed to flash quickly, but they seemed to work best pulsating slowly, which very much matched the pace of the shots I had filmed so far.

In the next blog post I’ll be detailing how I made sounds for the film and sharing the finished film.

The Stay at Home Residency – part 1

From 1st – 29th July I was happy to be selected as an artist in residence for The New Art Gallery Walsall’s Stay at Home Residencies.

The New Art Gallery Walsall has adapted its Studio residency programme in the wake of the Coronavirus pandemic to support three artists based in the West Midlands to produce work from their homes between May and July this year.

Following an open-call to artists based in the West Midlands, the Gallery received 60 varied proposals from a diverse range of artists working across the region. The many challenges that artists are facing during lockdown were well articulated. In selecting, we were keen to see opportunities for artistic and professional development during these challenging times, to support creative approaches to practice amid imposed restrictions and to explore the benefits and possibilities of sharing with an online audience.

It’s been some months since the residency ended and I really learned a lot. In this three-part blog post series I’ll be talking a bit about the month of learning and creating, the struggles I had, what I did to overcome them, and some of my thoughts on the final outcome. In this first blog post I’ll be going over my research and influences.

My reason for doing the residency was to explore ways of making work without a computer. Quoting from my application:

Creating my digital video works is a very stationary process, requiring me to spend long hours sat in my home office at my desk on the computer. I have long had a desire to step away from the desk and learn film and sound production techniques. I already own much of the required equipment including a DSLR camera, microphone and tripod. I have mainly used these to document events or exhibitions.

This residency would grant me the opportunity to step into learning film production techniques. I will study available materials (digital books and tutorial videos) and implement what I learn when creating the films.

Looking back over the last 10 years of my practice I have noticed that most of my work has been computer generated videos and animation.

Loud Tate: Code

Most of these works are generative and, much like animated gifs, they don’t have an extensive narrative and are best viewed on repeat. This isn’t a downside to the works, but making something with a narrative using filmed footage was definitely of interest to me for this residency.

I began the residency exploring the technical processes involved in film making. I have used cameras for a long time but often I don’t explore their full capabilities. I usually just leave the settings on Auto and most of the time it works out fine! It’s similar with lenses. The camera I owned at the time of the residency was an Olympus Pen F, together with 45mm and 17mm lenses. I only ever really understood that the former is good for portraits and the latter for landscapes/outdoors, but I still didn’t understand why.

I wanted to understand this and more so spent a lot of time watching videos and reading tutorials. Two really interesting videos were The Changing Shape of Cinema: The History of Aspect Ratio and The Properties of Camera Lenses from Filmmaker IQ.

These two videos, and the many others I watched late one evening, went into far more detail than I needed about film, the history of cinema, and equipment. I also didn’t own 99% of the equipment and resources the videos mention, but it was really interesting to know how all those things go into making a film and achieving a certain cinematic look.

The next set of videos that was really insightful was the Crash Course Film Production series. The Filmmaker IQ videos focused on specific details about film making, whereas these were perhaps more relevant to me as they were produced from the viewpoint of someone with no knowledge wanting to know what goes into making a film. The third video, The Filmmaker’s Army, is particularly enlightening as it explains a lot of the roles in a film production and how they work together to make a finished film.

One of the main things I took from watching this series of videos is that there is a lot of planning that goes into a film. Depending on the scale of the project the time between writing a script and filming can be years! And when on a film set a lot of the roles are there to ensure each person is doing the correct things at the right time.

Although all of this was really exciting and inspiring to learn at the beginning of the residency, there was one big problem: almost none of it was applicable to me at this time. Quoting my application:

Using tools and materials I have in my home – which include programmable lights, a projector, screens, and other electronics – I want to create a series of short abstract films that explore the use of digital art, light, and projection to illuminate my home and immediate surroundings. The everyday objects in the home, the grass outside, the brickwork and more will act as both creative material and canvas for abstract projections.

I was strict in my desire to create a film only within the home. This meant that I couldn’t acquire stage lights, microphones or other equipment. I had to use whatever I had in whatever filming conditions I was given. Still, these restrictions could hopefully provide inspiration.

Early on I struggled to make anything interesting. I filmed whatever I could find in my home but it was all very static and at times boring. It was then that I realised that the domestic environment, especially during lockdown, is a pretty boring place! In my household there are only two people and the environment doesn’t change that much. It’s not like the outdoors where the environment changes, or like a gallery space, which can be reconfigured and has access to lots of equipment. In short, everything is just static. I was very worried that whatever I made would be very boring to watch.

I started to look to other films and artists for inspiration. I was browsing Mubi one day and saw a movie called Villa Empain by Katharina Kastner. I had no idea what it was about at the time but it was short and gave me a distraction from the panicking!

It turned out to be exactly the kind of film I needed to see. To me it was a series of animated portraits of the Villa Empain building. A lot of the shots in the film were static, featuring minimal movement from the pool water, trees, or sun shining through the stained glass windows. It was quite a meditative film. It helped to show me that a film didn’t need to be action packed to be interesting.

I also remembered the work of Rodell Warner (having first seen his work in 2019 at bcc:). In his Augmented Archive series he takes an archive picture, adds a drone soundtrack and animates it using a flickering effect (plus his own 3D sculptures). Of course there is a much deeper concept than my very technical description (and you should see more of his work to understand), but it showed me that there are ways to add depth and movement to static imagery.

In the next blog post I’ll be detailing the process of filming shots.