Controlling Improviz Using Midi via OSC

In 2020 I ran quite a number of workshops on using the Improviz live coding visuals environment. Improviz can be thought of as a fork of Livecodelab, especially as its developer, Guy John, is one of the developers of Livecodelab. However, it has some key differences that make it stand out as its own unique piece of software:

  • It works on the desktop, and I think it is faster because of it
  • The language is more fully documented
  • You can load your own textures, gifs, 3D models, and shaders

Being able to load your own textures might in itself be a reason for many people to switch from Livecodelab to Improviz. Things can be just that bit more personalised when you’re using your own images and objects rather than only colours, gradients and basic geometric shapes. Another potentially useful difference is that you can interface with Improviz using Open Sound Control (OSC). This opens up the possibility of controlling it from other software or from external hardware devices. In this blog post I’ll take you through how you can connect a midi controller to Improviz via OSC and Pure Data.

To get started you first need to define a variable in Improviz that you want to be changed by OSC/midi. The name of this variable can be anything as long as it’s not a name already used as a function or variable in Improviz. Check the reference page for a list of reserved names. In my example I’ve used the variable name size.

size = ext(:size, 1)
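
To check that the variable is usable I can drop it straight into a sketch. Below is a minimal example – I’m assuming Improviz’s stock cube shape function here. Until a value arrives over OSC, size simply keeps its default of 1.

size = ext(:size, 1)
cube(size, size, size)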

Next, we need to connect to it via OSC so that we can change its value.

When you launch Improviz from the terminal, one of the messages you’ll see printed is the port its OSC server is listening on.

2021-03-25 20:53:.732595  INFO: Running at 640 by 480
2021-03-25 20:53:.732733  INFO: Framebuffer 640 by 480
2021-03-25 20:53:.390032  INFO: Loaded 3 texture files
2021-03-25 20:53:.437047  INFO: Loaded 8 material files
2021-03-25 20:53:.441641  INFO: Loaded 5 geometry files
2021-03-25 20:53:.441718  INFO: *****************************
2021-03-25 20:53:.441766  INFO: Creating Improviz Environment
2021-03-25 20:53:.466755  INFO: Loading ./stdlib/variables.pz
2021-03-25 20:53:.466846  INFO: Loading ./stdlib/transformations.pz
2021-03-25 20:53:.466890  INFO: Loading ./stdlib/shapes.pz
2021-03-25 20:53:.466930  INFO: Loading ./stdlib/style.pz
2021-03-25 20:53:.466968  INFO: Loading ./stdlib/textures.pz
2021-03-25 20:53:.467004  INFO: Loading ./stdlib/screen.pz
2021-03-25 20:53:.467039  INFO: Loading ./usercode/grid.pz
2021-03-25 20:53:.467078  INFO: Loading ./usercode/seq.pz
2021-03-25 20:53:.467116  INFO: Improviz OSC server listening on port 5510
2021-03-25 20:53:.467297  INFO: Improviz HTTP server listening on port 3000
2021-03-25 20:53:.467405  INFO: Improviz resolution: 640 by 480

Of course you can, at this stage, use any software that can send data over OSC, but for this blog post/tutorial I’ll be using Pure Data. Alternatives exist but I like using it as it’s lightweight, stable and cross-platform.

To send OSC messages use the [netsend] object in UDP mode to connect to the same IP address as Improviz (usually 127.0.0.1) and the same port (5510). [netsend] will output a 1 from its outlet to show a successful connection. With the connection established I can now send values from a number box to the variable via OSC!
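
Since the patch itself is a download further below, here’s roughly what the object chain looks like if you build it with vanilla Pd’s [oscformat] (the mrpeach [packOSC] and [udpsend] objects will do the same job). I’m assuming here that Improviz expects external variables at the address /vars/ followed by the variable name – check the OSC section of the Improviz documentation to confirm.

[connect 127.0.0.1 5510(   <- message box: click once to open the connection
 |
[netsend -u -b]            <- UDP, binary mode; the outlet prints a 1 on success

[0        ]                <- number box holding the value to send
 |
[oscformat vars size]      <- wraps the number in an OSC message to /vars/size
 |
[list prepend send]
 |
[list trim]
 |
(into the same [netsend -u -b] as above)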

Right now I’m using a number box whose values I set by manually clicking and dragging. I could have the numbers generated randomly using the [random] object, or even add some level of audio reactivity using the [adc~] object. If that’s your thing, go do it! Keeping to this blog post’s title, though, I’ll be using a midi controller to change these values. For this next stage you should know that I’m using Ubuntu (20.10) as my operating system. This means that the instructions, especially those concerning connecting a midi controller, may be different for your operating system. Sadly I can’t help with that.

Connecting a midi controller to Pure Data is quite easy. I’m using an Akai MPK Mini MKII, but the instructions are much the same for pretty much any midi controller. First make sure that Pure Data is exposing at least one midi port. On Linux, set the midi backend by selecting Media > ALSA-MIDI, then go to Media > MIDI Settings… and make sure you have at least one midi input.

Then, open QjackCtl, click on the Connect button and under the ALSA tab connect the MPK Mini Mk II output port to the input port of Pure Data.

In Pure Data you can now read the Control Change (CC) values of one of the knobs or pads using the [ctlin] object. On my MPK the first dial (K1) is [ctlin 1]. It outputs values from 0 – 127 (128 values). I want it to change the size of a cube from 0 – 4, so I need to map the ranges. I found this very handy mapping abstraction so I’ll be using that. With the ranges mapped I can use the knob on my controller to change the size!
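
If you’d rather not use an external abstraction, the mapping is just a linear scale, so vanilla Pd’s [expr] object can do it – a quick sketch, assuming the same 0 – 127 in and 0 – 4 out ranges as above:

[ctlin 1]
 |
[expr $f1 / 127 * 4]   <- knob value 0-127 becomes 0-4
 |
(on to the OSC sending chain from earlier)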


Pure Data patch and Improviz code are here: pd_improviz_4.zip

For my next trick I want one octave of keys to alter the shade of grey of the cube. The [notein] object tells me the midi note number of the key being pressed, and the octave I’m playing spans midi notes 48 – 59. Using the [maxlib/scale] object again I can map that range to 0 – 255 and send the values over OSC to a variable in Improviz that is used in the fill function.
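
On the Improviz side this just means a second external variable feeding the fill function. A minimal sketch of how that might look – I’m calling the new variable grey here, and fill takes red, green and blue values from 0 to 255:

grey = ext(:grey, 255)
size = ext(:size, 1)

fill(grey, grey, grey)
cube(size, size, size)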


Pure Data patch and Improviz code are here: pd_improviz_5.zip

For my final form I’ll use one of the pads on the midi controller to toggle a random colour generator.


Pure Data patch and Improviz code are here: pd_improviz_6.zip

One of the possibilities opened up by controlling visuals with a midi controller in this way is that you can control the audio and visuals simultaneously, rather than one being triggered in reaction to the other. In my experience of doing live visuals it has been quite normal for visuals to move or, as is quite often the case, pulsate in reaction to the amplitude of the music. In fact I did this many years ago for a video for My Panda Shall Fly.

What I’ve sometimes noticed is that there’s latency and the reactive visuals often feel like they’re coming in too late after the beat/instrument has hit. Of course the latency can be reduced by adjusting the sensitivity of the audio input device (microphone or line in) but then it’s a fine balancing act of both the musician and visualist adjusting levels. Achievable but a pain!

By having one device/controller trigger both you can, in theory, have them happen simultaneously. Here’s a demonstration of this from October 2020.

As you can see the midi controller is controlling both the visuals and the audio. When I eventually get back to performing live gigs this is definitely something I’m going to explore further. Until then, have fun mixing live coding with midi controllers!

hydra meetup #5

Thank you for joining the past hydra meetups, and we are excited to announce the 5th edition!

Presentations
hellocatfood
Melanie Wilson
Jamie Faye Fenton

(Algo|Afro) Futures

I’m happy to launch (Algo|Afro) Futures, a mentoring programme for early career Black artists in the West Midlands who want to explore the creative potential of live coding.

Live coding is a performative practice where artists and musicians use code to create live music and live visuals. This is often done at electronic dance music events called Algoraves, but live coding is a technique rather than a genre, and has also been applied to noise music, choreography, live cinema, and many other time-based artforms.

(Algo|Afro) Futures will take place between April and June, online and at Vivid Projects, and will consist of four sessions. Dates will be confirmed in response to lockdown restrictions and participant availability.

Algorave Birmingham

Four participants will receive mentorship from myself and Alex McLean on all things live coding. Each participant will receive a fee of £100 per mentoring session attended plus reasonable travel expenses.

This opportunity is open for Black West Midlands-based artists only. The call is open now until 23:59 GMT on 14th March. Further information about the programme, FAQs and the application form can be found on the (Algo|Afro) Futures website.

Late at the Library: Algorave

(Algo|Afro) Futures is organised with FoAM Kernow and Vivid Projects, in collaboration with and funded by the UKRI research project “Music and the Internet: Towards a Digital Sociology of Music”.

Making Pulse

On November 6th the Compassion Through Algorithms Vol. II compilation was released, raising money for Young Minds Together. The compilation is still available, and of course you can donate directly to Young Minds Together if you prefer.

In this blog post I’ll be going over how I made my track, Pulse.

I’m two years into making music and I’ve recently become more comfortable and confident in my processes. I’ve gotten over the technological hurdles and, having experimented in making music/sounds of different styles both in private and at Algoraves, I feel I’ve found a range of styles that I like making music in. In the live coding music world some of my biggest influences have been eye measure, Miri Kat, Yaxu, and Heavy Lifting. Their work spans many genres but what I’m drawn to in their music is the more sparse, ambient and even sometimes aggressive sounds. I tried to keep this in mind when making Pulse.

As with most things I make, I started by just experimenting. I can’t fully remember my thought process but at some point I landed on turning the kick drum sample (“bd” in Tidal) from a percussive sound into a pitched instrument. I achieved this by triggering the sample many times in quick succession and playing with the speed at which it was played back.

setcps (135/60/4)

d1 
$ sound "bd*4"

d7
$ loopAt "0.75 0.25"
$ chunk 4 (slow "2 4")
$ sound "bd*<32 16> kick*32"
# speed (fast "0.25 1" $ range 4 "<0.25 0.75 0.5>" $ saw)
# accelerate "3 2 0"
# gain 0.75
# pan (slow 1.2 $ sine)

I like the piercing buzzing nature of the sound and so decided to focus on building the track around this. Next I had to get the tempo right. By default Tidal runs at 135 bpm (0.5625 cps). Running that code at 135 bpm felt way too fast and so I tried bringing it down to 99 bpm.

It’s no longer at a speed to dance to but it makes for better listening. It also meant I could more accurately identify what note the buzzing sound was at. The loopAt command affects the pitch of the samples and is itself affected by the tempo that Tidal is running at, so setting it to 99 bpm (setcps (99/60/4)) revealed that the buzzing sound was at a G-sharp. It’s probably still a little bit out of tune but it’s close enough!

In late August I bought + was given the Volca Bass and the Volca FM synths. By this time I had been using bass samples in this track but saw this as an opportunity to give these newly acquired synths a try! The Tidal website has instructions on setting up midi, which worked well. One issue was that I was using two of the same usb-to-midi adaptors. On the surface this isn’t a problem but, at least according to Tidal’s midi instructions, when adding a midi device you do so by name and not by any sort of unique ID. Running MIDIClient.init with both adaptors connected gave me this:

MIDIEndPoint("USB MIDI Interface", "USB MIDI Interface MIDI 1")
MIDIEndPoint("USB MIDI Interface", "USB MIDI Interface MIDI 1")
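
For context, the relevant part of the SuperDirt setup from the Tidal midi instructions looks something like the snippet below – the device is picked out purely by those two name strings, which is exactly the problem when both adaptors report the same name (\midi1 is just the name I later target from Tidal with s "midi1"):

MIDIClient.init;
// only the device and port name strings identify which adaptor gets used
~midiOut = MIDIOut.newByName("USB MIDI Interface", "USB MIDI Interface MIDI 1");
~dirt.soundLibrary.addMIDI(\midi1, ~midiOut);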

I didn’t know which of the two adaptors Tidal was going to send midi messages to, and so had no idea which synth would be triggered! Fortunately Alex McLean was on hand to provide a (Linux-specific) solution. The dummy Midi Through Port-0 port exists by default, so Alex suggested adding another one. I’ll quote Alex from the Toplap chat:

if you add options snd-seq-dummy ports=2 (or more) to /etc/modprobe.d/alsa-base.conf
you’ll get two of them
the other being
Midi Through Port-1
obvs
then you can tell supercollider/superdirt to connect to them
then go into qjackctl and the alsa tab under ‘connect’ to connect from the midi through ports to the hardware ports you want
then you can make them connect automatically with the qjackctl patchbay or session thingie
I like doing it this way because it means I can just start supercollider+superdirt then change round which midi device I’m using super easily.. plugging/unplugging without having to restart superdirt
I don’t know if this will solve the problem of having two devices with the same name but hopefully..
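
To spell that out as a file edit (this is just Alex’s suggestion written down – the extra dummy port only appears once the snd-seq-dummy module is reloaded, or after a reboot), the line added to /etc/modprobe.d/alsa-base.conf is:

options snd-seq-dummy ports=2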

With that all fixed I recorded my track! Here’s a live recording of me, um, recording it. It was made using Tidal; the code is just on a screen out of shot.

As you may have noticed there’s some latency on the Volca Bass. I should have adjusted the latency in Jack to account for this, but at the time I didn’t realise that I could do this or even how to do it. However, I was recording the Volca Bass and FM onto separate tracks in Ardour, so I was able to compensate for the latency afterwards.

On reflection I should have recorded each orbit (d1, d2 etc.) onto a separate track. At the time I didn’t realise I could do this but it’s pretty simple, with clear instructions on the Tidal website, and there are friendly people on the Toplap chat who helped me. This would have allowed me to do additional mixing once everything was recorded (my Tidal stuff is typically way too loud). Aside from those observations I’m really happy with how it sounds! I’ve shared my code below, which may be useful to study, but of course you’ll need Volcas/midi devices to fully reproduce it.

setcps (99/60/4)

d1 -- volca fm
$ off 0.25 ((fast "2") . (|+ note "12 7"))
$ note "gs4'maj'7 ~"
# s "midi1"

d6
$ stack [
sound "kick:12(5,8) kick:12(3,<8 4>)",
sound "sd:2",
stutWith 2 (1/8) ((fast 2) . (# gain 0.75)) $ sound "hh9*4",
sound "bd*16" # speed 2 # vowel "i"
]

d4 -- volca bass
$ fast 2
$ stutWith 2 (1/4) ((|+ note "24") . (slow 2))
$ note "~ ~ ~ gs2*2"
# s "midi2"

d7
$ loopAt "0.75 0.25"
$ chunk 4 (slow "2 4")
$ sound "bd*<32 16> kick*32"
# speed (fast "0.25 1" $ range 4 "<0.25 0.75 0.5>" $ saw)
# accelerate "3 2 0"
# gain 0.75
# pan (slow 1.2 $ sine)

d2 -- transpose volca fm
$ segment 32
-- $ ccv 50 (commented out: this line would conflict with the ccv pattern below)
$ ccv (range 10 (irand 10+60) $ slow "8 3 7 3 1" $ sine )
# ccn "40"
# s "midi1"

If you enjoyed my track or any of the others on the compilation please consider buying the compilation or making a donation to Young Minds Together and help the fight against racial injustice.

Peer to Peer: UK/HK – 11th – 14th November 2020

From 11th – 14th November I’ll be presenting new commissioned work as part of the Peer to Peer: UK/HK programme.

Peer to Peer: UK/HK is a digital programme and platform encouraging meaningful cultural exchange and forging enduring partnerships between the UK and Hong Kong’s visual arts sectors.

The programme launches with an online festival of international exchange and collaboration taking place 11-14 November.

The Festival will include an online exhibition of digital artworks from UK and Hong Kong based artists, including 5 new commissions by artists nominated by UK and Hong Kong based partners. There will also be a series of digital residencies taking place across partner organisations’ social media channels, as well as a set of curated panel discussions.

The Festival is led by Ying Kwok (Festival Director and independent curator, HK) with Lindsay Taylor (University of Salford Art Collection), Open Eye Gallery and the Centre for Chinese Contemporary Art (CFCCA), supported by a project team.

In the spirit of exchange and collaboration the Festival is piloting a distributed leadership model, involving co-curation and co-production with partner organisations.

The project has been generously supported by funding from Arts Council England and the GREAT campaign.

I’m one of the five commissioned artists, alongside Danielle Brathwaite-Shirley, Hetain Patel, Lee Kai Chung and Sharon Lee Cheuk Wan. My commission will be a live coded audio/visual work which will then enter the University of Salford Art Collection as a permanent legacy of the project. Many thanks to Charlotte Frost from Furtherfield for the nomination!

Live Coding using Improviz

Tickets from £10
Improviz is an environment built by Guy John aka Rumblesan, which can be used for live coding visual performances. Its easy-to-learn language for creating visuals can be extended through the use of custom GL shaders and by using your own GIFs, 3D models and image textures.

This two-hour workshop, led by visual artist Antonio Roberts aka hellocatfood, will introduce you to the world of live coding, and guide you through the basics of using Improviz for live visuals. This workshop will also include a short look at how artists and musicians use code to make visuals and music in real time at Algoraves. If you’ve ever been curious about using code to make live visuals and have aspirations to perform at live coding events and Algoraves then this workshop is perfect for you.

No coding experience is necessary.
You will need:
– A Windows, Mac or Linux laptop that can connect to the internet
– Google Chrome or Firefox
– Improviz (desktop version)
– Atom
– Zoom

Ahead of the workshop please follow this guide to make sure you can locate and open Improviz using the terminal/command line on your computer.
Mac OS guide
Linux guide
Windows guide

Coder Beatz

Happy to be working with Birmingham Open Media to deliver Coder Beatz, a creative digital programme focusing on live coding for young black kids in the West Midlands.

Coder Beatz is a new creative digital programme for young black kids aged 11-15.
We are running 4 monthly Coder Beatz workshops between November 2020 and February 2021. In each session we will be teaching kids how to create digital music and visuals using live coding and algorithms. The sessions will be delivered by Antonio Roberts, who is a renowned digital artist and expert coder. Being a man of colour, Antonio is really passionate about inspiring young black kids to get skilled up on coding music and visuals.

Kids will not need any music or tech experience, and we will provide laptops and headphones for them at BOM’s art centre.

Over four sessions I’ll be teaching how to use TidalCycles for making music and Improviz for making visuals. All of the details, including how to sign up, can be found by contacting Birmingham Open Media.

On a personal level I’m really happy to be delivering this programme because during the six-ish years I’ve been live coding at Algoraves I’ve noticed that the scene is very good at addressing gender inequalities but, at least in the UK scene, it’s still very white (which could probably be said of electronic music more generally).

Through delivering the programme I hope to demonstrate the creative possibilities of programming and, while I don’t expect those who take part to become fully fledged Algoravers, I do hope it encourages them to explore ways of making digital music and art beyond the “standard” ways of using tools like Ableton and Adobe software.

I also recognise that there are other issues that need to be addressed to make live coding more diverse. For example, encouraging more black people to build live coding tools, recognising and celebrating the impact black culture has had on digital art/music… And I hope this is part of that process.

Please get in touch with BOM if you’re interested or know anyone who would be great for this!

The Stay at Home Residency – part 3

From 1st – 29th July I was happy to be selected as an artist in residence for The New Art Gallery Walsall’s Stay at Home Residencies.

In the second blog post I looked at how I approached filming. In this third and final blog post I’ll be detailing my sound making process and sharing the finished film.

The next stage in making this film was working on the sound. As you can hear in a couple of the clips in the previous blog post, the area I live in is really, really quiet! Everyone in the local area was using the summer to sit outside bathing in the sunlight. It was very relaxing for sure, but recordings of the ambient background noise didn’t make for an interesting soundtrack. There was once the sound of a wood chipper, but otherwise it was mostly silent. At times me playing music was the loudest sound!

Instead I took to making recordings from within the home. This process made me very aware of the variety, and at times lack thereof, of sounds in my home environment. There’s lots of shuffling, tapping, television and dampened thud sounds. With the exception of the television, the place with the most variety of sounds is most definitely the kitchen, and so most of the sounds I used came from there. There are sounds of glass, metal, wood and water, and even from inside the fridge!

If you’ve been following any of my work for a while you’ll know that I’ve done a lot of live coding performances over the last two years. I like the liveness of this process and so chose to incorporate it into my sound making. I took the samples I had recorded into TidalCycles and got coding! Here are some of the recordings along with variations on the code that created them.

setcps(50/60/4)

d1
$ sometimes (fast 2)
$ whenmod 8 6 (# speed 0.5)
$ slow "4 2? 1"
$ sometimes (# accelerate "-0.05 0 0.02")
$ loopAt "1 0.25?"
$ stutWith 4 (1/8) (# speed 1.25)
$ sound "bowl*<1.5 2 1> blinds*<1 2>"
# n (irand 3)

d2
$ sometimes (fast 1.35)
$ striate "2 4 8"
$ stutWith "8 2 1" (1/16) (# speed (irand 3-1))
$ sound "droplet*4"

d3
$ every 7 (# speed "0.5")
$ slow 4
$ sometimes (striate "8")
$ stutWith 8 (1/8) (soak 4 (|+ speed 0.15))
$ juxBy (slow 3 $ sine) ((# speed 2) . (# accelerate "-1"))
$ sound "stackingplates*2 [whack(3,9)]"
# n "1 2"
# pan (perlin)

d4
$ hurry "2 1 4 8"
$ sound "whack*4"

Although they’re not the same as the drone soundscapes that Rodell Warner creates, I thought they provided a lot of texture and would work well as an accompaniment to a drone soundscape. For that I loaded up Ardour and the Helm synthesiser.

The process of making and putting together all of these separate parts was in no way linear. The tutorials I followed all recommended writing a script or having a plan and I certainly didn’t have either. For this exploratory stage of my journey into film making I think that was mostly ok but for anything in the future I would at least consider what kind of atmosphere, emotions, or general message I wanted to convey.

The actual editing process was a big chore. Open source video editing software on Linux still leaves a lot to be desired. Despite there being a number of video editors available, nearly all of them share one failing: instability. With just a few HD resolution clips and no effects or transitions I was experiencing a lot of stuttering during seeking and playback, and crashes when rendering. This, of course, caused a lot of frustration and definitely resulted in me spending less time editing than I would have liked. For recent videos I’ve used Olive, which has worked really well – seeking on the timeline is fast and there are few crashes – but at the time of editing, version 0.2 was still too unstable to be usable.

After that last hurdle I feel I have produced a film that demonstrates a lot of what I’ve learnt.

The film, titled Windows Explorer, represents my desire to be out in the world again. Like pretty much everyone my world has shrunk and my engagement with the world comes from looking out of and into various windows, whether that be out of my office window or into a Zoom, Skype, Teams, Jitsi or whatever window.

With Thanks

This residency was certainly a big learning experience. In a conversation with the curators at the gallery I expressed concern that I wasn’t making enough, or that everything I was making was, well, crap in comparison to the digital art portfolio I’ve built up over the last decade. They reassured me that I was trying something new and so couldn’t be expected to be immediately great at it. Even if I had been in a situation where I had access to a team and equipment, a month isn’t really a long time to fully learn a new skill and make a complete piece of work using it. This really helped to put into context that this residency was time for me to reflect on my practice and to learn at my own pace.

From this residency I feel a lot more prepared to make narrative film, even if it’s a 1-minute film. I’ve already upgraded my equipment in preparation for future projects and have more knowledge of the multi-level process that goes into making a film.

Many thanks to The New Art Gallery Walsall for this opportunity 🙂

Live Coding Visuals with Antonio Roberts

Improviz is an environment built by Guy John aka Rumblesan, which can be used for live coding visual performances. Its easy-to-learn language for creating visuals can be extended through the use of custom GL shaders and by using your own GIFs, 3D models and image textures.

This two-hour workshop, led by visual artist Antonio Roberts aka hellocatfood, will introduce you to the world of live coding, and guide you through the basics of using Improviz for live visuals.

This workshop will also include a short look at how artists and musicians use code to make visuals and music in real time at Algoraves.

If you’ve ever been curious about using code to make live visuals and have aspirations to perform at live coding events and Algoraves then this workshop is perfect for you.

No coding experience is necessary.

You will need:

* A Windows, Mac or Linux laptop that can connect to the internet

* Google Chrome https://www.google.com/chrome/ or Firefox https://www.mozilla.org/en-GB/firefox/

* Improviz (desktop version) https://improviz.rumblesan.com/

* Atom https://atom.io/

* Zoom https://zoom.us/

SonitusLIVE #10 Electronic Music Livestream

Terrific 10th AND PENULTIMATE edition of this series! A very fine mixture of video art, live coding, handmade electronics and boundary-pushing noises 🧡 from….
ADRIAN HOLDER
TASOS STAMOU
HELLOCATFOOD
NNJA RIOT
TOYOTA VANGELIS

Streaming live to youtube.com/tr-33n (individual artist video URLs to come) with back-up to twitch.tv/sonituslive

NB: For our final 2 editions we’ll be trialling a centralised ‘Pay What You Can’ PayPal pot to be distributed equally between all artists (instead of the usual individual donations) and supporting the promo with a few quid to get the word out.