Thoughts on live coding visuals in Pure Data

I took part in Algorave in Gateshead on 26th April. Apart from being incredibly awesome, it was my first time live coding – or rather live patching – visuals in Pure Data from scratch. I emphasise from scratch because nearly all of my performances involve me modifying patches, but never starting with a completely blank canvas. I also occasionally used the HSS3jb as a texture for objects, but never on its own. It’s also great for when crashes occur, which was often ;-). Here are a few samples of my visuals. Videos by Mariam Rezaei:

I learnt a few things about Pure Data that night, and my general opinion is that it isn’t that great as a live coding visuals tool.

One of the first issues is encapsulation of objects. This can be done quite easily, but it’s a manual process which involves cutting all the cords and reconstructing the patch. That is, you would have to cut the selection of objects, paste them into a subpatch and then reattach it. By way of comparison, Max/MSP already has this as a feature; a feature request for Pure Data is now on the bug tracker. Not being able to auto-encapsulate objects makes reuse a bit more difficult and cumbersome, which resulted in some really messy patches from me on the night.

Algorave patches

This also relates to another issue: object insertion. When I was building my patches I would often have to pre-empt what I would need. I nearly always started with [gemhead]-[translateXYZ]-[rotateXYZ]-[repeat 10]-[rotateXYZ]-[translateXYZ]-[color]-[cube]. Inserting any additional objects required me to cut a cord and therefore the screen output. This would be solved if there were, for example, a method whereby if two connected objects were selected, the next object created was inserted in between them. This is obviously an over-simplified specific use case which would need more thought behind it. A feature request for this is now on the bug tracker.

There were other thoughts I had on the night, such as the inconsistencies and clumsiness of using the [repeat] object, the lack of a snap-to-grid option for aligning objects, the tiny size of inlets and outlets – even when the objects themselves may be huge, which is only exaggerated when using a 13″ 1080p screen – and the lack of a toolbar (yes, I am aware of GUI plugins), but the two above are the ones I felt would’ve helped me most.

Has much else been written about the use of Pure Data for live coding visuals?

Black Hole Club Social/ Digbeth First Friday – May 2nd

Since March I’ve been part of Vivid Projects’ Black Hole Club along with nine other artists. As part of a new initiative called Digbeth First Friday on 2nd May we’ll be having a bit of a social event!


The Black Hole Club is a lively, daring space for all kinds of creative people to share ideas. Join us for a social evening of visuals and sonics from club members. Expect beer, music and conversation, which journeys from David Lynch to Oculus Rift!

Digbeth comes alive on the first Friday of each month with exhibitions, late-night openings, special events, culture in unexpected places, live music, street food and more.

With different things to see and do each month, anything could happen on a first Friday night out. Grab a Disloyalty Card from participating venues and collect stamps to trade for treats as you sample the great independent culture Digbeth has to offer.

It’s all happening between 6pm and 9pm. Expect there to be lots of analogue video hardware for you to play around with! Oh, and it’s totally free!

spɛl ænd spik

Phonemes are the smallest units of sound in a language. Unlike letters, which describe how words are written, phonemes describe how words should be pronounced. There are around 44 phonemes in the English language, though this varies with different accents and dialects.

In spɛl ænd spik I hand over the composition of these phonemes to a computer program and text-to-speech software. Unlike the process of haphazardly arranging letters, when phonemes are strung together there is less chance of the result being unpronounceable.
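
As a rough illustration – a hedged sketch rather than the project’s actual code, assuming espeak (one of the synthesizers lexconvert supports) – espeak will read phoneme mnemonics directly when they are wrapped in double square brackets, so a string of phonemes can be handed straight to the synthesizer without ever spelling a word:

    # Speak a made-up string of phoneme mnemonics directly,
    # bypassing English spelling altogether (espeak's [[ ]] syntax).
    espeak "[[tA:zmi:k]]"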

When composed haphazardly by a computer, do these new sounds make sense to human listeners? Can they be mistaken for English? Do changes in the voice, speed, pitch and gender of the computerised voice affect how we interpret these nonsensical sounds? Does the use of a human avatar help our understanding of these sounds as English words?
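
Those parameters are all exposed on espeak’s command line (-v voice, -s speed, -p pitch), so – again as a hedged sketch assuming espeak – the same made-up phoneme string can be delivered in very different voices:

    # Same phoneme string, different delivery.
    espeak -v en+m3 -s 90 -p 30 "[[tA:zmi:k]]"    # slow, low-pitched male variant
    espeak -v en+f4 -s 180 -p 70 "[[tA:zmi:k]]"   # fast, high-pitched female variant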

Download code

https://github.com/hellocatfood/spell-and-speak

spɛl ænd spik was developed for Electronic Voice Phenomena.

spɛl ænd spik uses code by Silas S. Brown.

  • linuxgazette.net/181/brownss.html – Simple lip-sync animations in Linux
  • lexconvert – a converter between the lexicon formats of different speech synthesizers

spɛl ænd spik was developed with programming assistance from Michael Murtaugh and photographic assistance from Pete Ashton.

Dependencies

spɛl ænd spik was developed on Ubuntu 13.10 with the following software:

Usage

Take three pictures: one with your mouth closed (1.jpg), one with your mouth partly open (2.jpg) and one with your mouth fully open (3.jpg) (instructions adapted from here). Put these in the same location as the script.

In the terminal run ./spell_and_speak.sh
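
For a flavour of the general approach, here’s a minimal, hedged sketch – not the actual spell_and_speak.sh – assuming espeak for synthesis, aplay for playback and ImageMagick’s display for flipping the mouth images:

    #!/bin/bash
    # Hypothetical sketch of the general idea -- not the actual spell_and_speak.sh.
    # Assumes espeak, alsa-utils (aplay) and ImageMagick (display) are installed.

    # A small, illustrative subset of espeak's phoneme mnemonics.
    PHONEMES="p b t d k g f v s z m n l r w h A: i: u: e I V @"

    # Build a nonsense "word" from five randomly chosen phonemes.
    # (Adjacent mnemonics can merge into multi-character ones, which is fine here.)
    WORD=""
    for i in 1 2 3 4 5; do
        WORD="$WORD$(shuf -e -n 1 $PHONEMES)"
    done

    # Synthesise the phoneme string to a wav file using espeak's [[ ]] syntax.
    espeak -w word.wav "[[$WORD]]"

    # Play it back while cycling the three mouth images as a very rough lip-sync.
    aplay word.wav &
    for frame in 1.jpg 2.jpg 3.jpg 2.jpg 1.jpg; do
        display -window root "$frame"
        sleep 0.15
    done
    wait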

USA 21-30 March

I paid a mostly social visit to the USA from 21-30 March (yes, this was after being there only 10 days earlier for fGlitch), with stops in Chicago, New York and a surprise visit to Miami.

I had so much fun seeing all of the glitch/new media friends I’ve made over the years, making new friends, meeting folks IRL for the first time, getting lost on the seemingly infinite streets and eating quite a lot of food! Next time I visit I’ll try and see more of the West Coast as my last time doing that was in 2006 :-/

(Photos from Chicago, New York City and Miami.)

More photos here.