Sonification Studies/Pixel Waves

In October 2013, whilst visiting Barcelona to perform at Mutuo Centro De Arte as part of The Wrong, I had the honour of meeting Neil Harbisson and Moon Ribas, collectively known as The Cyborg Foundation.

Throughout the evening we began to see some similarities in how we interpret the things we see and hear, as well as how we handle data. For example, in Waiting for Earthquakes, Moon Ribas uses a device attached to her wrists to receive live data about earthquakes around the world, which she then interprets in a live dance performance. Neil Harbisson was born with achromatopsia, a condition that means he can only see in greyscale. In order to see, or at least perceive, colour, he had an “Eyeborg” implanted, which translates colour frequencies into sound frequencies.

Similarly, some of my work, and that of other digital and new media artists, looks at different ways of interpreting data beyond simple charts and data visualisations. For example, my Some of My Favourite Songs pieces from 2012 took the data of mp3 files and reinterpreted it as jpg image data.

The Octopus Project - Catalog
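If you're curious about the mechanics, the gist of that trick can be sketched in a few lines of Python. This isn't the original software, and the file name and the 512x512 canvas are just placeholders, but it shows the basic idea of treating the raw bytes of an mp3 as RGB pixel data:

```python
# A minimal sketch of the mp3-bytes-as-image idea, not the original
# software. Assumes NumPy and Pillow; "song.mp3" and the 512x512 canvas
# are placeholders.
import numpy as np
from PIL import Image

raw = np.frombuffer(open("song.mp3", "rb").read(), dtype=np.uint8)

width, height = 512, 512
needed = width * height * 3               # 3 bytes per RGB pixel
raw = np.resize(raw, needed)              # truncate or repeat the data to fit the canvas

pixels = raw.reshape((height, width, 3))  # read the bytes straight off as RGB values
Image.fromarray(pixels, "RGB").save("song.jpg")
```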

So, whilst their work is more advanced and in some cases takes a more performative approach, there are no doubt some similarities.

Later in the evening Neil very kindly did an audio portrait of my face. It’s really intriguing to hear what he sees (there was once a time when that sentence would’ve made no sense). Have a listen:


Kinda murky. Must be the hair.

Being the unique person that I am, it was not enough for me to simply hear what I look like. I needed to see what he hears when he looks at me. Using a modified version of the software that I used to make Some of My Favourite Songs (update available soon) I converted this audio portrait back into an image:

Audio Portrait

Yo Dawg!

Once again not being completely content with just seeing what he hears of what he sees (this is getting confusing), I started to explore methods to reinterpret this back into audio. Much has already been written about sonification, especially in the Sonification Handbook, which I highly recommend you read (I haven’t read all of it yet). For my experiment I wanted to use the red, green and blue (RGB) values of each pixel as the source for the audio. I assume that Neil Harbisson’s Eyeborg does not use the same method as, depending on the resolution of the image, it would take a totes cray cray amount of time to read each pixel, and then it would have to do this 24 times a second (or whatever frame rate he uses). However, my intention is not to reproduce, just reinterpret.

Using Pure Data, my weapon of choice, I created a simple program, available from my Github account, that reads the RGB values of each pixel in an image from left to right, bottom to top (an order defined by how [pix_data] works) at a user-defined rate. These values are then used to drive three oscillators ([osc~] objects), one for each colour. The RGB values are output in the range 0-1, so scaling is necessary to get something audible. This scaling can be set globally or per oscillator. Enough yapping, here’s the software:

audioimage

Red/Green/Blue denotes the value (0-1) of the current pixel. Current column/line relates to the X/Y position of the pixel currently being read. Scale is how much the pixel value is scaled by before it reaches the oscillator. Speed means one pixel is read every n milliseconds. This speed, however, becomes somewhat irrelevant once you get down to 0.1 milliseconds; by that point it’s going as fast as the computer and the program will allow.
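If Pure Data isn’t your thing, here’s roughly the same process sketched in Python. It’s a sketch, not the patch itself, and the scale and speed values are arbitrary stand-ins for the controls described above:

```python
# Rough Python equivalent of the pixel-reading patch (a sketch, not the
# actual software). Assumes NumPy, Pillow and SciPy; SCALE and PIXEL_MS
# stand in for the user-defined controls.
import numpy as np
from PIL import Image
from scipy.io import wavfile

SAMPLE_RATE = 44100
PIXEL_MS = 10    # how long each pixel sounds, i.e. the "speed" control
SCALE = 1000.0   # multiplier taking a 0-1 pixel value up to audible Hz

img = np.asarray(Image.open("portrait.jpg").convert("RGB")) / 255.0
height, width, _ = img.shape

samples_per_pixel = int(SAMPLE_RATE * PIXEL_MS / 1000)
t = np.arange(samples_per_pixel) / SAMPLE_RATE

chunks = []
# Left to right, bottom to top, mirroring the [pix_data] ordering.
for y in range(height - 1, -1, -1):
    for x in range(width):
        r, g, b = img[y, x]
        # One sine oscillator per colour channel, summed and attenuated.
        chunk = (np.sin(2 * np.pi * r * SCALE * t) +
                 np.sin(2 * np.pi * g * SCALE * t) +
                 np.sin(2 * np.pi * b * SCALE * t)) / 3.0
        chunks.append(chunk)

audio = np.concatenate(chunks)
wavfile.write("sonified.wav", SAMPLE_RATE, (audio * 32767).astype(np.int16))
```

Fair warning: at 10 ms per pixel even a modest image adds up to minutes of audio, so you’ll probably want to shrink the image first.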

With the software in place I began to look at which images produced the “best” sounds when sonified. Of course this is all subjective, but I was after results that weren’t completely unlistenable and perhaps had some kind of pattern or rhythm. Sonifying the reinterpreted image of my audio portrait produced these results. I added some text information for illustration purposes:

The results are somewhat noisy and unlistenable. Other images from the Some of My Favourite Songs series produced very similar audio results. I soon learnt that the garbage in, garbage out rule applies to this process as well: if the image being read is too noisy, i.e. if there is a lot of colour variation between neighbouring pixels, the resulting audio will be noisy too. If I wanted audio with patterns I’d have to use images with visual patterns.
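If you want to guess in advance how an image will fare, one rough check (not something built into the patch, just a back-of-the-envelope measure) is the average jump between consecutive pixels in the read order; the bigger the jump, the bigger the frequency leaps and the noisier the result:

```python
# Back-of-the-envelope "will this sonify noisily?" check. Assumes NumPy
# and Pillow; bigger pixel-to-pixel jumps mean noisier audio.
import numpy as np
from PIL import Image

def noisiness(path):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=float) / 255.0
    rows = img[::-1].reshape(-1, 3)              # bottom-to-top, left-to-right read order
    return np.abs(np.diff(rows, axis=0)).mean()  # mean jump between consecutive pixels, 0-1

print(noisiness("audio_portrait.jpg"))  # closer to 0 means smoother, more rhythmic material
```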

For the next stage in this experiment I turned to images by Joe Newlin. He remixed the original software that I used to make the Some of My Favourite Songs images into a JPG Synth. His modification allows him to send oscillators into the image generator, resulting in some awesome patterns! Go download it now now now! Using one of his example images I got these results (a toy sketch of the oscillators-into-image idea follows below):
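To give a flavour of what sending oscillators into an image generator can mean (this is a toy of my own, nothing like Joe’s actual patch), three sine waves at different rates driving the R, G and B channels already give you nicely banded, sonification-friendly images:

```python
# Toy oscillator-driven image generator, just to illustrate the concept
# (not Joe Newlin's JPG Synth). Assumes NumPy and Pillow; the rates are
# arbitrary.
import numpy as np
from PIL import Image

width, height = 512, 512
n = np.arange(width * height)    # pixel index in read order

r = (np.sin(n * 0.010) + 1) / 2  # slow red oscillator
g = (np.sin(n * 0.031) + 1) / 2  # faster green oscillator
b = (np.sin(n * 0.002) + 1) / 2  # very slow blue oscillator

pixels = (np.stack([r, g, b], axis=-1) * 255).astype(np.uint8)
Image.fromarray(pixels.reshape((height, width, 3)), "RGB").save("pattern.jpg")
```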

Much “better”, amirite? Here’s another experiment using one of the CóRM images:

So far, however, I had only dealt with digitally composed images. What do regular digital photographs sound like? During the merriment of the evening at the Cyborg Foundation they invited me to have my “regular” portrait taken in the self-proclaimed world’s smallest photography studio! Although it’s only greyscale, it produces some quite pleasing results.

For the final stage of this experiment I used colour photographs:

Although these images are visually complex, they produced “successful” results because the colours do not vary greatly.

It’s Just Noise!

Long-time readers will recall that I experimented with sonification back in 2010. In that experiment I tried composing images with the intention of them being sonified, with some success:

Now that I have developed a more robust approach and have a better understanding of programming in general, you can expect me to take this much further. I predict the development of an image-to-audio sequencer in the near future.