Late at Tate Britain: Echoes

A night exploring myths and mortality, inspired by Marguerite Humeau’s Echoes

Late at Tate Britain April explores the Ancient Egyptians’ relationship with mortality and its parallels with contemporary society, inspired by Marguerite Humeau. The Ancient Egyptians looked at preserving life through spirituality; in current times, do we try to achieve this through digital formats?

The series will kick off on 6 April with an evening exploring the current Art Now installation by Marguerite Humeau. The work entitled Echoes is conceived as a confrontation between life and death, with the gallery transformed into part temple, part laboratory for the industrial production of an elixir for eternal life.

glitChicago

glitChicago presents the work of 24 artists working with glitch in a wide variety of media. All have participated in the city’s glitch art scene, though they may come from other cities and indeed other countries. The two-month-long exhibition features wall installations by Melissa Barron, jonCates, Theodore Darst, A. Bill Miller, Jon Satrom, Lisa Slodki, and Paul Hertz and free-standing installations by Alfredo Salazar-Caro, Curt Cloninger, James Connolly and Kyle Evans, and Channel TWo.

On Friday, September 19 an evening of media performances will include work by A. Bill Miller, Antonio Roberts, James Connolly and Kyle Evans, Jason Soliday, Jeff Kolar, Joseph Y0lk Chiocchi, PoxParty (Jon Satrom and Ben Syverson), Nick Briz, I ♥ Presets (Rob Ray, Jason Soliday, Jon Satrom), Curt Cloninger, Nick Kegeyan, Shawné Michelain Holloway, jonCates, and stAllio!.

The following day, Saturday, September 20, UIMA will host a round table discussion looking at glitch art from an art historical perspective, asking the question: Once we induct glitch art into art history, is glitch art dead?

Maker Monday, 29th February

On 29th February I’ll be leading the workshop at Maker Monday at Birmingham Open Media.


Glitch Art describes the process of misusing and reappropriating hardware and software to make visual art. In this part theory, part practice workshop we will consider ways in which commonly used programs can be misused to interpret various data in different ways.

For the practical part of the workshop you’ll need a laptop/computer (Mac/Windows/Linux) with the following software installed:

Tickets for the event are free, so get in on it now! There’ll be pizza as well 🙂

TRANSFORMERS: A Code and Data-Driven Animation Screening, 6th February

On 6th February I’ll be part of the TRANSFORMERS screening happening at the College Art Association Conference in Washington DC.


Computer programming is an often invisible force that affects many aspects of our contemporary lives. Whether we are gathering our news, maintaining our libraries, or navigating our built environment, code shapes the interfaces we use and the information they connect to. Artists who work with these languages as material can critically excavate code and its effects. The works included in this screening are animation and video produced through the use and manipulation of code and/or data.

The selected works will be screened during CAA on Saturday, February 6th from 9:00am to 10:30am in the Media Lounge, and will simultaneously be available online through the New Media Caucus Vimeo Channel.

The screening is organised by Darren Douglas Floyd (Artist/Filmmaker), Mat Rappaport (Artist, Columbia College Chicago), and A. Bill Miller (Artist, University of Wisconsin-Whitewater). My contribution is a shorter live version of the Sonification Studies performance I did at glitChicago in 2014. I’ll update this post with the new video once the event is over. Video below:

Sonification Studies

On 19th September I did a Sonification Studies performance at glitChicago. I had previously done a demonstration/performance of this at Relearn in Brussels, Post-Modern Plant Life in Leamington Spa, and at September’s Digbeth First Friday at Vivid Projects, but the glitChicago performance was what I consider to be the first proper one: a culmination of my research, if you will.

The focus for me at this performance was to get a sense of rhythm. Previous performances experimented with using found image sources, which resulted in a somewhat chaotic performance. For the glitChicago performance I composed a series of images with repeating patterns.


I separated these images into four groups, one for each channel. There were no strict rules for what went into each group, but I thought about which would sound better as beats, which had similar colours, and so on. A. Bill Miller shot a bit of my performance, but now that I’m back in the UK with all my equipment (actually just a MIDI controller) I present to you a remake of my performance. Warning: it’s loud and there be flashing images.

I’d be really interested to see and hear what others make with the software, or what happens if they take the code and extend it somehow.

Pixel Player

Back in June 2014 I wrote about how, in 2013, after visiting The Cyborg Foundation in Barcelona, I became interested in exploring sonification. My experiments at that stage culminated in the production of the Pixel Waves Pure Data patch, which allows the sonification of images based on the colour/RGB values of individual pixels.

I spent the following months building and refining an update to the Pixel Waves software, with a focus on allowing multiple images to be played simultaneously. In a way, I wanted to create a sequencer but for images. After many months I’m happy to formally announce the release of the Pixel Player.


This software operates in a similar way to Pixel Waves, but with a focus on playing multiple images simultaneously. Instructions on getting started:

  • Create the GEM window
  • Click on the red button to load an image. Supported file types depend on your operating system, but generally jpg, gif and png file formats are supported
  • Click on the green start button and the pixels will start to be read
  • Drag the orange horizontal slider up to increase the master volume
  • Drag the orange vertical slider up on each pixel player to control its volume
  • Turn the knob to scale the pitch of the audio

The currently displayed/sonified pixel for each channel will be synchronised from the first channel. For this reason it is recommended that all of the input images used are the same dimensions.

This may sound like a lot to do but it becomes easy after a few attempts. To make things easier the loadimage.pd patch has inlets that you can use to control each channel with a midi controller, keyboard, or any other device. To expose the inlets increase the canvas size of the patch by around 10 pixels.

The software includes a video display output, which shows the current pixel colour. This can also be shown on the patch window by clicking the red display button. Flashing lights might not be to everyone’s taste, so this can be turned off. Due to this patch relying on [pix_data], the GEM window needs to be created, even if the pixel display isn’t used.
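Before the demo, here’s a rough Python sketch of what the patch’s channel logic boils down to. This is only an approximation of the idea, not the actual Pd patch: the image file names, channel volumes, pitch scale and read rate are all placeholder assumptions, and it renders a WAV file offline rather than playing live.

```python
# A rough sketch of the Pixel Player's channel logic (the real thing is a
# Pure Data patch). File names, volumes, pitch scale and speed are placeholders.
import numpy as np
from PIL import Image
import wave

SAMPLE_RATE = 44100
MS_PER_PIXEL = 40                 # how long each pixel "plays" for
PITCH_SCALE = 660.0               # the knob that scales pitch
MASTER_VOLUME = 0.8               # the orange horizontal slider
CHANNELS = {                      # one image and one volume slider per channel
    "beats.png": 1.0,
    "pads.png": 0.6,
    "texture.png": 0.4,
    "noise.png": 0.2,
}

def load_grey(path):
    """Load an image as greyscale values in the range 0-1."""
    return np.asarray(Image.open(path).convert("L"), dtype=np.float64) / 255.0

images = {path: load_grey(path) for path in CHANNELS}
samples_per_pixel = int(SAMPLE_RATE * MS_PER_PIXEL / 1000)
t = np.arange(samples_per_pixel) / SAMPLE_RATE

# Every channel follows the pixel index of the first channel, which is why
# images with the same dimensions behave most predictably.
first = next(iter(images.values()))
height, width = first.shape

chunks = []
for row in range(height):
    for col in range(width):
        mix = np.zeros_like(t)
        for path, volume in CHANNELS.items():
            img = images[path]
            # Wrap the shared index if a channel's image is smaller.
            value = img[row % img.shape[0], col % img.shape[1]]
            mix += volume * np.sin(2 * np.pi * value * PITCH_SCALE * t)
        chunks.append(MASTER_VOLUME * mix / len(CHANNELS))

audio = np.concatenate(chunks)
with wave.open("pixelplayer_sketch.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes((audio * 32767).astype(np.int16).tobytes())
```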

Enough yapping, what does it actually sound like?! Here’s a small demo, made using a combination of 40×20 images created in Inkscape and images modified using the Combine script by James Allen Munsch (written for Archive Remix. Remember that project?).

Please do give the patch a try and let me know what you think!

Sonification Studies/Pixel Waves

In October 2013, whilst visiting Barcelona for the performance at Mutuo Centro De Arte for The Wrong, I had the honour of meeting Neil Harbisson and Moon Ribas, collectively known as The Cyborg Foundation.

Throughout the evening we began to see some similarities in how we interpret things we see and hear, as well as how we handle data. For example, in Waiting for Earthquakes, Moon Ribas uses a device attached to her wrists to receive live data of earthquakes around the world. This data is then interpreted in a live dance performance. Neil Harbisson was born with Achromatopsia, a condition that only allowed him to see in greyscale. In order to see, or at least perceive colour, he had an “Eyeborg” implanted, which translates colour frequencies into sound frequencies.

Similarly, some of the work by myself and other digital and new media artists looks at different ways of interpreting data, beyond simple charts and data visualisations. For example, my piece(s) Some of My Favourite Songs from 2012 took the data of an mp3 file and interpreted it as jpg image data.

The Octopus Project - Catalog

So, whilst their work is more advanced and in some cases takes a more performative approach, there are no doubt some similarities.

Later in the evening Neil very kindly did an audio portrait of my face. It’s really intriguing to hear what he sees (there was once a time when that sentence would’ve made no sense). Have a listen:


Kinda murky. Must be the hair.

Being the unique person that I am it was not enough for me to simply hear what I look like. I needed to see what he hears that I look like. Using a modified version of the software that I used to make Some of My Favourite Songs (update available soon) I converted this audio portrait back into an image:

Audio Portrait

Yo Dawg!

Once again, not being completely content with just seeing what he hears of what he sees (this is getting confusing), I started to explore methods to again reinterpret this as audio. Much has already been written about sonification, especially in the Sonification Handbook, which I highly recommend you read (I haven’t read all of it yet). For my experiment I wanted to use the red, green and blue (RGB) values of each pixel as a source for the audio. I assume that Neil Harbisson’s Eyeborg does not use the same method as, depending on the resolution of the image, it would take a totes cray cray amount of time to read each pixel, and then it would have to do this 24 times a second (or whatever frame rate he uses). However, my intention is not to reproduce, just reinterpret.

Using Pure Data, my weapon of choice, I created a simple program, available from my Github account, that reads the RGB values of each pixel in an image from left to right, bottom to top (an order defined by how [pix_data] works) at a user-defined rate. These values are then used to power three oscillators/[osc~] objects – one for each colour. The RGB values are output from 0-1, so scaling is necessary to get something audible. This scaling can also be set by the user globally or per oscillator. Enough yapping, here’s the software:


Red/Green/Blue denotes the value (0-1) of the current pixel. Current column/line relates to the X/Y position of the pixel currently being read. Scale is how much the pixel value is scaled by before it reaches the oscillator. Speed means one pixel is read every n milliseconds, though this becomes somewhat irrelevant once you get down to 0.1 milliseconds; by that point it’s going as fast as the computer and program will allow.
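If it helps to see the mapping written out as code, here’s a minimal Python sketch of the same idea: one value per colour channel, scaled and fed into three sine oscillators, with one pixel read every n milliseconds. It’s an offline approximation rather than the actual Pd patch, and the file name, speed and scale values below are assumptions.

```python
# Minimal offline sketch of the Pixel Waves idea: map each pixel's RGB values
# (0-1) to the frequencies of three sine oscillators. File name, speed and
# scale values are assumptions.
import numpy as np
from PIL import Image
import wave

SAMPLE_RATE = 44100
MS_PER_PIXEL = 20                 # "speed": read one pixel every n milliseconds
SCALE = (880.0, 880.0, 880.0)     # scaling of each 0-1 value up to audible Hz

img = np.asarray(Image.open("input.png").convert("RGB"), dtype=np.float64) / 255.0
height, width, _ = img.shape

samples_per_pixel = int(SAMPLE_RATE * MS_PER_PIXEL / 1000)
t = np.arange(samples_per_pixel) / SAMPLE_RATE

chunks = []
# Read pixels left to right, bottom to top, as [pix_data] does.
for row in range(height - 1, -1, -1):
    for col in range(width):
        r, g, b = img[row, col]
        # Three oscillators, one per colour channel.
        tone = sum(np.sin(2 * np.pi * (value * scale) * t)
                   for value, scale in zip((r, g, b), SCALE)) / 3.0
        chunks.append(tone)

audio = np.concatenate(chunks)
with wave.open("sonified.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes((audio * 32767).astype(np.int16).tobytes())
```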

With the software in place I began to look at which images produced the “best” sounds when sonified. Of course this is all subjective, but I was after sounds that didn’t sound completely unlistenable and perhaps had some kind of pattern or rhythm. Sonifying the reinterpreted image of my audio portrait produced these results. I added some text information for illustration purposes:

The results are somewhat noisy and unlistenable. Other images from the Some of My Favourite Songs series produced very similar audio results. I soon learnt that the garbage in, garbage out rule applied to this process as well. If the image being read is too noisy, i.e. there is a lot of colour variation between closely located pixels, then the resulting audio will be noisy too. If I wanted audio with patterns I’d have to use images with visual patterns.
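If you want to test that rule yourself, it’s easy to generate a patterned image and a noisy one to compare. This is just an illustrative sketch; the sizes and stripe width are arbitrary choices.

```python
# Generate a striped test image and a noisy one to hear the difference the
# "garbage in, garbage out" rule makes. Sizes and stripe width are arbitrary.
import numpy as np
from PIL import Image

WIDTH, HEIGHT, STRIPE = 40, 20, 4

# Patterned image: vertical stripes alternating between red and blue.
x = np.arange(WIDTH)
stripes = ((x // STRIPE) % 2).astype(np.uint8) * 255
patterned = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)
patterned[:, :, 0] = stripes             # red stripes
patterned[:, :, 2] = 255 - stripes       # on a blue background

# Noisy image: a random colour in every pixel.
noisy = np.random.randint(0, 256, size=(HEIGHT, WIDTH, 3), dtype=np.uint8)

Image.fromarray(patterned).save("patterned.png")
Image.fromarray(noisy).save("noisy.png")
```

Run both through the patch and the striped image should give a regular, repeating tone, while the random one just gives noise.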

For the next stage in this experiment I turned to images by Joe Newlin. He remixed the original software that I used to make Some of My Favourite Songs into a JPG Synth. His modification allowed him to send oscillators into the image generator, resulting in some awesome patterns! Go download it now now now! Using one of his example images I got these results:

Much “better”, amirite? Here’s another experiment using one of the CóRM images:

So far, however, I had only dealt with digitally composed images. What do regular digital photographs sound like? During the merriment of the evening at the Cyborg Foundation they invited me to have my “regular” portrait taken in the self-proclaimed world’s smallest photography studio! Although it’s only greyscale, it produced some quite pleasing results.

For the final stage of this experiment I used colour photographs:

Although these images are visually complex, they produced “successful” results because the colours do not vary greatly.

It’s Just Noise!

Long-time readers will recall that I experimented with sonification back in 2010. In that experiment I tried composing images with the intention of them being sonified, with some success:

Now that I have developed a more robust approach and have better understanding of programming in general you can expect me to take this much farther. I predict the development of an image to audio sequencer in the near future.

It’s Just Noise!

As readers of my blog will know by now, you can easily import any data into Audacity and play it as audio. However, most data that you import and then play will just turn out as noise. That is simply because there’s too much noise in the image, i.e. too many colours and too much data. So, if you reduce the amount of colours and data, in theory you get something a little bit more pleasing to the ear. Experimentation time!
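For anyone who hasn’t tried it, Audacity does this via its raw data import. The same trick can be sketched in a few lines of Python: read the raw bytes of any file (an image here) and write them out as 8-bit PCM. The file names and the 8kHz sample rate are just assumptions for illustration.

```python
# A rough equivalent of importing a file as raw audio: treat the raw bytes of
# an image as 8-bit unsigned samples. File names and sample rate are assumptions.
import wave

with open("shapes.png", "rb") as f:
    raw = f.read()

with wave.open("shapes_as_audio.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(1)      # 8-bit samples: every byte of the file is one sample
    wav.setframerate(8000)
    wav.writeframes(raw)
```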

These were my initial two shapes that I worked with:

And the resulting sound, when looped a lil’ bit:

It’s not much but it shows definite potential. I’ve yet to work out how colour affects the sound, but I’m getting there.

The next logical step is, of course, to run a glitched image through this process! I worked with this image for the sound, then used ImageMagick to crop the image into 30x30px squares and FFmpeg to arrange them into a video.
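For anyone who wants to try the same thing, the crop-and-reassemble step can be scripted roughly like this, assuming ImageMagick’s convert and ffmpeg are on your PATH. The file names, tile size and frame rate are placeholders rather than the exact commands I used.

```python
# Tile an image into 30x30 pixel squares with ImageMagick, then play the tiles
# back as video frames with FFmpeg. File names and frame rate are placeholders.
import subprocess

# ImageMagick: -crop WxH with no offset tiles the whole image (tile_0000.png, ...).
subprocess.run(["convert", "glitched.png", "-crop", "30x30",
                "+repage", "tile_%04d.png"], check=True)

# FFmpeg: read the numbered tiles as frames and encode them into a video.
subprocess.run(["ffmpeg", "-y", "-framerate", "25", "-i", "tile_%04d.png",
                "-pix_fmt", "yuv420p", "tiles.mp4"], check=True)
```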

It’s noise, but I like it 🙂