Signal and Noise is the second in an annual series of interdisciplinary exhibitions and talks traversing the fuzzy boundaries between science and art. Led by the University of Westminster’s Transmedia Cluster, this year’s exhibition showcases work from students and staff from across the Westminster School of Media, Arts and Design.
The title of the exhibition provides a thematic focus for the participating artists, who have explored it from a multitude of different angles and disciplinary perspectives. From a computer program that produces large-scale drawings by continuously reassembling the lines of an IKEA instruction manual to scientific methods for differentiating between the stars and the noise in a photograph of the dark sky, the exhibition brings together a collection of engaging, surprising and thought-provoking work. Artworks that alter and translate conventionally intelligible messages or concepts into data that make sense only to digital or analogue devices will heighten, or blur, the distinction between Signal and Noise. The work challenges us to consider the ways in which we selectively process the constant flow of data and information from the world around us.
The aim of the series is to promote a culture of collaboration across disciplines of art, science and technology, to create a new generation of practitioners who identify ideas through the blending of mediums and practices. A truly collective enterprise, the exhibition has also been planned, facilitated and curated by a cross-disciplinary team of students and staff.
As readers of my blog will know by now, you can easily import any data into Audacity and play it back as audio. However, most data that you import and play will just come out as noise. That's simply because there's too much noise in the source, i.e. too many colours and too much data. So, in theory, if you reduce the number of colours and the amount of data you get something a little bit more pleasing to the ear. Experimentation time!
These were my initial two shapes that I worked with:
And the resulting sound, when looped a lil’ bit:
It’s not much, but it shows definite potential. I’ve yet to work out how colour affects the sound, but I’m getting there.
The next logical step is, of course, to run a glitched image through this process! I worked with this image for the sound, then used ImageMagick to crop the image into 30x30px squares and FFmpeg to arrange them into a video.
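The heavy lifting here was done by ImageMagick's `-crop 30x30` (which splits an image into a grid of tiles, with smaller tiles at ragged edges) and FFmpeg. For the curious, the geometry that crop computes can be sketched in Python; the function name and edge-handling are my assumptions about ImageMagick's behaviour, not code from the post.

```python
# A sketch of what ImageMagick's `-crop 30x30` computes: a grid of
# (x, y, width, height) crop boxes covering the image, left-to-right
# and top-to-bottom, with smaller boxes where 30 doesn't divide evenly.
def tile_geometry(width: int, height: int, tile: int = 30):
    """Yield (x, y, w, h) crop boxes for a tile x tile grid."""
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            yield (x, y, min(tile, width - x), min(tile, height - y))

# A 90x60 image splits cleanly into a 3x2 grid of full 30x30 tiles.
```

Each box can then be handed to an image library (or back to ImageMagick) to cut the actual tiles before sequencing them into frames.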
On Thursday 4th February I was in Stoke-on-Trent for BitJam. I still don’t have anything ready to show on stage, but I thought I’d use the night as a testing ground for some of my ideas. I wanted to investigate ways of interpreting what was happening around me. The main performance of the night was from a chap called Arctic Sunrise.
For my first test I fired up Alchemy and attempted to draw the music. Alchemy fortunately has a few tools that can make your sketches react to sound: Create > Mic Shapes and Affect > Mic Expand. Here’s the result of using both of them together:
And a nice little animation of those drawings, made using GIMP and OpenShot.
The next method was to use the Echobender script on a webcam pointing at the stage. Obvious errors in the sound recording actually kinda complemented the video. However, I’m a lil’ bit disappointed by the speed of the script at the moment. I may investigate doing something similar in Processing.
The final method involved a bit of post-processing. I made a short compilation of clips I shot at BitJam, then opened the video in a text editor and replaced loads of text with other text. The output was then re-encoded using Avidemux.
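The text-editor find-and-replace trick can be sketched more safely as a byte-level operation. This is a hedged approximation of the general databending technique, not the exact edit from the post; the function name, the search/replace strings, and the 4 KB header skip (to avoid mangling the container header so the file still opens for re-encoding) are all my own assumptions.

```python
# A minimal databending sketch: find-and-replace on a video file's raw
# bytes, leaving the first `skip` bytes alone so the container header
# survives and the glitched file can still be opened and re-encoded.
# (The 4096-byte skip and all names here are assumptions, not the
# exact edit from the post.)
def databend(data: bytes, old: bytes, new: bytes, skip: int = 4096) -> bytes:
    """Replace every occurrence of `old` with `new` after the header region."""
    header, body = data[:skip], data[skip:]
    return header + body.replace(old, new)

# glitched = databend(open("bitjam.avi", "rb").read(), b"and", b"AND!")
# open("glitched.avi", "wb").write(glitched)
```

Replacements of a different length shift every subsequent byte, which is exactly what produces the dramatic smearing when the decoder loses its place.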
So, there you have it! Now to figure out how I can turn this into some sort of performance.