Adventures in Vector Quantization

Ever since seeing Radio Dada by Rosa Menkman I’ve been trying to reproduce the style of compression glitches it uses.

Despite my limited knowledge about the production of the video, I do know that it uses compression artifacts found in the Cinepak codec. So, I set out to try and find a way of encoding a video with the Cinepak codec. If you’ve been following me you’ll know that I’ve asked for help on many fora and mailing lists, initially with little success.

Hidden somewhere in the documentation for MEncoder is a page detailing how to use Windows codecs on Linux for encoding. The copy of the Cinepak codec (iccvid.dll) that came with MEncoder/medibuntu was a bit broken so I had to use Google to download a new version.

Once I had that, I used MEncoder to convert a video to an AVI with the Cinepak codec (I’m using mencoder version 2:1.0~svn33951~natty):

mencoder infile.avi -ovc vfw -xvfwopts codec=iccvid.dll -oac mp3lame -o outfile.avi

Unfortunately for me this did not produce the compression artifacts that I was after. I tried re-encoding the video using the Cinepak codec several times, but this just made the video darker:

Cinepak encoding
(Original video)

Also, my attempt to encode the video using the Cinepak codec at a low bitrate didn’t work because, at least when using MEncoder, the codec doesn’t expose any encoding options. Drat! With that said, if anyone knows of a way of encoding with Cinepak at low/different bitrates on Linux using only freely available/open-source software, please do let me/the world know.

After this I felt very disheartened, until I did a little bit of digging into the actual codec. I discovered that this codec is one of a few based on Vector Quantization. I don’t know much about this but I felt that this must be the key. Other video codecs based on Vector Quantization include Sorenson, Indeo and VQA.
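In essence, vector quantization encodes an image by replacing each small block of pixels with the nearest entry from a limited codebook, which is exactly what produces those blocky, posterised artifacts. Here’s a toy sketch in Python (the two-entry codebook and block values are made up for illustration):

```python
def quantize(vectors, codebook):
    """Replace each vector with its nearest codebook entry (squared Euclidean distance)."""
    def nearest(v):
        return min(codebook, key=lambda c: sum((a - b) ** 2 for a, b in zip(v, c)))
    return [nearest(v) for v in vectors]

# A tiny hand-picked codebook: a "dark" block and a "light" block.
codebook = [(0, 0), (200, 200)]
blocks = [(10, 30), (180, 220), (90, 120)]
print(quantize(blocks, codebook))  # → [(0, 0), (200, 200), (200, 200)]
```

A real encoder like Cinepak builds the codebook from the video itself (with a k-means-style pass) and quantizes small pixel blocks; the smaller the codebook, the stronger the artifacts.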

I had no luck finding a way of converting to Sorenson and Indeo. However, I’ve had more luck with VQA. Wikipedia has a bit of information on the codec:

Vector Quantized Animation, known by its acronym VQA, is a file format originally developed by Westwood Studios for video encoding in their game The Legend of Kyrandia and Monopoly.

If you ever came across a Sega Saturn you’ll probably have come across videos encoded using VQA. As that Wikipedia article states, apart from the one used by Westwood Studios, only one VQA encoder exists: VQA Encoder v0.5 beta 2 by ugordan. Luckily it works perfectly using Wine (I’m using version 1.2.3-0ubuntu1~ppa1) on Ubuntu 11.04. You’ll have to download some additional DLLs; just do some research to find out which ones.

In order to use the software you need to convert your video to image files. I’ve had luck converting the video to PCX files using FFmpeg:

ffmpeg -i infile.avi -sameq outfile_%03d.pcx

Then, in VQA Encoder v0.5 beta 2, copy these options:

VQA encoder options

The program will automatically recognise that there are many images in the folder. After encoding has finished you should have a file called out_.vqa. In FFmpeg, execute:

ffmpeg -i out_.vqa -sameq outfile.avi

You should now have a video that has similar compression to the Cinepak codec used with low bitrates:

VQA encoding
(Original video)

Brilliant! Well, not so brilliant. The problems with using this software are as follows:

  • The software is no longer being updated
  • Because of this it could stop working at any time and no support would be offered
  • It can only output video at 640×400, which you can see by the way it crops the video
  • It isn’t open source, though that only matters if you exclusively use open source software

So, is there any other way to achieve these compression artifacts, preferably using open source software?


Mez and I recently finished a script called Echobender that automatically databends images.

Click to view on GitHub

To use it you’ll need:

  • A computer with Linux installed. I don’t have a Windows or Mac PC so I can’t test it on those
  • Sox. On Ubuntu you can install it via sudo apt-get install sox
  • Convert, which is part of ImageMagick. On Ubuntu you can install it via sudo apt-get install imagemagick

Once you have those installed just execute ./ from the terminal and then drop a .jpg or .bmp file into it. The output will be in a folder called “echo”.

If you look closely at the script you can see a way to convert any data into an image! I’ll leave that one up to you… The full source code is on GitHub for anyone interested.

Thanks to Imbecil’s MPegFucker script for much of the inspiration.

Changing Room at Eastside Projects

My part in the Changing Room exhibition ended on 6th December. For the benefit of those wishing to build in Second Life, here are my experiences of working in such an environment.


I was feeling ill so didn’t do any building in world but I did discover the joys of working collaboratively using Skype.


I was working in Eastside Projects as usual so decided to look around to see what inspiration I could draw from it. Although there are fewer barriers when creating work in Second Life, I wanted to create something that worked with the space or at least reflected the current show in some way. I began by documenting recurring themes throughout the current exhibition. I noticed that there were a lot of vertical lines, and this could also be seen in the initial work in the virtual space.

My current body of work is gearing towards visual projections and the manipulation of images and data/databending, so I wanted to do something that reflected this. At the same time, I wanted to move away from the viewer simply looking at the work and more towards them experiencing and being immersed in it. To that end I wanted to see what could be done with a VJ set, but within Second Life. There are already art exhibitions within Second Life, but I have found very few examples of performances, with the following being the best example:

Drawing inspiration from this video I wanted to see if I could take it further and create something more interactive.


I had a vague idea of what I was going to build. I wanted to create pod-like changing rooms that the viewer would climb into and then be treated to a visual experience.



I’m not that familiar with the advanced functions of the building tools available in Second Life, so this was very much a day of trial and error. I discovered that maths plays a big part in plotting out shapes as there are, as far as I’m aware, no tools for snapping objects. Also, stacking objects sometimes proved difficult. One possible trick, which involved changing the object type to Physical, occasionally resulted in my shapes falling through the ground!

Objects disappearing through the floor



Some people had been trying to get into the exhibition but soon discovered that their avatars were too big. This seems to be a problem with the way Second Life allows you to specify the size of your avatar. The Eastside Projects in-world building is (apparently) built to scale. Looking at the size and shape of some avatars it’s not hard to see why the door size could be a problem!

As part of trying to link the two pieces together I also went to perform in Two Short Plays by Liam Gillick at Eastside Projects.

Two Short Plays (by hellocatfood)

My idea was to film the performances, modify them and then project them into these pods. In that way, climbing into them was like climbing into a changing room where they would transform themselves. There would also be animated objects in the changing rooms, which I could control either via a set of controls within the space or by modifying the script on the objects.


I also wanted objects to bounce around in the environment, but that would require making them Physical objects, which had already caused problems. I was also finding that I had far too many unused shapes in the space and not enough time to find a use for them.

The build was otherwise progressing rather well.



This time I had trouble with video encoding. Using Ubuntu has its benefits, but I definitely had trouble encoding this video into a format suitable for Second Life to stream. Originally I had intended to reproduce the ogg export glitch that I had discovered but, as I feared, this glitch has been fixed in a recent update to Kdenlive. In the end OpenShot was able to render my movie into a suitable format, but it didn’t have the desired effect.

Stills (by hellocatfood)

Screenshots from the video

There were also difficulties in adding this video as a texture in Second Life. Within the space only Michael and Drew had the option to add media, but then I had to have access to the texture that would be used for the video. In the end I had to create a new blank texture (which required buying Linden Dollars) and then upload it. An easy fix but just not an ideal situation, especially with 20 mins before the deadline!


Although my work was not completely finished I do not think that was the aim of the Changing Room exhibition. I think this is something that will evolve and I would very much like to revisit this work and add to it and explore new areas.

View of finished work


In many ways this exhibition does mirror real-life exhibitions. Although there are obvious complexities in learning how to use a computer or a new program, this is mirrored in real life when presented with any new tools to work with. My liaising with Michael to see what was possible to build is very similar to liaising with gallery directors to see what is possible in a space.

Databending using Audacity

Thanks to some help on the Audacity forum I finally found out how to use Audacity to databend. Previously I’d been using mhWaveEdit, which has its limitations and just doesn’t feel as familiar as Audacity. From talk on the various databending discussion boards I found that people would often use tools like Cool Edit/Adobe Audition for their bends. Being on Linux and restricting myself to things that run natively (i.e. not under Wine) presented a new challenge. Part of my task was to replicate the methods others have found, but under Linux. My ongoing quest is to find things that only Linux can do, which I’m sure I’ll find when I eventually figure out how to pipe data from one program into another!

Here’s some of my current results using Audacity:

Gabe, Abbey, L and me (by hellocatfood)

Liverpool (by hellocatfood)

Just so you don’t have to go trawling through the posts on the Audacity forum, here’s how it’s done. It’s worth noting that this was done using Audacity 1.3.12-2 on Linux; versions on other operating systems may differ. Before I show you this it’s probably better if you work with an uncompressed image format, such as .bmp or .tif. As JPEGs are compressed data there’s always more chance of completely breaking a picture, rather than bending it. So, open up GIMP/your favourite image editor and convert it to an uncompressed format. I’ll be using this picture I took at a Telepathe gig a while back.

Next, download Audacity. You don’t need the LAME plugin as we won’t be exporting to mp3, though grab it if you plan to use that feature in the future. Once you have it open go to File > Import > Raw Data and choose your file. You’ll now be presented with options on how to import this raw data, which is where I would usually fall flat.

Import Raw Data


Under Encoding you’ll need to select either U-Law or A-Law (remember which one you choose). If you choose any other format you’ll be converting the data into that format. Although you do want to modify the data, this is bad because it’ll convert the header of the image file, thereby breaking the image; U/A-Law just imports the data. The other settings do have significance, but I won’t go into that here. When you’re ready press Import and you’ll see your image as data!

Image as sound


Press play if you dare, but I’d place money on it sounding like either white noise or Aphex Twin glitchy goodness. This is where the fun can begin. For this tutorial select everything from about five seconds into the audio onwards. The reason for this is that, just like editing an image in a text editor, the header is at the beginning of the file. Unless you know the size of the header and exactly where it ends (which you can find out with a bit of research), you can usually guess that it ends a few seconds into the audio. The best way to find out is to try it!
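If you’d rather not guess, the header size is easy to read out of the file itself. For a BMP, bytes 10–13 of the header store the offset of the pixel data as a little-endian integer. A small sketch (the hand-built header below is purely illustrative):

```python
import struct

def bmp_pixel_offset(data: bytes) -> int:
    """Bytes 10-13 of a BMP file hold the little-endian offset where pixel data begins."""
    if data[:2] != b'BM':
        raise ValueError('not a BMP file')
    return struct.unpack_from('<I', data, 10)[0]

# A typical 24-bit BMP: 14-byte file header + 40-byte info header = 54 bytes.
header = b'BM' + b'\x00' * 8 + struct.pack('<I', 54)
print(bmp_pixel_offset(header))  # → 54
```

Everything before that offset is the part you want to leave untouched.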

Anyway, highlight that section and then go to Effect > Echo

Apply the echo

Leave the default settings as they are and press OK

You’ll see that your audio has changed visually. It still won’t sound any better, but the magic happens when you export it back to an image file, which is the next step.
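The whole bend can also be sketched outside Audacity. The snippet below applies a crude delayed-feedback echo to every byte after the header while leaving the header intact; the delay and decay values are arbitrary stand-ins, not Audacity’s defaults, and this is a rough imitation of its Echo effect rather than the same algorithm:

```python
def echo_bend(data: bytes, header_size: int, delay: int = 4410, decay: float = 0.5) -> bytes:
    """Add a decayed copy of each earlier byte onto the byte `delay` positions later."""
    head, body = data[:header_size], bytearray(data[header_size:])
    for i in range(delay, len(body)):
        body[i] = min(255, body[i] + int(body[i - delay] * decay))
    return head + bytes(body)
```

Run it over a .bmp (with header_size taken from the file’s own header) and write the result back out with the same extension; that mirrors the import–bend–export round trip described above.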

Once you’re happy with your modifications go to File > Export. Choose a new location for your image and type in the proposed new file name but don’t press save just yet. You’ll need to change the export settings to match the import settings.


Change the file format to Other Uncompressed Files and then click on the Options button.

Export settings


Change the settings to match the ones above (or to A-Law if you imported as A-Law). With that all set, you can now press Save! If you entered a file extension when choosing a file name you’ll get a warning about the extension being incorrect, but you can ignore it and press Yes. If you didn’t enter a file extension, add the appropriate one once the export has finished. In my case I’d be adding .bmp to the end.

Here’s the finished image:



There are of course many different filters available in Audacity, so try each of them out! If you’re feeling really adventurous, try importing two or more different images and then exporting them as a single image.

Ubuntu Bug Jam


From Friday 2nd to Sunday many Ubuntu, Linux and Open Source enthusiasts descended upon the Linux Emporium to take part in the Ubuntu Bug Jam. In the words of an Ubuntu blogger, the Ubuntu Bug Jam is:

…a world-wide online and face-to-face event to get people together to fix Ubuntu bugs – we want to get as many people online fixing bugs, having a great time doing so, and putting their brick in the wall for free software. This is not only a great opportunity to really help Ubuntu, but to also get together with other Ubuntu fans to make a difference together, either via your LoCo team, your LUG, other free software group, or just getting people together in your house/apartment to fix bugs and have a great time.

This is the second time I’ve been to a bug jam. The first time I went I hadn’t even used Ubuntu, so only managed to report one bug and otherwise mostly focused on reporting stuff in Inkscape as I use it more often.

This time was a similar affair. Apart from testing out the beta of the next release of Ubuntu (the Karmic Koala) and asking for help in fixing bugs in my own system I mostly spent time testing bugs in Inkscape and suggesting features for future releases of Ubuntu.

Overall, I think reporting any bug in any package or program helps everyone, and one thing I really like about open source is its transparency and honesty about errors. That is, it’s not ashamed to admit that there are a few bugs here and there.

Open Source software in design

Seems like I’ve started a rather interesting discussion over at the Computer Arts Forum about the use of Open Source software in design.

I think the general consensus is that open-source apps such as Inkscape, GIMP and Blender will never replace their industry-standard counterparts, because there’s nothing wrong with those products in the first place. FOSS packages such as OpenOffice and Firefox (and to a lesser extent Ubuntu) have only really gained popularity because their counterparts are kinda rubbish. Neither Microsoft Office nor Internet Exploder is as standards-compliant as its FOSS counterpart and, in the case of Microsoft Office, you can save a lot of money by switching to OpenOffice, which, whilst it has its flaws, offers very similar functionality to Microsoft’s product at zero percent of the cost! Brilliant!

When I start planning workshops soon, I’m still going to plan them assuming that they don’t have the necessary software (not all schools have Photoshop-like software) so will offer the use of FOSS packages. I think education is where Open Source will find its place in terms of design. What do you all think?