Motion Interpolation for Glitch Aesthetics using FFmpeg part 2

Below are a few examples of how you can use FFmpeg’s minterpolate to create artworks with a glitch aesthetic.

You can read about how I used it for an artwork in this blog post. You can also grab the source file for these videos here. Give it a try yourself!

mc_mode=obmc:me_mode=bidir:me=fss

ffmpeg -i cat_rainbow_original.mp4 -filter:v "setpts=62.5*PTS,minterpolate='fps=25:mb_size=16:search_param=400:vsbmc=0:scd=none:mc_mode=obmc:me_mode=bidir:me=fss'" 005_mc_mode=obmc_me_mode=bidir_me=fss.mp4

mc_mode=obmc:me_mode=bidir:me=ds

ffmpeg -i cat_rainbow_original.mp4 -filter:v "setpts=62.5*PTS,minterpolate='fps=25:mb_size=16:search_param=400:vsbmc=0:scd=none:mc_mode=obmc:me_mode=bidir:me=ds'" 006_mc_mode=obmc_me_mode=bidir_me=ds.mp4

mc_mode=obmc:me_mode=bidir:me=hexbs

ffmpeg -i cat_rainbow_original.mp4 -filter:v "setpts=62.5*PTS,minterpolate='fps=25:mb_size=16:search_param=400:vsbmc=0:scd=none:mc_mode=obmc:me_mode=bidir:me=hexbs'" 007_mc_mode=obmc_me_mode=bidir_me=hexbs.mp4

mc_mode=obmc:me_mode=bidir:me=epzs

ffmpeg -i cat_rainbow_original.mp4 -filter:v "setpts=62.5*PTS,minterpolate='fps=25:mb_size=16:search_param=400:vsbmc=0:scd=none:mc_mode=obmc:me_mode=bidir:me=epzs'" 008_mc_mode=obmc_me_mode=bidir_me=epzs.mp4

This blog post is part of a series. Click the links below to see more examples of FFmpeg’s motion interpolation:

Motion Interpolation for Glitch Aesthetics using FFmpeg part 1

Below are a few examples of how you can use FFmpeg’s minterpolate to create artworks with a glitch aesthetic.

You can read about how I used it for an artwork in this blog post. You can also grab the source file for these videos here. Give it a try yourself!

mc_mode=obmc:me_mode=bidir:me=esa

ffmpeg -i cat_rainbow_original.mp4 -filter:v "setpts=62.5*PTS,minterpolate='fps=25:mb_size=16:search_param=400:vsbmc=0:scd=none:mc_mode=obmc:me_mode=bidir:me=esa'" 001_mc_mode=obmc_me_mode=bidir_me=esa.mp4

mc_mode=obmc:me_mode=bidir:me=tss

ffmpeg -i cat_rainbow_original.mp4 -filter:v "setpts=62.5*PTS,minterpolate='fps=25:mb_size=16:search_param=400:vsbmc=0:scd=none:mc_mode=obmc:me_mode=bidir:me=tss'" 002_mc_mode=obmc_me_mode=bidir_me=tss.mp4

mc_mode=obmc:me_mode=bidir:me=tdls

ffmpeg -i cat_rainbow_original.mp4 -filter:v "setpts=62.5*PTS,minterpolate='fps=25:mb_size=16:search_param=400:vsbmc=0:scd=none:mc_mode=obmc:me_mode=bidir:me=tdls'" 003_mc_mode=obmc_me_mode=bidir_me=tdls.mp4

mc_mode=obmc:me_mode=bidir:me=ntss

ffmpeg -i cat_rainbow_original.mp4 -filter:v "setpts=62.5*PTS,minterpolate='fps=25:mb_size=16:search_param=400:vsbmc=0:scd=none:mc_mode=obmc:me_mode=bidir:me=ntss'" 004_mc_mode=obmc_me_mode=bidir_me=ntss.mp4

This blog post is part of a series. Click the links below to see more examples of FFmpeg’s motion interpolation:

Motion Interpolation for Glitch Aesthetics using FFmpeg part 0

As you may have seen in this blog post I made use of FFmpeg’s minterpolate motion interpolation options to make all of the faces morph. There are quite a few options for minterpolate and many different combinations of options that can be used. I had to consult Wikipedia to figure out exactly what the different motion estimation algorithms were, but even with that information I couldn’t visualise how they would change the output. To add to this, how I’m using minterpolate isn’t a typical use case.

To make things easier for those wishing to use FFmpeg’s minterpolate to create glitch aesthetics I have compiled 36 videos each showing a different combination of processing options. The source video can be seen below and features two of my favourite things: cats (obtained from here) and rainbows.

I’ve slowed it down so that you can see exactly what’s in the video, but the original can be downloaded here.

The base script used for each video is:

ffmpeg -i cat_rainbow_original.mp4 -filter:v "setpts=62.5*PTS,minterpolate='fps=25:mb_size=16:search_param=400:vsbmc=0:scd=none:

In part two of March’s Development Update I explained why I set scd to none and search_param to 400. I could of course have documented what happens when all of the processing options are changed, but that would result in me having to make hundreds of videos! The options that were changed were mc_mode (motion compensation mode), me_mode (motion estimation mode), and me (motion estimation algorithm).
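If you want to generate the full set of combinations yourself, a loop like the one below would do it. This is a hedged sketch rather than the exact script I ran: the option values come from FFmpeg’s minterpolate documentation, and the output numbering/naming scheme is an assumption inferred from the file names used in these posts.

i=1
for me_mode in bidir bilat; do
  for mc_mode in obmc aobmc; do
    for me in esa tss tdls ntss fss ds hexbs epzs umh; do
      # 2 me_modes x 2 mc_modes x 9 me algorithms = 36 output videos
      ffmpeg -i cat_rainbow_original.mp4 -filter:v "setpts=62.5*PTS,minterpolate='fps=25:mb_size=16:search_param=400:vsbmc=0:scd=none:mc_mode=${mc_mode}:me_mode=${me_mode}:me=${me}'" "$(printf '%03d' "$i")_mc_mode=${mc_mode}_me_mode=${me_mode}_me=${me}.mp4"
      i=$((i+1))
    done
  done
done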

Test conditions

These videos were created using FFmpeg 7:4.1.4-1build2, installed from the Ubuntu repositories, on a Dell XPS 15 (2017 edition) with 16GB of memory, an i7 processor and an Nvidia GeForce GTX 1050 graphics card, all running on Ubuntu 19.10 using proprietary drivers.

I don’t have a Windows or Mac machine, and haven’t used other Linux distributions, so I can’t test these scripts under those conditions. If there are any problems getting FFmpeg running on your machine it’s best to contact the developers for assistance.

Observations

My first observation is that the esa motion estimation algorithm (me=esa) takes frikkin ages to complete! Each video using it took about four hours to process. I did consider killing the script but for completeness I let it run.

Using bilat me_mode produces the most chaotic results by far. Just compare 026_mc_mode=obmc_me_mode=bilat_me=epzs.mp4 to 008_mc_mode=obmc_me_mode=bidir_me=epzs.mp4 and you’ll see what I mean.

For a video of this length nearly all of the scripts (except for those using esa) took between 30 seconds and 1 minute to complete, and that’s on machines with and without a GPU. This is good news if you don’t want to have to carry around a powerhouse laptop all the time.

All of this reminds me a bit of datamoshing. It’s more predictable and controllable, but the noise and melty movement it creates, especially in some of the videos using the bilat me_mode, remind me of the bloom effect in datamoshing. This could be down to the source material, and I’d be interested to see experiments involving datamoshed videos.

Let’s a go!

With that all said, let’s jump into sharing the results. As there are 36 videos I’ll be splitting them over nine blog posts across nine days, with the last being posted on 28th March 2020. Each will contain the script I used as well as the output video. Links to each part can be found below:

(mis)Using FFmpeg’s Motion Interpolation Options

Towards the end of the Let’s Never Meet video the robotic faces slowly morph into something a little bit more human-like.

These faces continue to morph between lots of different faces, suggesting that when getting to know people you can never really settle on who they are. To make the faces morph I used motion interpolation to blend between each face. Here’s what Wikipedia has to say about motion interpolation:

Motion interpolation or motion-compensated frame interpolation (MCFI) is a form of video processing in which intermediate animation frames are generated between existing ones by means of interpolation, in an attempt to make animation more fluid, to compensate for display motion blur, and for fake slow motion effects.

For those that use proprietary software there are a few programs that can do this, including Twixtor and After Effects.

If, like me, you only use open source software there are a few options but they’re not integrated within a general post processing or video editing GUI.

slowmoVideo

slowmoVideo is an open source application which allows you to vary the speed of a video clip over time. I used this previously for the background images in the Visually Similar Artwork.

For Let’s Never Meet I did consider using slowmoVideo again. What I like about it is being able to vary the speed, and that it has a GUI. However, development on it seems kinda slow and, most importantly, it requires a GPU. Occasionally I find myself working on a machine that only has integrated graphics (i.e. no GPU), which makes using slowmoVideo impractical. So, I needed something that would reliably work on a CPU and produce similar, if not the same, visual results as slowmoVideo.

Butterflow

Butterflow is another piece of software for motion interpolation. It doesn’t have a native GUI but it has a nice set of command line options. Sadly it seems impossible to install on Linux. Many have tried, many have failed.

FFmpeg

Finally I tried FFmpeg. Pretty much all my artworks use FFmpeg at some point, whether as the final stage in compiling a Blender render or as the backend to a video editor or video converter. I’m already very familiar with how FFmpeg works and feel it can be relied upon to keep working and being developed in the future.

I actually first came across FFmpeg’s motion interpolation options sometime in late 2018, but only really cemented my understanding of how to use it in making Let’s Never Meet.

Going through FFmpeg’s minterpolate options was quite daunting at first. There are lots of options, each with a description of how it works, but I didn’t really understand what results they would produce. Nonetheless I mixed and matched settings until I produced something close to my liking.

The first step in making the morphed video was making the original-speed video.

I’ve slowed the above video down so you can see each frame, but if you want the original video you can download it here. It consisted of 47 faces/images, played one image per frame. In total it lasted 1.88 seconds and I needed to slow it down to at least x minutes, which is the length of the video.

Here is the code that I used:

ffmpeg -i lnm_faces_original.mp4 -filter:v "setpts=40*PTS,minterpolate='fps=25:scd=none:me_mode=bidir:vsbmc=1:search_param=400'" -y output.mp4

I’ll explain three of the important parts of this code.

setpts

The FFmpeg wiki has a good explanation of what setpts does:

To double the speed of the video, you can use:

ffmpeg -i input.mkv -filter:v "setpts=0.5*PTS" output.mkv

The filter works by changing the presentation timestamp (PTS) of each video frame. For example, if there are two successive frames shown at timestamps 1 and 2, and you want to speed up the video, those timestamps need to become 0.5 and 1, respectively. Thus, we have to multiply them by 0.5.

So, by using setpts=40*PTS I’m essentially slowing the video down by a factor of 40. For this video I took a guess at how much I’d need to multiply the speed of the faces video to make it match the length of the main video. If I wanted to be exact I’d need to do some maths: take the frame count of the main video (5268), divide it by the frame count of the faces video (47) and use the result (112.085106383) as the PTS multiplier.
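If you want to work that multiplier out from the command line, a small sketch like the one below would do it. The frame counts are the ones quoted above; the file name in the ffprobe line is a placeholder, and using bc for the division is my own choice rather than part of the original workflow.

# ffprobe can count the frames for you if you don't already know them:
#   ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of csv=p=0 main_video.mp4
target_frames=5268   # frame count of the main video
face_frames=47       # frame count of the faces video
multiplier=$(echo "scale=9; $target_frames / $face_frames" | bc)
echo "setpts=${multiplier}*PTS"   # the exact value to drop into the setpts filter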

scd

scd is probably the most important part of this code. It attempts to detect if there are any scene changes and then not perform any motion interpolation on those frames. In this scenario, however, I want to interpolate between every frame, regardless of whether they appear to be part of the same “scene”. If you leave scd at the default of fdiff and scd_threshold at 5.0, FFmpeg tries to decide if there’s enough difference between frames to count as a scene change. Here’s what that would’ve looked like:


ffmpeg -i faces.mp4 -filter:v "setpts=40*PTS,minterpolate='fps=25:me_mode=bidir:vsbmc=1:search_param=400'" -y lnm_faces_scd.mp4
(without setting scd the defaults are assumed)

Not ideal, so I disabled it by setting it to none.

search_param

I don’t quite understand this one, but I do understand how it affects the video. If I were to leave the setting at the default value of 32 then you can see that when it interpolates there isn’t much movement:


ffmpeg -i faces.mp4 -filter:v "setpts=40*PTS,minterpolate='fps=25:scd=none:me_mode=bidir:vsbmc=1:search_param=32'" -y search_param_32.mp4

With the value of 400 which I used:


ffmpeg -i faces.mp4 -filter:v "setpts=40*PTS,minterpolate='fps=25:scd=none:me_mode=bidir:vsbmc=1:search_param=400'" -y search_param_400.mp4

And with the slightly ridiculous value of 2000:


ffmpeg -i faces.mp4 -filter:v "setpts=40*PTS,minterpolate='fps=25:scd=none:me_mode=bidir:vsbmc=1:search_param=2000'" -y search_param_2000.mp4

The biggest difference is clearly between setting the search_param from 32 to 400. At 2000 there’s only minor differences, though this may change depending on your source input.

It’s morphin’ time!

With all the settings of minterpolate now set I created the final video:


(I reduced the quality of the video a little bit to save on bandwidth)

I quite like the end results. It doesn’t look the same as the output of slowmoVideo, in that the morphing happens in blocks rather than the dust-grain look of slowmoVideo’s output. However, in using FFmpeg I can now use a familiar program that works on the CPU, even if it does take a long time!

Producing audio for Let’s Never Meet

For the majority of my career in art I’ve been primarily known for my visual artwork. I’ve dabbled in making noises with my Sonification Studies performances (which may make a comeback at some point) but it’s only since my 2018 performance at databit.me that I’ve regularly made and performed music.

On the performance side I’ve mostly used TidalCycles. You may have seen that I have been doing live streams of my rehearsals.

Outside of live coding I’ve spent most of my time getting to grips with software-based synthesisers and DAWs. When asking for advice on this most people told me to use software like Ableton. What these well-meaning people may not realise is that I exclusively use (Ubuntu) Linux and only open source software. This gives me the freedom that open source grants, but boy does it sometimes cause headaches! Plenty of people use the open source options available to them, but this approach is still the road less travelled, and so I’ve found myself sometimes asking lots of questions and either not getting a response or getting the response that what I’m trying to achieve is not possible.

And so for the last year or so I’ve been creating workflows that work for me. For this I’ve been using Ardour, which is a pretty good cross-platform DAW. So far I’ve produced soundtracks to two of my artworks, We Are Your Friends and Let’s Never Meet. In this Development Update I’ll go over a little trick I learnt whilst making the soundtrack for Let’s Never Meet.

In short, Let’s Never Meet is about meeting people over the internet. The soundtrack is actually a remix of an Android alarm ringtone.

It’s not an alarm tone that I use myself, but it was ambient enough to work in an outdoor setting for an extended period without getting annoying. Plus, using a sample from my phone just somehow felt appropriate, if you know what I mean. After many, many hours of producing, my remix sounded a bit like this:

I was really happy with the results but it felt like there was something missing. It was pretty samey throughout and I think there needed to be some kind of buildup or change in pace. To address this I decided to add some percussion. I turned to the glitch sample set that is downloaded when you install TidalCycles. It has a nice percussive quality and definitely sounds glitchy and electronic, again in keeping with the digital theme of the piece.

As for playing these samples, I did consider manipulating them in TidalCycles and importing the whole recording into Ardour, but I also wanted to get better with Ardour so I sought a solution within that software. The glitch pack contains eight samples and I needed to be able to load them into Ardour to trigger/play at will. The drumkv1 plugin is the perfect solution to this.

It’s a sampler where you assign samples to midi notes. To play the notes you could use a midi keyboard, send the notes from Pure Data, or use basically any software that can send midi. I decided to use the x42 step sequencer to input the midi notes. It’s a very simple step sequencer originally built for the MOD platform but, because it’s an LV2 plugin, it can run in any host that supports it.

Using this sequencer I could easily create an eight-step loop that starts simple and builds up with more drums over time.

With the samples assigned to midi notes I just needed a way to press the pads in the step sequencer. I have two physical controllers, a Launchpad X and an MPK Mini. The latter only has two rows of four drum pads. The former is an 8×8 grid, but I can’t yet program it properly to work with the software I use (more on that another time). In any case, when I looked into how to use the Launchpad X with x42, the plugin’s author, Robin Gareus, told me that it’d never be possible because x42 doesn’t accept midi input 🙁

I accepted that using a software or hardware midi controller was a no-go. I would have to use a mouse, which wasn’t ideal, but it would work. The plugin’s author did recommend that I look into BSequencer. It appears to accept midi input but, with a deadline looming, I didn’t want to spend more time on this by learning yet another piece of software.

Using my mouse in Ardour I started to record the input of me playing the step sequencer, but I noticed the midi notes from x42 weren’t being recorded.

I found this very strange. drumkv1 was blinking to show it was receiving midi but nothing was being recorded. After some research I discovered that this was because Ardour only records external midi. When I loaded x42 as a plugin within Ardour it was sending midi internally. To get around this there are two solutions:

I used Carla as a plugin host to load x42 and then sent the midi output to the correct track in Ardour.

Carla showing x42 being connected to Ardour

This worked, but I was getting a lot of latency with the input and the notes didn’t align properly. This is probably easy to solve by tuning my system to reduce latency (I already use the realtime kernel), or maybe it was something that I was doing wrong, but again, with a looming deadline I didn’t want to do anything drastic and time-consuming.

The second option was to send the output of x42 out into another application and then have that external application send its midi input into Ardour. To do this I loaded a2jmidid, connected the track’s midi output into it, and then connected the output of a2jmidid into the track in Ardour.

Screenshot showing ardour connecting to a2jmidid and back again

When I started up x42 again in Ardour and started clicking on its pads it all worked as expected!
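For anyone trying to reproduce this routing from a terminal rather than a patchbay GUI, it would look roughly like the sketch below. The commands (a2jmidid, jack_lsp, jack_connect) are real JACK utilities, but the port names are placeholders I’ve invented for illustration; the actual names vary per system, so check jack_lsp, and the connections can just as easily be made in qjackctl or Carla.

a2jmidid -e &                 # bridge ALSA midi ports into JACK (runs in the background)
jack_lsp | grep -i midi       # list the real midi port names on your system
# send the x42 track's midi out through the ALSA bridge...
jack_connect "ardour:x42 sequencer/midi_out 1" "a2j:Midi Through [14] (playback): Midi Through Port-0"
# ...and bring it back in as "external" midi that Ardour will happily record
jack_connect "a2j:Midi Through [14] (capture): Midi Through Port-0" "ardour:percussion/midi_in 1"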

After all of that effort I recorded myself building up the percussion. Here’s the finished track 🙂

I’ve been having a lot of fun making music, so expect more of it from me in the future.

Adventures in Vector Quantization

Ever since seeing Radio Dada by Rosa Menkman I’ve been forever trying to reproduce the style of compression/glitches it uses.

In my limited knowledge about the production of the video, I do know that it uses compression artifacts found in the Cinepak codec. So, I set out to find a way of converting a video to one that uses the Cinepak codec. If you’ve been following me you’ll know that I’ve asked for help on many fora and mailing lists, initially with little success.

Hidden somewhere in the documentation for MEncoder is a page detailing how to use Windows codecs on Linux for encoding. The copy of the Cinepak codec (iccvid.dll) that came with MEncoder/medibuntu was a bit broken so I had to use Google to download a new version.

Once I had that I used MEncoder to convert a video to an avi with the Cinepak codec. (I’m using mencoder version 2:1.0~svn33951~natty):

mencoder infile.avi -ovc vfw -xvfwopts codec=iccvid.dll -oac mp3lame -o outfile.avi

Unfortunately for me this did not produce the compression artifacts that I was after. I tried re-encoding the video using the Cinepak codec several times but this just made the video darker:

Cinepak encoding
(Original video)

Also, my attempt to encode the video using the Cinepak codec with a low bitrate didn’t work because, at least when using MEncoder, the codec doesn’t have any encoding options. Drats! With that said, if anyone knows of a way of encoding using Cinepak with low/different bitrates on Linux, using only freely available/open source software, please do let me/the world know.

After this I felt very disheartened, until I did a little bit of digging into the actual codec. I discovered that this codec is one of a few based on Vector Quantization. I don’t know much about this, but I felt that it must be the key. The video codecs that are based on Vector Quantization are Sorenson, Indeo and VQA.

I had no luck finding a way of converting to Sorenson and Indeo. However, I’ve had more luck with VQA. Wikipedia has a bit of information on the codec:

Vector Quantized Animation, known by its acronym VQA is a file format originally developed by Westwood Studios for video encoding in their game The Legend of Kyrandia and monopoly.

If you ever came across a Sega Saturn you probably will have come across videos encoded using VQA. As that Wikipedia article states, apart from the one used by Westwood Studios, only one VQA encoder exists: VQA Encoder v0.5 beta 2 by ugordan. Luckily it works perfectly using Wine (I’m using version 1.2.3-0ubuntu1~ppa1) on Ubuntu 11.04. You’ll have to download some additional DLLs; just do some research to find out which ones.

In order to use the software you need to convert your video to image files. I’ve had luck with converting the video to PCX files using FFMPEG:

ffmpeg -i infile.avi -sameq outfile_%03d.pcx

Then, in the VQA Encoder v0.5 beta 2 copy these options:

VQA encoder options

The program will automatically recognise that there are many images in the folder. After encoding has finished you should have a file called out_.vqa. In FFMPEG execute:

ffmpeg -i out_.vqa -sameq outfile.avi

You should now have a video that has similar compression to the Cinepak codec used with low bitrates:

VQA encoding
(Original video)

Brilliant! Well, not so brilliant. The problems with using this software are the following:

  • The software is no longer being updated
  • Because of this it could stop working at any time and no support would be offered
  • It can only output video at 640×400, which you can see by the way it crops the video
  • It isn’t open source, though that only matters if you exclusively use open source software

So, is there any other way to achieve these compression artifacts, preferably using open source software?

What Glitch? scripts

For the What is Your Glitch? videos I wanted to build up on some of the extensive work that has already gone into the documentation, deconstruction and glitching of file formats. Rosa Menkman has already done a great job of documenting some of the more well-known file format glitches in the Vernacular of File Formats, which I recommend you all read. For this exercise I wanted to explore some of the more obscure file formats. Using open source software and Ubuntu has given me access to a wealth of programs that can still generate obscure file formats, such as pcx, pix and sgi. Through these experiments I also found inconsistencies in the way that different programs generate files, which is evident through my decision to use GIMP to convert files rather than Imagemagick in some of the scripts. Enough chit-chat, download the scripts!

Code hosted on GitHub

The method of glitching used in most of the scripts is the much-documented find and replace method. If you take a look in the scripts – and I encourage you to do so – you can change the characters that are being searched for and replaced. I’ve simply chosen characters that are sure to get results and are less likely to completely destroy the file.
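If you want to see the idea at its simplest before diving into the scripts, the sketch below shows the find-and-replace method on a single image. This isn’t one of the What Glitch? scripts: the file names, the byte offset used to protect the header and the characters being swapped are all placeholders you’d change for your own material.

convert input.png glitchme.pcx                                 # convert to an uncompressed/obscure format first
head -c 1000 glitchme.pcx > glitched.pcx                       # keep the start of the file so the header survives
tail -c +1001 glitchme.pcx | sed 's/a/e/g' >> glitched.pcx     # find and replace characters in the rest of the data
convert glitched.pcx result.png                                # convert back to something image viewers understand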

Required Dependencies

Each script has its own set of dependencies, but to ensure you can run each one you’ll need the following:

  • Sed
  • GIMP – I use 2.71 beta available for Ubuntu from this ppa. Other versions remain untested
  • Imagemagick
  • GlitchSVG
  • FFMPEG
  • Mplayer
  • WebP

Basic Usage

1. Make the file executable: In a terminal type chmod +x [name of script] (e.g. what_glitch_webp.sh)
2. Run ./what_glitch_webp.sh in a terminal window
3. Drop a video file into terminal window and press Enter
4. Get a cup of tea

Notes

  • The scripts have only been tested on Ubuntu 10.10. If you are able to get them working with other operating systems please feel free to share your techniques
  • These scripts seem to work best with avi video files that are 24 or 25 frames per second. Files that are 30 frames per second get out of sync with the audio
  • Make sure the name of the directory containing the video to glitch doesn’t contain spaces e.g. “untitled_folder” instead of “untitled folder”
  • The video needs audio in order for this script to work. If you know what you’re doing you can edit parts of this script for it to work on files that have no audio
  • As these scripts process each frame of a video file they will take a very long time to complete. It is recommended for use only on small video clips!

These scripts by no means even begin to cover all of the image file formats available. There were a few formats that were not as easy to batch-process or were simply too large to process, such as xpm and xbm. For these you’ll have to do it manually or explore other ways of batch processing. They’re also not the most efficient of scripts. Some way into processing 400 video frames the script would slow down a lot. I welcome any bug fixes or suggestions on fixing this 😉

There’s still plenty of undiscovered glitches out there in the wild just waiting to be hunted down and exploited. I encourage anyone, everyone and their mother to pick from this long, but by no means complete list of image file formats and to find a way to glitch them!

Echobender

Mez and I recently finished a script called Echobender that automatically databends images.

Click to view on GitHub

To use it you’ll need:

  • A computer with Linux installed. I don’t have a Windows or Mac PC so I can’t test it on those
  • Sox. On Ubuntu you can install it via sudo apt-get install sox
  • Convert, which is part of ImageMagick. On Ubuntu you can install it via sudo apt-get install imagemagick

Once you have those installed just execute ./echobender.sh from the terminal and then drop a .jpg or .bmp file into it. The output will be in a folder called “echo”.

If you look closely at the script you can see a way to convert any data into an image! I’ll leave that one up to you… The full source code is on GitHub for all those interested.
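In the meantime, here’s a rough sketch of the general approach: dump the pixels as raw bytes, run an audio effect over them, then read the bytes back as pixels. This is not the actual Echobender code (grab that from GitHub); the file names, image size and sox echo parameters are assumptions purely for illustration.

mkdir -p echo
convert input.bmp rgb:pixels.raw      # dump the image's pixels as raw bytes
# treat the bytes as u-law audio and run them through sox's echo effect
sox -t raw -e u-law -b 8 -c 1 -r 48000 pixels.raw -t raw -e u-law -b 8 -c 1 -r 48000 echoed.raw echo 0.8 0.88 60 0.4
# reinterpret the processed bytes as pixels again; the size must match the source image
head -c $((640*480*3)) echoed.raw | convert -size 640x480 -depth 8 rgb:- echo/output.bmp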

Thanks to Imbecil‘s MPegFucker script for much of the inspiration.

Databending using Audacity

Thanks to some help on the Audacity forum I finally found out how to use Audacity to databend. Previously I’d been using mhWaveEdit, which has its limitations and just doesn’t feel as familiar as Audacity. From talk on the various databending discussion boards I found that people would often use tools like Cool Edit/Adobe Audition for their bends. Being on Linux and restricting myself to things that run natively (i.e. not under Wine) presented a new challenge. Part of my task was to replicate the methods others have found, but under Linux. My ongoing quest is to find things that only Linux can do, which I’m sure I’ll find when I eventually figure out how to pipe data from one program into another!

Here’s some of my current results using Audacity:

Gabe, Abbey, L and me (by hellocatfood)

Liverpool (by hellocatfood)

Just so you don’t have to go trawling through the posts on the Audacity forum, here’s how it’s done. It’s worth noting that this was done using Audacity 1.3.12-2 on Linux. Versions on other operating systems may be different. Before I show you this, it’s probably better if you work with an uncompressed image format, such as .bmp or .tif. As jpgs are compressed data there’s always more chance of completely breaking a picture, rather than bending it. So, open up GIMP/your favourite image editor and convert your image to an uncompressed format. I’ll be using this picture I took at a Telepathe gig a while back.

Next, download Audacity. You don’t need the lame plugin as we won’t be exporting to mp3, though grab it if you plan to use that feature in the future. Once you have it open go to File > Import > Raw Data and choose your file. What you’ll now be presented with are options on how to import this raw data, which is where I would usually fall flat.

Import Raw Data

Under Encoding you’ll need to select either U-Law or A-Law (remember which one you choose). If you choose any other format you’ll be converting the data into that format. Whilst you do want to modify the data, this is bad because it’ll convert the header of the image file, thereby breaking the image. U/A-Law just imports the data as it is. The other settings do have significance but I won’t go into that here. When you’re ready press Import and you’ll see your image as data!

Image as sound

Press play if you dare, but I’d place money on it sounding like either white noise or Aphex Twin glitchy goodness. This is where the fun can begin. For this tutorial, select everything from about five seconds into the audio onwards. The reason for this is that, just like when editing an image in a text editor, the header is at the beginning of the file. Unless you know the size of the header and exactly where it ends (which you can find out with a bit of research), you can usually guess that it ends within the first few seconds of the audio. The best way to find out is to try it!

Anyway, highlight that section and then go to Effect > Echo

Apply the echo

Leave the default settings as they are and press OK

You’ll see that your audio has changed visually. It still won’t sound any better, but the magic happens when you export it back to an image file, which is the next step.

Once you’re happy with your modifications go to File > Export. Choose a new location for your image and type in the proposed new file name but don’t press save just yet. You’ll need to change the export settings to match the import settings.


Change the file format to Other Uncompressed Files and then click on the Options button.

Export settings

Change the settings to match the ones above (or to A-Law if you imported as A-Law). With that all set you can now press Save! If you entered a file extension when you were choosing a file name you’ll get a warning about the file extension being incorrect, but you can ignore it and press Yes. If you didn’t choose a file extension, add the appropriate extension to the file once it has finished exporting. In my case I’d be adding .bmp to the end.

Here’s the finished image:

Freaky!

There are of course many different filters available in Audacity, so try each of them out! If you’re feeling really adventurous try importing two or more different images and then exporting them as a single image.

Comments on this post are now closed. If you need help on this try the Audacity forum

Ubuntu Bug Jam

From Friday 2nd to Sunday many Ubuntu, Linux and Open Source enthusiasts descended upon the Linux Emporium to take part in the Ubuntu Bug Jam. In the words of an Ubuntu blogger, the Ubuntu Bug Jam is:

…a world-wide online and face-to-face event to get people together to fix Ubuntu bugs – we want to get as many people online fixing bugs, having a great time doing so, and putting their brick in the wall for free software. This is not only a great opportunity to really help Ubuntu, but to also get together with other Ubuntu fans to make a difference together, either via your LoCo team, your LUG, other free software group, or just getting people together in your house/apartment to fix bugs and have a great time.

This is the second time I’ve been to a bug jam. The first time I went I hadn’t even used Ubuntu, so only managed to report one bug and otherwise mostly focused on reporting stuff in Inkscape as I use it more often.

This time was a similar affair. Apart from testing out the beta of the next release of Ubuntu (the Karmic Koala) and asking for help in fixing bugs in my own system I mostly spent time testing bugs in Inkscape and suggesting features for future releases of Ubuntu.

Overall, I think reporting any bug in any package or program helps everyone, and one thing I really like about open source is its transparency and honesty about its errors. That is, it’s not ashamed to admit that there are a few bugs here and there.