Create jpgs in Pure Data

For Some of My Favourite Songs I utilised Pure Data Extended (I’m using a beta version) to read the audio files and then save them as images. Pure Data is usually used for producing music and/or generative live visuals, so using it to produce jpg images from almost nothing, or from random input data, is quite new to me!

In search of a jpg header

The most important part of this process is knowing how to construct and apply a jpg header to data. Wikipedia informed me that all jpg images begin with FF D8. I thought that all I would need to do was use a hex editor, such as GHex or Bless Hex Editor, to add those byte values to a file.

Unfortunately this is not the case at all. There’s so much more in a jpg header: Huffman tables, quantization tables, bytes defining the width and height of the image, and plenty more that I still don’t quite understand.
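You can get a feel for this structure by dumping the first few bytes of any jpg you have lying around (photo.jpg here is just a placeholder):

[bash]hexdump -C photo.jpg | head    # the dump starts with ff d8, followed by further marker segments[/bash]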

I attempted to grab data from the beginning of a random jpg file, but this included lots of extraneous data such as camera make, program(s) used to modify the photo, gps data and creation date. This data amounted to several kilobytes, which is far too much data for a header. What I needed was a “vanilla” or plain header that I could apply to any file.

mesmeon showed me the HEADer REMIX project by Ted Davis. The header values on the left of the screen are used for glitching every image, be it the default image or one taken by a user.

I saved the default image, manually extracted the image header, ran it through exiftool and ended up with a header for a 640×480 image that is only 588 bytes!
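If you want to do something similar yourself, the rough shape of it on the command line is something like this (only a sketch; the 588-byte figure is simply what I ended up with and will vary from image to image):

[bash]exiftool -all= -o stripped.jpg default.jpg    # strip EXIF and other metadata
head -c 588 stripped.jpg > header.bin          # keep everything before the compressed image data[/bash]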

Enter Pure Data

Now that I had a vanilla header I had to devise a way to use it in Pure Data. The [binfile] object allows the reading and writing of binary data. Adding data to [binfile] is a case of sending a message containing numbers to the object.

[binfile] reads and outputs data as decimal values, i.e. numbers from 0 to 255. I needed to find a way to get the decimal values of the vanilla header into a message box. Martin Meredith helped me with this whilst we were tackling bugs at the Ubuntu Global Jam. Using hexdump I was able to output all of the byte values as decimals.

[bash]hexdump -v -e '1/1 "%02u "' filename.here > decimalvalues.txt[/bash]

[bash toolbar=”true”]255 216 255 219 00 132 00 03 02 02 03 02 02 03 03 03 03 04 03 03 04 05 08 05 05 04 04 05 10 07 07 06 08 12 10 12 12 11 10 11 11 13 14 18 16 13 14 17 14 11 11 16 22 16 17 19 20 21 21 21 12 15 23 24 22 20 24 18 20 21 20 01 03 04 04 05 04 05 09 05 05 09 20 13 11 13 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 255 192 00 17 08 01 224 02 128 03 01 17 00 02 17 01 03 17 01 255 196 01 162 00 00 01 05 01 01 01 01 01 01 00 00 00 00 00 00 00 00 01 02 03 04 05 06 07 08 09 10 11 16 00 02 01 03 03 02 04 03 05 05 04 04 00 00 01 125 01 02 03 00 04 17 05 18 33 49 65 06 19 81 97 07 34 113 20 50 129 145 161 08 35 66 177 193 21 82 209 240 36 51 98 114 130 09 10 22 23 24 25 26 37 38 39 40 41 42 52 53 54 55 56 57 58 67 68 69 70 71 72 73 74 83 84 85 86 87 88 89 90 99 100 101 102 103 104 105 106 115 116 117 118 119 120 121 122 131 132 133 134 135 136 137 138 146 147 148 149 150 151 152 153 154 162 163 164 165 166 167 168 169 170 178 179 180 181 182 183 184 185 186 194 195 196 197 198 199 200 201 202 210 211 212 213 214 215 216 217 218 225 226 227 228 229 230 231 232 233 234 241 242 243 244 245 246 247 248 249 250 01 00 03 01 01 01 01 01 01 01 01 01 00 00 00 00 00 00 01 02 03 04 05 06 07 08 09 10 11 17 00 02 01 02 04 04 03 04 07 05 04 04 00 01 02 119 00 01 02 03 17 04 05 33 49 06 18 65 81 07 97 113 19 34 50 129 08 20 66 145 161 177 193 09 35 51 82 240 21 98 114 209 10 22 36 52 225 37 241 23 24 25 26 38 39 40 41 42 53 54 55 56 57 58 67 68 69 70 71 72 73 74 83 84 85 86 87 88 89 90 99 100 101 102 103 104 105 106 115 116 117 118 119 120 121 122 130 131 132 133 134 135 136 137 138 146 147 148 149 150 151 152 153 154 162 163 164 165 166 167 168 169 170 178 179 180 181 182 183 184 185 186 194 195 196 197 198 199 200 201 202 210 211 212 213 214 215 216 217 218 226 227 228 229 230 231 232 233 234 242 243 244 245 246 247 248 249 250 255 218 00 12 03 01 00 02 17 03 17 00 63[/bash]

(The output is sent to a text file for ease of copy/pasting)

With this output I copy/pasted the values into a message box, and whenever I needed to add a jpg header to a file I clicked on the message box! To then write the file I sent the message [write filename.jpg( to the [binfile] object.

A jpg header in Pure Data

Using this data alone you may notice that the jpg doesn’t open in certain image viewers, or appears blank/black. That is because all that has been added so far is the header; image data is also needed! For this I added a few [metro]s to generate random numbers between 0 and 255. The output image now looks a little bit more colourful.

jpg created by Pure Data

For some websites and image viewers the End Of Image bytes (FF D9/255 217) need to be added in order for the image to be viewed properly. To start the process again send [clear( to [binfile]; this clears all binary data. Below is all of this theory put into one patch.
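If you want to test the same idea outside of Pure Data, here is a rough command-line sketch of it (assuming the vanilla header has been saved to a file called header.bin):

[bash]cat header.bin > random.jpg                       # start with the vanilla header
head -c $((640*480)) /dev/urandom >> random.jpg   # append random bytes as "image data"
printf '\xff\xd9' >> random.jpg                   # finish with the End Of Image marker (FF D9)[/bash]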

The finished Pure Data patch

Pure Data [binfile]

Generate jpg images – click to download

To use it, first click on the button to start the jpg file, then click the toggle button to add lots of random data. This may take a minute or so. Once done, turn off the toggle, click on the button to end the file and then write the jpg image.

Further options

If you know the structure of a certain file type then, in theory, it is possible to construct one in a similar way. I’ve already used this method to construct a bmp, but those produce far less interesting results. png files seem to be more fragile and, as such, I haven’t managed to create one using this method.
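As a rough sketch of the bmp equivalent (assuming source.bmp is an uncompressed 24-bit 640×480 image with the standard 54-byte header):

[bash]head -c 54 source.bmp > random.bmp                  # reuse the 54-byte header (file header + BITMAPINFOHEADER)
head -c $((640*480*3)) /dev/urandom >> random.bmp   # fill the pixel data with random bytes[/bash]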

If you use a second [binfile] object you can load the bytes from another file and use them, in conjunction with random data, to produce glitchy – but slightly recognisable – images!

Is it also possible to reduce the size of the jpg header even further?

Create jpgs in SuperCollider

Holger Ballweg (uiae) has recreated this process in SuperCollider. Check it out!

Feedback Loops in Pure Data

Recently I’ve been making a few video loops for Dreambait Recordings to use in their shows. The videos, made using video samples and Pure Data, focus on feedback loops. For BYOB Birmingham on Friday 16th March I decided to showcase these video feedback creations. Some photos of it in action:

BYOB Birmingham

BYOB Birmingham Flatpack Festival 2012

Photo by minuek

The Pure Data patch used to make these visuals, inspired by this patch, is pretty simple: put an object on screen, take a snapshot of the screen and then apply that snapshot as a texture to another object. You can download it below.

Feedback Loops patch

Click to download

As a texture for the cube I used the Skin Cells video again. You could replace this with any video, image or webcam feed. The [pix_contrast] object is there purely to provide an over-saturated look (try bringing Saturation down to a negative number). For BYOB I automated the controls using random number generators ([metro] objects banging [random] objects). Here’s a render of what the audience saw:

All that is needed now is some cool audio to go with it! Thanks to all those that came to BYOB to see this and other awesome artworks!

Alpha Glitch

For my performance with Freecode as part of Network Music Festival I wanted to move away from producing visuals that consisted mostly of video playback and move towards generative art. Demos of this were posted on my Flickr site, and the first performance that utilised this new approach happened on 26th January.

The feedback from people online and at the performance was really positive, with a lot of people asking how to do something similar. The patch I made for it was very messy, so I (albeit slowly) remade the part of the patch that achieves the effect. It’s available for download below.

Alpha Glitch

Click to Download

This isn’t strictly a generative patch as it still relies on a source image/video as a texture, but I think it’s more generative than it is video playback. The patch, made in Pure Data, works by first using [repeat] to generate many cubes that zoom towards the screen. These are then textured with an image of your choice. The “magic” comes in the use of [pix_alpha]. The red, green and blue sliders remove a percentage of that colour from the image texturing the cubes, revealing the cube below. The green toggle button randomly removes a different percentage of each colour at different speeds. This, coupled with the constant movement of the cubes, creates what I think is a sort of animated glitch using only a still image.

Sound confusing? Hopefully it’ll become clearer once you dissect the patch and view the help patches of each object. Here’s an example of the output from this patch using this image from my Skin Cells video:

If you know Pure Data well you can modify the patch so that it uses videos or a webcam feed instead of a still image. However, be aware that having that many objects on screen with a video stream can cause the output to be stuttery. This patch was made with Pure Data Extended 0.43 on Ubuntu 11.10.

Making Skin Cells

The making of Skin Cells was quite a long process. It started with projecting my Bunnies video onto me and filming the result. I then ran that footage through the What Glitch? sgi script to create a glitched version, leaving me with two versions of the video.

Skin Cells

When it came to merging the two videos together I took some inspiration from Tidepool by Tabor Robak. Putting the videos on top of each other, I wanted to use chroma keying to reveal parts of the video at the bottom at the same time as really oversaturating the video. For this I employed the help of Pure Data:

Skin Cells Pure Data patch

By using [pix_chroma_key] and setting the [range( to random values the patch was constantly hiding and revealing random parts of the videos. Some wizardry in Gridflow gave the videos that oversaturated look.

If you want to try this patch for yourself go ahead and download it. Although it may work on other setups, I used the following:

To use the patch, first load a directory of videos, create the GEM window and then press the big red start button. A video is automatically saved (using PDP), though do be careful as these files get very large very quickly! If, for any reason, saving the video doesn’t work just delete the line going from [#from_pix, colorspace rgb] to [#to_pdp].

If any assistance is required please direct your attention to this thread on the Pure Data forum.

LÖVE Glitches

Whilst I was in Venice for the Laptop Meets Musicians festival with BiLE I had the pleasure of (finally) meeting {rukano}, who later showed me this really awesome way of displaying uncleared video memory with LÖVE and LICK. I’m using Ubuntu 11.04 with LÖVE version love_0.7.2-0natty2_i386.deb.

LÖVE glitches

Once you have downloaded and installed LÖVE and LICK (instructions for different platforms are provided on their websites) create the following files:

main.lua

[sourcecode language="lua"]require "LICK"
require "LICK/lib"
lick.reset = true
lick.clearFlag = true

function love.load()
  -- a new framebuffer is created without being cleared,
  -- so it shows whatever was left over in video memory
  fb = love.graphics.newFramebuffer(800, 600)
end

function love.draw()
  love.graphics.draw(fb, 0, 0)
end

function love.keypressed(a)
  print(a)
  -- press spacebar to grab a fresh (uncleared) framebuffer
  if a == " " then
    fb = love.graphics.newFramebuffer(800, 600)
  end
end[/sourcecode]

conf.lua

[sourcecode language="lua"]function love.conf(t)
  t.modules.joystick = true -- Enable the joystick module (boolean)
  t.modules.audio = true -- Enable the audio module (boolean)
  t.modules.keyboard = true -- Enable the keyboard module (boolean)
  t.modules.event = true -- Enable the event module (boolean)
  t.modules.image = true -- Enable the image module (boolean)
  t.modules.graphics = true -- Enable the graphics module (boolean)
  t.modules.timer = true -- Enable the timer module (boolean)
  t.modules.mouse = true -- Enable the mouse module (boolean)
  t.modules.sound = true -- Enable the sound module (boolean)
  t.modules.physics = true -- Enable the physics module (boolean)
  t.console = false -- Attach a console (boolean, Windows only)
  t.title = "live_testproject" -- The title of the window the game is in (string)
  t.author = "Your Name Here" -- The author of the game (string)
  t.screen.fullscreen = false -- Enable fullscreen (boolean)
  t.screen.vsync = true -- Enable vertical sync (boolean)
  t.screen.fsaa = 0 -- The number of FSAA-buffers (number)
  t.screen.height = 600 -- The window height (number)
  t.screen.width = 800 -- The window width (number)
  t.version = 0 -- The LÖVE version this game was made for (number)
end[/sourcecode]

Package these files, together with the LICK library, into something like Glitch.love (a .love file is just a zip archive with main.lua at its root); the exact steps may be different for different operating systems. Before launching the program be sure to first open lots of videos and images. Once you’ve done that, launch Glitch.love and press spacebar to cycle through your uncleared video memory!
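For example, on Linux something like this should do the trick (assuming main.lua, conf.lua and the LICK folder all sit in the current directory):

[sourcecode language="bash"]zip -r Glitch.love main.lua conf.lua LICK[/sourcecode]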

Shoutouts go to Tilmann Hars, who first showed this trick to rukano and who maintains the LICK library.

p.s. I’m still trying to find out how to do this kind of stuff using Pure Data. If anyone knows how please let me know!

Adventures in Vector Quantization

Ever since seeing Radio Dada by Rosa Menkman I’ve been forever trying to reproduce the style of compression/glitches it uses.

My knowledge about the production of the video is limited, but I do know that it uses compression artifacts found in the Cinepak codec. So, I set out to find a way of converting a video into one that uses the Cinepak codec. If you’ve been following me you’ll know that I’ve asked for help on many fora and mailing lists, initially with little success.

Hidden somewhere in the documentation for MEncoder is a page detailing how to use Windows codecs on Linux for encoding. The copy of the Cinepak codec (iccvid.dll) that came with MEncoder/medibuntu was a bit broken so I had to use Google to download a new version.

Once I had that I used MEncoder to convert a video to an avi with the Cinepak codec. (I’m using mencoder version 2:1.0~svn33951~natty):

[sourcecode language="bash"]mencoder infile.avi -ovc vfw -xvfwopts codec=iccvid.dll -oac mp3lame -o outfile.avi[/sourcecode]

Unfortunately for me this did not produce the compression artifacts that I was after. I tried re-encoding the video with the Cinepak codec several times, but this only made the video darker:


(Original video)
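A repeated re-encode can be scripted as a loop along these lines (only a sketch; the filenames and number of passes are placeholders):

[sourcecode language="bash"]cp infile.avi pass.avi
for i in 1 2 3 4 5; do
    mencoder pass.avi -ovc vfw -xvfwopts codec=iccvid.dll -oac mp3lame -o pass_$i.avi
    cp pass_$i.avi pass.avi
done[/sourcecode]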

Also, my attempt to encode the video using the Cinepak codec at a low bitrate didn’t work, as, at least when using MEncoder, the codec doesn’t expose any encoding options. Drats! With that said, if anyone knows of a way of encoding with Cinepak at low/different bitrates on Linux, using only freely available/open source software, please do let me/the world know.

After this I felt very disheartened, until I did a little digging into the actual codec. I discovered that it is one of a few codecs based on Vector Quantization. I don’t know much about this, but I felt that it must be the key. Other video codecs based on Vector Quantization include Sorenson, Indeo and VQA.

I had no luck finding a way of converting to Sorenson or Indeo. However, I’ve had more luck with VQA. Wikipedia has a bit of information on the codec:

Vector Quantized Animation, known by its acronym VQA is a file format originally developed by Westwood Studios for video encoding in their game The Legend of Kyrandia and monopoly.

If you ever came across a Sega Saturn you will probably have come across videos encoded using VQA. As that Wikipedia article states, apart from the one used by Westwood Studios only one VQA encoder exists: VQA Encoder v0.5 beta 2 by ugordan. Luckily it works perfectly under Wine (I’m using version 1.2.3-0ubuntu1~ppa1) on Ubuntu 11.04. You’ll have to download some additional DLLs; do some research to find out which ones.

In order to use the software you need to convert your video to image files. I’ve had luck with converting the video to PCX files using FFMPEG:

[sourcecode language="bash"]ffmpeg -i infile.avi -sameq outfile_%03d.pcx[/sourcecode]

Then, in the VQA Encoder v0.5 beta 2 copy these options:

VQA encoder options

The program will automatically recognise that there are many images in the folder. After encoding has finished you should have a file called out_.vqa. In FFMPEG execute:

[sourcecode language="bash"]ffmpeg -i out_.vqa -sameq outfile.avi[/sourcecode]

You should now have a video that has similar compression to the Cinepak codec used with low bitrates:


(Original video)

Brilliant! Well, not so brilliant. The problems with using this software are the following:

  • The software is no longer being updated
  • Because of this it could stop working at any time and no support would be offered
  • It can only output video at 640×400, which you can see by the way it crops the video
  • It isn’t open source, though that only matters if you exclusively use open source software

So, is there any other way to achieve these compression artifacts, preferably using open source software?

What Glitch? scripts

For the What is Your Glitch? videos I wanted to build on some of the extensive work that has already gone into the documentation, deconstruction and glitching of file formats. Rosa Menkman has already done a great job of documenting some of the more well-known file format glitches in the Vernacular of File Formats, which I recommend you all read. For this exercise I wanted to explore some of the more obscure file formats. Using open source software and Ubuntu has given me access to a wealth of programs that can still generate obscure file formats, such as pcx, pix and sgi. Through these experiments I also found inconsistencies in the way that different programs generate files, which is why, in some of the scripts, I chose to use GIMP rather than ImageMagick to convert files. Enough chit-chat, download the scripts!

Code hosted on GitHub

The method of glitching used in most of the scripts is the much-documented find and replace method. If you take a look in the scripts – and I encourage you to do so – you can change the characters that are being searched for and replaced. I’ve simply chosen characters that are sure to get results and are less likely to completely destroy the file.
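Stripped down to its essence, the idea looks something like this (the format and the characters here are only examples):

[sourcecode language="bash"]convert input.png image.sgi       # convert to the target format
sed -i 's/D/X/g' image.sgi        # find and replace a character throughout the file
convert image.sgi glitched.png    # convert back to something viewable[/sourcecode]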

Required Dependencies

Each script has its own set of dependencies, but to ensure you can run each one you’ll need the following:

  • Sed
  • GIMP – I use 2.71 beta available for Ubuntu from this ppa. Other versions remain untested
  • Imagemagick
  • GlitchSVG
  • FFMPEG
  • Mplayer
  • WebP

Basic Usage

1. Make the file executable: in a terminal type chmod +x [name of script] (e.g. what_glitch_webp.sh)
2. Run ./what_glitch_webp.sh in a terminal window
3. Drop a video file into the terminal window and press Enter
4. Get a cup of tea

Notes

  • The scripts have only been tested on Ubuntu 10.10. If you are able to get them working with other operating systems please feel free to share your techniques
  • These scripts seem to work best with avi video files that are 24 or 25 frames per second. Files that are 30 frames per second get out of sync with the audio
  • Make sure the name of the directory containing the video to glitch doesn’t contain spaces e.g. “untitled_folder” instead of “untitled folder”
  • The video needs audio in order for these scripts to work. If you know what you’re doing you can edit parts of a script so that it works on files that have no audio
  • As these scripts process each frame of a video file they take a very long time to complete. They are recommended for use only on small video clips!

These scripts by no means even begin to cover all of the image file formats available. There were a few formats that were not as easy to batch-process or were simply too large to process, such as xpm and xbm. For these you’ll have to do it manually or explore other ways of batch processing. They’re also not the most efficient of scripts. Some way into processing 400 video frames the script would slow down a lot. I welcome any bug fixes or suggestions on fixing this 😉

There’s still plenty of undiscovered glitches out there in the wild just waiting to be hunted down and exploited. I encourage anyone, everyone and their mother to pick from this long, but by no means complete list of image file formats and to find a way to glitch them!

Create your own glitch typeface

Making Dataface was really quite an exciting journey. What started off as an attempt to make a typeface inspired by glitch art turned out to be a story of collaboration, exploration and hours of research. Here, I will go through my process.

As you may have seen from my previous experiments in vector databending, it’s totally possible to manipulate vector files. My original method for creating Dataface was to save each glyph in the Liberation font to an SVG file and then go through the process of glitching each file. Obviously this would’ve taken me a long time, which is why there was very little activity between my original announcement in January and when I started work on it again a few weeks ago.

At this point I thought about writing a script to do this for me. sed is a great command-line utility for Linux that essentially does the same as using find/replace on a character. As it’s command-line, it lends itself to a lot of automation. So, I wrote this simple script that attempted to solve the problem:

[sourcecode language="bash"]#!/bin/bash
rand=$(($RANDOM % 9))                  # pick one random digit (0-8)
sed -i s/[0-9]/$rand/g fontfile.svg    # replace every digit in the file with it[/sourcecode]

The only problem was that it would replace every number in the file with whichever single random value $rand held when the script was executed. Not only is this bad because it results in a lot of strangely similar glyphs, but also because it modifies the header data of the font file, rendering it unreadable. I then remembered that the SVG Font specification had recently been finalised, which aided my cause by putting all of the glyphs in one big file, but I still couldn’t find a way to efficiently randomise values in the file.
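(For comparison, something like this Perl one-liner would randomise each digit independently, though it only protects the header by blindly skipping a guessed number of lines, so it’s far from ideal:)

[sourcecode language="bash"]perl -pe 's/\d/int(rand(10))/ge if $. > 5' fontfile.svg > glitched.svg[/sourcecode]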

Thankfully fizzPOP came to my rescue. I’m glad that hackerspaces have people with a range of abilities in hardware and software, as I was soon presented with a solution to my problem by GB. After a few revisions he created a script that would replace only specific values in the file and would even let you specify how much it should be randomised. You can download the finished script and source files and have a go for yourself.

Click to download

 

Simplified instructions on compiling the script:

  • Unzip the file in a clean folder. This will give you three files: Font_Sample_-_Liberation_Sans.svg, glitch.l and makefile
  • Type “make” into the command line (without the quotes)
  • If you haven’t got make, type:
    [sourcecode language="bash"]flex -t glitch.l > glitch.c[/sourcecode]

    [sourcecode language="bash"]gcc -o glitch glitch.c[/sourcecode]

In either case, you will get a program called "glitch".

Please note this has only been tested on Linux, requires Flex (available in the Ubuntu repository) and is designed to work on SVG font files. FontForge is the only program I know of that can create these font files. To run the script do the following:

[sourcecode language="bash"]./glitch 0.50 outputfile.svg[/sourcecode]

That tells the script to glitch the file by 50%. I have noticed that sometimes you get errors if you put in 1.00 or more.

Once you have generated the file you can import it back into FontForge to save as a .ttf, .otf or whatever font type you choose!

(I still hate Comic Sans)

Here’s everyone’s favourite Comic Sans glitched at 50%

Streams of data

One of my overall goals is to find a way to databend live video. I’m sure there’s a way to do it with Processing and Pure Data, but I’m not yet proficient in those programs so they’re out of the question for now. In the meantime I thought I’d try to hack the Echobender script to databend my webcam images.

tonyg provides a great tutorial on how to convert live webcam images into audio, which I’ve used as a starting point for my hack.

The process for making it work is as follows:

  • Images from the webcam are saved to the computer
  • These are converted to a .bmp file then renamed to a .raw file
  • Sox applies an audio effect to the .raw file
  • The .raw file is converted back to a .bmp then to a .jpg
  • The updated webcam image is displayed in a window and refreshed once every second

Sound overly complicated? It probably is. Like the Echobender script you’ll need ImageMagick and Sox, but we’ll also be using Webcam, which you can install via sudo apt-get install webcam.

If you haven’t already, create a file called .webcamrc in your home directory (/home/yourusername) and enter this text into it:

[sourcecode][grab]
delay = 0
text = ""

[ftp]
local = 1
tmp = uploading.jpg
file = webcam.jpg
dir = .
debug = 1[/sourcecode]

Now create a file called grabframe, place it in your home directory and fill it with this:

[sourcecode language="bash"]#!/bin/sh

# wait until the webcam utility has written a frame
while [ ! -e webcam.jpg ]; do sleep 0.1; done
convert webcam.jpg frame.bmp                 # convert the frame to an uncompressed bmp
cp frame.bmp frame.raw                       # copy it to a .raw file so sox treats it as raw data
sox -r 482170 -e u-law frame.raw frame2.raw echos 0.8 0.9 5000 0.3 1800 0.25
convert -size 640x240 -depth 4 rgb:frame2.raw -trim -flip -flop output.bmp
convert output-0.bmp output.jpg              # back to a jpg for display[/sourcecode]

To start things running, open up three terminal instances:

  • In shell number one, run webcam
  • In shell number two, run while true; do ./grabframe; done (you may need to make grabframe executable first with chmod +x grabframe)
  • In shell number three, run display -update 1 output.jpg

Voila!

I know it’s quite slow, but I haven’t yet found a way to update faster and it’ll still be restricted by the time it takes Sox/ImageMagick to perform their conversions.

Thanks again to tonyg, Imbecil and Mez for their help and inspiration

Databending using Audacity

Thanks to some help on the Audacity forum I finally found out how to use Audacity to databend. Previously I’d been using mhWaveEdit, which has its limitations and just doesn’t feel as familiar as Audacity. From talk on the various databending discussion boards I found that people would often use tools like Cool Edit/Adobe Audition for their bends. Being on Linux and restricting myself to things that run natively (i.e. not under Wine) presented a new challenge. Part of my task was to replicate the methods others have found, but under Linux. My ongoing quest is to find things that only Linux can do, which I’m sure I’ll achieve when I eventually figure out how to pipe data from one program into another!

Here’s some of my current results using Audacity:

Gabe, Abbey, L and me (by hellocatfood)

Liverpool (by hellocatfood)

Just so you don’t have to go trawling through the posts on the Audacity forum, here’s how it’s done. It’s worth noting that this was done using Audacity 1.3.12-2 on Linux; versions on other operating systems may be different. Before I show you this, it’s probably better if you work with an uncompressed image format, such as .bmp or .tif. As jpgs are compressed data there’s always more chance of completely breaking a picture rather than bending it. So, open up GIMP/your favourite image editor and convert your image to an uncompressed format. I’ll be using this picture I took at a Telepathe gig a while back.

Next, download Audacity. You don’t need the LAME plugin as we won’t be exporting to mp3, though grab it if you plan to use that feature in the future. Once you have it open go to File > Import > Raw Data and choose your file. You’ll now be presented with options on how to import this raw data, which is where I would usually fall flat.

Import Raw Data

Under Encoding you’ll need to select either U-Law or A-Law (remember which one you choose). If you choose any other format you’ll be converting the data into that format. Whilst you do want to modify the data, this is bad because it’ll convert the header of the image file, thereby breaking the image. U/A-Law just imports the data. The other settings do have significance, but I won’t go into that here. When you’re ready press Import and you’ll see your image as data!

Image as sound

Press play if you dare, but I’d place money on it sounding like either white noise or Aphex Twin glitchy goodness. This is where the fun can begin. For this tutorial, select everything from about five seconds in through to the end of the audio. The reason for this is that, just like editing an image in a text editor, the header is at the beginning of the file. Unless you know the size of the header and exactly where it ends (which you can find out with a bit of research), you can usually assume it ends within the first few seconds of audio. (For a sense of scale: imported as 8-bit U-Law mono at 44100 Hz, each second of “audio” covers 44,100 bytes of the file, so five seconds is comfortably past any normal image header.) The best way to find out is to try it!

Anyway, highlight that section and then go to Effect > Echo

Apply the echo

Leave the default settings as they are and press OK

You’ll see that your audio has changed visually. It still won’t sound any better, but the magic happens when you export it back to an image file, which is the next step.

Once you’re happy with your modifications go to File > Export. Choose a new location for your image and type in the proposed new file name but don’t press save just yet. You’ll need to change the export settings to match the import settings.

Change the file format to Other Uncompressed Files and then click on the Options button.

Export settings

Change the settings to match the ones above (or to A-Law if you imported as A-Law). With that all set you can now press Save! If you entered a file extension when you were choosing a file name you’ll get a warning about the file extension being incorrect, but you can ignore it and press Yes. If you didn’t choose a file extension, add the appropriate extension once the file has finished exporting. In my case I’d be adding .bmp to the end.

Here’s the finished image:

Freaky!

There are of course many different effects available in Audacity, so try each of them out! If you’re feeling really adventurous try importing two or more different images and then exporting them as a single image.
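If you’d rather script the whole round trip, the same idea can be sketched on the command line with sox (this assumes a .bmp with the standard 54-byte header; the rate and echo settings are just examples):

[sourcecode language="bash"]head -c 54 photo.bmp > bent.bmp        # keep the header intact
tail -c +55 photo.bmp > body.raw       # everything after the header is fair game
sox -t raw -r 44100 -b 8 -e u-law -c 1 body.raw -t raw -r 44100 -b 8 -e u-law -c 1 body2.raw echo 0.8 0.9 1000 0.3
cat body2.raw >> bent.bmp              # stick the bent data back onto the header[/sourcecode]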

If you need help with this, try the Audacity forum.