Alpha Glitch

For my performance with Freecode as part of Network Music Festival I wanted to move away from producing visuals that consisted mostly of video playback and move towards generative art. Demos of this were posted on my Flickr site, and the first performance that utilised this new approach happened on 26th January.

The feedback from people online and at the performance was really positive, with a lot of people asking how to do something similar. The patch I made for it was very messy, so I (albeit slowly) remade the part of the patch that achieves that effect. It’s available for download below.

Alpha Glitch

Click to Download

This isn’t strictly a generative patch as it still relies on a source image/video as a texture, but I think it’s more generative than it is video playback. The patch, made in Pure Data, works first by using [repeat] to generate many cubes which zoom towards the screen. These are then textured with an image of your choice. The “magic” comes in the use of [pix_alpha]. The red, green and blue sliders remove a percentage of that colour from the image texturing the cubes, revealing the cube below. The green toggle button randomly removes a different percentage of each colour at different speeds. This, coupled with the constant movement of the cubes, creates what I think is a sort of animated glitch using only a still image.

Sound confusing? Hopefully it’ll become clearer once you dissect the patch and view the help patches of each object. Here’s an example of the output from this patch using this image from my Skin Cells video:

If you know Pure Data well you can modify the patch so that it uses videos or a webcam feed instead of a still image. However, be aware that having that many objects on screen with a video stream can cause the output to be stuttery. This patch was made with Pure Data Extended 0.43 on Ubuntu 11.10.

Making Skin Cells

The making of Skin Cells was quite a long process. It started with projecting my Bunnies video onto me and filming the result. I then ran this footage through the What Glitch? sgi script to create a glitched version, leaving me with two versions of the video.

Skin Cells

When it came to merging the two videos together I took some inspiration from Tidepool by Tabor Robak. Putting one video on top of the other, I wanted to use chromakeying to reveal parts of the bottom video while at the same time really oversaturating the image. For this I employed the help of Pure Data:

Skin Cells Pure Data patch

By using [pix_chroma_key] and setting the [range( to random values, the patch was constantly hiding and revealing random parts of the videos. Some wizardry in GridFlow gave the videos that oversaturated look.

If you want to try this patch for yourself go ahead and download it. Although it may work on other setups, I used the following:

To use the patch, first load a directory of videos, create the GEM window and then press the big red start button. A video is automatically saved (using PDP), though do be careful as these files get very large very quickly! If, for any reason, saving the video doesn’t work just delete the line going from [#from_pix, colorspace rgb] to [#to_pdp].

If any assistance is required please direct your attention to this thread on the Pure Data forum.

LÖVE Glitches

Whilst I was in Venice for the Laptop Meets Musicians festival with BiLE I had the pleasure of (finally) meeting rukano, who later showed me this really awesome way of displaying uncleared video memory with LÖVE and LICK. I’m using Ubuntu 11.04 with LÖVE version love_0.7.2-0natty2_i386.deb.

LÖVE glitches

Once you have downloaded and installed LÖVE and LICK (instructions for different platforms are provided on their websites) create the following files:

main.lua

require "LICK"
require "LICK/lib"
lick.reset = true
lick.clearFlag = true

function love.load()
  fb = love.graphics.newFramebuffer(800,600)
end

function love.draw()
  love.graphics.draw(fb, 0, 0)
end

function love.keypressed (a)
  print(a)
  if a == " " then
     fb = love.graphics.newFramebuffer(800,600)
  end
end

conf.lua

function love.conf(t)
   t.modules.joystick = true   -- Enable the joystick module (boolean)
   t.modules.audio = true      -- Enable the audio module (boolean)
   t.modules.keyboard = true   -- Enable the keyboard module (boolean)
   t.modules.event = true      -- Enable the event module (boolean)
   t.modules.image = true      -- Enable the image module (boolean)
   t.modules.graphics = true   -- Enable the graphics module (boolean)
   t.modules.timer = true      -- Enable the timer module (boolean)
   t.modules.mouse = true      -- Enable the mouse module (boolean)
   t.modules.sound = true      -- Enable the sound module (boolean)
   t.modules.physics = true    -- Enable the physics module (boolean)
   t.console = false           -- Attach a console (boolean, Windows only)
   t.title = "live_testproject"        -- The title of the window the game is in (string)
   t.author = "Your Name Here"        -- The author of the game (string)
   t.screen.fullscreen = false -- Enable fullscreen (boolean)
   t.screen.vsync = true       -- Enable vertical sync (boolean)
   t.screen.fsaa = 0           -- The number of FSAA-buffers (number)
   t.screen.height = 600       -- The window height (number)
   t.screen.width = 800        -- The window width (number)
   t.version = 0               -- The LÖVE version this game was made for (number)
end

Package all of this code into something like Glitch.love. Instructions for this may differ between operating systems.
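
On Linux a .love file is just a zip archive with main.lua at its root, so packaging can be as simple as this (a sketch, assuming main.lua and conf.lua sit in the current directory):

# Package the two files into a .love archive
zip Glitch.love main.lua conf.lua

Before launching the program be sure to first open lots of videos and images. Once you’ve done that, launch the Glitch.love program and press spacebar to cycle through your uncleared video memory!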

Shoutouts go to Tilmann Hars, who first showed this trick to rukano and who maintains the LICK library.

p.s. I’m still trying to find out how to do this kind of stuff using Pure Data. If anyone knows how please let me know!

Adventures in Vector Quantization

Ever since seeing Radio Dada by Rosa Menkman I’ve been forever trying to reproduce the style of compression/glitches it uses.

I know little about the production of the video, but I do know that it uses compression artifacts found in the Cinepak codec. So, I set out to find a way of converting a video to one that uses the Cinepak codec. If you’ve been following me you’ll know that I’ve asked for help on many fora and mailing lists, initially with little success.

Hidden somewhere in the documentation for MEncoder is a page detailing how to use Windows codecs on Linux for encoding. The copy of the Cinepak codec (iccvid.dll) that came with MEncoder/medibuntu was a bit broken so I had to use Google to download a new version.

Once I had that I used MEncoder to convert a video to an AVI with the Cinepak codec (I’m using mencoder version 2:1.0~svn33951~natty):

mencoder infile.avi -ovc vfw -xvfwopts codec=iccvid.dll -oac mp3lame -o outfile.avi

Unfortunately for me this did not produce the compression artifacts that I was after. I tried re-encoding the video using the Cinepak codec several times, but this just made the video darker:


(Original video)

Also, my attempt to encode the video using the Cinepak codec at a low bitrate didn’t work because, at least when using MEncoder, the codec doesn’t expose any encoding options. Drats! With that said, if anyone knows of a way of encoding using Cinepak with low/different bitrates on Linux using only freely available/open source software please do let me/the world know.

After this I felt very disheartened, until I did a little digging into the actual codec. I discovered that this codec is one of a few based on Vector Quantization. I don’t know much about this but I felt that it must be the key. The video codecs that are based on Vector Quantization are Sorenson, Indeo and VQA.

I had no luck finding a way of converting to Sorenson and Indeo. However, I’ve had more luck with VQA. Wikipedia has a bit of information on the codec:

Vector Quantized Animation, known by its acronym VQA, is a file format originally developed by Westwood Studios for video encoding in their games The Legend of Kyrandia and Monopoly.

If you ever came across a Sega Saturn you’ll probably have come across videos encoded using VQA. As that Wikipedia article states, apart from the one used by Westwood Studios only one VQA encoder exists: VQA Encoder v0.5 beta 2 by ugordan. Luckily it works perfectly using Wine (I’m using version 1.2.3-0ubuntu1~ppa1) on Ubuntu 11.04. You’ll have to download some additional DLLs; just do some research to find out which ones.

In order to use the software you need to convert your video to image files. I’ve had luck with converting the video to PCX files using FFMPEG:

ffmpeg -i infile.avi -sameq outfile_%03d.pcx

Then, in the VQA Encoder v0.5 beta 2 copy these options:

VQA encoder options

The program will automatically recognise that there are many images in the folder. After encoding has finished you should have a file called out_.vqa. In FFMPEG execute:

ffmpeg -i out_.vqa -sameq outfile.avi

You should now have a video that has similar compression to the Cinepak codec used with low bitrates:


(Original video)

Brilliant! Well, not so brilliant. The problems with using this software are the following:

  • The software is no longer being updated
  • Because of this it could stop working at any time and no support would be offered
  • It can only output video at 640×400, which you can see by the way it crops the video
  • It isn’t open source, though that only matters if you exclusively use open source software

So, is there any other way to achieve these compression artifacts, preferably using open source software?

What Glitch? scripts

For the What is Your Glitch? videos I wanted to build on some of the extensive work that has already gone into the documentation, deconstruction and glitching of file formats. Rosa Menkman has already done a great job of documenting some of the more well-known file format glitches in the Vernacular of File Formats, which I recommend you all read. For this exercise I wanted to explore some of the more obscure file formats. Using open source software and Ubuntu has given me access to a wealth of programs that can still generate obscure file formats, such as pcx, pix and sgi. Through these experiments I also found inconsistencies in the way that different programs generate files, which is evident in my decision to use GIMP rather than Imagemagick to convert files in some of the scripts. Enough chit-chat, download the scripts!

Code hosted on GitHub

The method of glitching used in most of the scripts is the much-documented find and replace method. If you take a look in the scripts – and I encourage you to do so – you can change the characters that are being searched for and replaced. I’ve simply chosen characters that are sure to get results and are less likely to completely destroy the file.
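
As a minimal sketch of the idea (the file name and the characters here are only examples, not the exact ones the scripts use):

# Work on a copy so the original survives if the bend destroys the file,
# then swap every 'a' for an 'e' throughout the image data
cp frame.sgi glitched.sgi
sed -i 's/a/e/g' glitched.sgi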

Required Dependencies

Each script has its own set of dependencies, but to ensure you can run each one you’ll need the following:

  • Sed
  • GIMP – I use the 2.7.1 beta, available for Ubuntu from this ppa. Other versions remain untested
  • Imagemagick
  • GlitchSVG
  • FFMPEG
  • Mplayer
  • WebP

Basic Usage

1. Make the file executable: in a terminal type chmod +x [name of script] (e.g. what_glitch_webp.sh)
2. Run ./what_glitch_webp.sh in a terminal window
3. Drop a video file into the terminal window and press Enter
4. Get a cup of tea

Notes

  • The scripts have only been tested on Ubuntu 10.10. If you are able to get them working with other operating systems please feel free to share your techniques
  • These scripts seem to work best with avi video files that are 24 or 25 frames per second. Files that are 30 frames per second get out of sync with the audio
  • Make sure the name of the directory containing the video to glitch doesn’t contain spaces e.g. “untitled_folder” instead of “untitled folder”
  • The video needs audio in order for this script to work. If you know what you’re doing you can edit parts of this script for it to work on files that have no audio
  • As these scripts process each frame of a video file they will take a very long time to complete. They are recommended for use only on small video clips!

These scripts by no means even begin to cover all of the image file formats available. There were a few formats that were not as easy to batch-process or were simply too large to process, such as xpm and xbm. For these you’ll have to do it manually or explore other ways of batch processing. They’re also not the most efficient of scripts: some way into processing 400 video frames the script would slow down a lot. I welcome any bug fixes or suggestions on fixing this 😉

There’s still plenty of undiscovered glitches out there in the wild just waiting to be hunted down and exploited. I encourage anyone, everyone and their mother to pick from this long, but by no means complete list of image file formats and to find a way to glitch them!

Create your own glitch typeface

Making Dataface was really quite an exciting journey. What started off as an attempt to make a typeface inspired by glitch art turned out to be a story of collaboration, exploration and hours of research. Here, I will go through my process.

As you may have seen from my previous experiments in vector databending, it’s totally possible to manipulate vector files. My original method for creating Dataface was to save each glyph in the Liberation font to an SVG file and then glitch each file in turn. Obviously this would’ve taken me a long time, which is why there was very little activity between my original announcement in January and when I started work on it again a few weeks ago.

At this time I thought about writing a script to do this for me. sed is a great command-line utility for Linux that essentially does the same job as find/replace in a text editor. As it’s command-line, I can automate a lot with it. So I wrote this simple script in an attempt to solve the problem:

#!/bin/bash
# Pick one random digit (0-8), then replace every digit in the file with it
rand=$(($RANDOM % 9))
sed -i s/[0-9]/$rand/g fontfile.svg

The only problem was that it would replace all the numbers in the file with whichever single random value was chosen by $rand when the script was executed. Not only is this bad because it would result in a lot of strangely similar glyphs, but also because it would modify the header data of the font file, thus rendering it unreadable. I soon remembered that the SVG Font specification had recently been finished, which aided my cause by putting all of the glyphs in one big file, but I still couldn’t find a way to efficiently randomise values in the file.

Thankfully fizzPOP came to my rescue. I’m glad that hackerspaces have people with a range of abilities in hardware and software, as I was soon presented with a solution to my problem by GB. After a few revisions he created a script that would replace only specific values in the file and would even let you specify how much they should be randomised. You can download the finished script and source files and have a go for yourself.

Click to download

Simplified instructions on compiling the script:

  • Unzip the file in a clean folder. This will give you three files: Font_Sample_-_Liberation_Sans.svg, glitch.l and makefile
  • Type “make” into the command line (without the quotes)
  • If you haven’t got make, type:
    flex -t glitch.l >glitch.c
    gcc -o glitch glitch.c

In either case, you will get a program called “glitch”.

Please note this has only been tested on Linux, requires Flex (available in the Ubuntu repositories) and is designed to work on SVG font files. FontForge is the only program I know of that can create these font files. To run the script do the following:

./glitch 0.50 outputfile.svg

That tells the script to glitch the file by 50%. I have noticed that you sometimes get errors if you put in 1.00 or more.

Once you have generated the file you can import it back into FontForge to save as a .ttf, .otf or whatever font type you choose!
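
As an aside, FontForge can also do this conversion from the command line via its scripting interface. This is a sketch I haven’t tested against glitched files, and the output name is just an example:

# Open the glitched SVG font and generate a TrueType font from it
fontforge -lang=ff -c 'Open($1); Generate($2)' outputfile.svg glitched.ttf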

(I still hate Comic Sans)

Here’s everyone’s favourite Comic Sans glitched at 50%

Streams of data

One of my overall goals is to find a way to databend live video. I’m sure there’s a way to do it with Processing and Pure Data but I’m not yet proficient in those programs, so they’re out of the question for now. In the meantime I thought I’d try to hack the Echobender script to databend my webcam images.

tonyg provides a great tutorial on how to convert live webcam images into audio, which I’ve used as a starting point for my hack.

The process for making it work is as follows:

  • Images from the webcam are saved to the computer
  • These are converted to a .bmp file then renamed to a .raw file
  • Sox applies an audio effect to the .raw file
  • The .raw file is converted back to a .bmp then to a .jpg
  • The updated webcam image is displayed in a window and refreshed once every second

Sound overly complicated? It probably is. Like the Echobender script you’ll need ImageMagick and Sox, but we’ll also be using Webcam, which you can install via sudo apt-get install webcam

If you haven’t already, create a file called .webcamrc in your home directory (/home/yourusername) and enter this text into it:

[grab]
delay = 0
text = ""

[ftp]
local = 1
tmp = uploading.jpg
file = webcam.jpg
dir = .
debug = 1

Now create a file called grabframe, place it in your home directory and fill it with this:

#!/bin/sh

# Wait for the webcam program to write a frame to disk
while [ ! -e webcam.jpg ]; do sleep 0.1; done
# Convert the JPEG to an uncompressed bitmap, then treat it as raw data
convert webcam.jpg frame.bmp
cp frame.bmp frame.raw
# Read the image as u-law audio and run an echo effect over it
sox -r 482170 -e u-law frame.raw frame2.raw echos 0.8 0.9 5000 0.3 1800 0.25
# Read the bent bytes back in as a raw RGB image
convert -size 640x240 -depth 4 rgb:frame2.raw -trim -flip -flop output.bmp
# ImageMagick numbers its output (output-0.bmp) when the raw data spans several frames
convert output-0.bmp output.jpg

To start things running, open up three terminal instances:

  • In shell number one, run webcam
  • In shell number two, run while true; do ./grabframe ; done
  • In shell number three, run display -update 1 output.jpg

Voila!

I know it’s quite slow, but I haven’t yet found a way to update faster and it’ll still be restricted by the time it takes Sox/ImageMagick to perform their conversions.

Thanks again to tonyg, Imbecil and Mez for their help and inspiration

Databending using Audacity

Thanks to some help on the Audacity forum I finally found out how to use Audacity to databend. Previously I’d been using mhWaveEdit, which has its limitations and just doesn’t feel as familiar as Audacity. From talk on the various databending discussion boards I found that people would often use tools like Cool Edit/Adobe Audition for their bends. Being on Linux and restricting myself to things that run natively (i.e. not under Wine) presented a new challenge. Part of my task was to replicate the methods others have found, but under Linux. My ongoing quest is to find things that only Linux can do, which I’m sure I’ll discover when I eventually figure out how to pipe data from one program into another!

Here’s some of my current results using Audacity:

Gabe, Abbey, L and me (by hellocatfood)

Liverpool (by hellocatfood)

Just so you don’t have to go trawling through the posts on the Audacity forum, here’s how it’s done. It’s worth noting that this was done using Audacity 1.3.12-2 on Linux; versions on other operating systems may differ. Before I show you this, it’s better to work with an uncompressed image format, such as .bmp or .tif. As jpgs are compressed data there’s always more chance of completely breaking a picture rather than bending it. So, open up GIMP/your favourite image editor and convert your image to an uncompressed format. I’ll be using this picture I took at a Telepathe gig a while back.
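
If you’d rather do that conversion from a terminal, ImageMagick can handle it (assuming photo.jpg is your image):

# Convert a compressed JPEG to an uncompressed bitmap before bending
convert photo.jpg photo.bmp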

Next, download Audacity. You don’t need the LAME plugin as we won’t be exporting to mp3, though grab it if you plan to use that feature in the future. Once you have it open go to File > Import > Raw Data and choose your file. You’ll now be presented with options on how to import this raw data, which is where I would usually fall flat.

Import Raw Data

Under Encoding you’ll need to select either U-Law or A-Law (remember which one you choose). If you choose any other format you’ll be converting the data into that format. Whilst you do want to modify the data, this is bad because it’ll convert the header of the image file, thereby breaking the image. U/A-Law just imports the data as it is. The other settings do have significance but I won’t go into that here. When you’re ready press Import and you’ll see your image as data!

Image as sound

Press play if you dare, but I’d place money on it sounding like either white noise or Aphex Twin glitchy goodness. This is where the fun can begin. For this tutorial select everything from about five seconds into the audio. The reason for this is that, just like editing an image in a text editor, the header is at the beginning of the file. Unless you know the size of the header and exactly where it ends (which you can find out with a bit of research), you can usually guess that it ends within the first few seconds of the audio. The best way to find out is to try it!
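
If you’d rather inspect than guess, you can peek at the start of the file from a terminal. A plain BMP header, for instance, is only 54 bytes, so a few seconds into the audio is well clear of it:

# Show the first 64 bytes of the file in hex; a BMP starts with "BM"
xxd photo.bmp | head -n 4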

Anyway, highlight that section and then go to Effect > Echo

Apply the echo

Leave the default settings as they are and press OK

You’ll see that your audio has changed visually. It still won’t sound any better, but the magic happens when you export it back to an image file, which is the next step.

Once you’re happy with your modifications go to File > Export. Choose a new location for your image and type in the proposed new file name but don’t press save just yet. You’ll need to change the export settings to match the import settings.

Change the file format to Other Uncompressed Files and then click on the Options button.

Export settings

Change the settings to match the ones above (or to A-Law if you imported as A-Law). With that all set you can now press Save! If you entered a file extension when you were choosing a file name you’ll get a warning about the file extension being incorrect, but you can ignore it and press Yes. If you didn’t choose a file extension, add the appropriate extension to the file once it has finished exporting. In my case I’d be adding .bmp to the end.

Here’s the finished image:

Freaky!

There are of course many different filters available in Audacity, so try each of them out! If you’re feeling really adventurous try importing two or more different images and then exporting them as a single image.
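
If you want to take the GUI out of the loop entirely, the same import/effect/export round trip can be sketched with SoX from the command line. This is a rough equivalent rather than a replica of the Audacity steps; the 100-byte header margin and the echo parameters are only illustrative:

# Set the header aside so the effect can't touch it
head -c 100 photo.bmp > header.part
tail -c +101 photo.bmp > body.raw
# Read the image data as u-law audio and apply an echo
sox -t raw -r 44100 -e u-law -b 8 -c 1 body.raw \
    -t raw -r 44100 -e u-law -b 8 -c 1 bent.raw \
    echos 0.8 0.9 5000 0.3 1800 0.25
# Stitch the untouched header back onto the bent image data
cat header.part bent.raw > bent_photo.bmp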

Comments on this post are now closed. If you need help on this try the Audacity forum

Making a Disco Ball using Blender and Inkscape

A while back I started doing a few experiments using Blender and Inkscape together. One of my creations from this was a ball.

Blender/Inkscape Sphere (by hellocatfood)

Recently an Inkscape user created a tutorial describing how to make a disco ball directly in Inkscape. Looking back at the ball I made, it kinda resembles a disco ball, so I decided to write a tutorial on how I did it.

This tutorial assumes that you know at least something about Blender and Inkscape. If not, go look at these tutorials for Inkscape and these tutorials for Blender. As with any program, the more you use it, the better you get at it.

We’re going to need three things before we begin. First, install Blender. It’s available for Mac, Windows, Linux and probably any other system you can think of. Did I mention that it’s completely free? Next, install the VRM plugin for Blender. This is a free Blender plugin that allows you to export your Blender objects as an SVG (the file format that Inkscape uses by default). I’ve discussed the usefulness of this plugin before. Lastly, install Inkscape, if you don’t have it already. I’ll be using a beta build of 0.47, which should officially come out within the next two weeks. If it’s not out yet, just grab a beta build as it’s pretty stable.

Once you’ve installed these programs open up Blender and you’ll see the cube on screen.

The cube is usually the first thing you see.

Depending on how you work best you may want to switch to Camera view. You can do this by either clicking View > Camera or pressing Num0 (the 0 key on the keypad). What we now see is what the camera sees. If you were to export this as a jpg or SVG, this is the angle you’d see it from.

oooh, shiny 3D!

We need to remove this cube and add a UVsphere in its place. Right-click on the cube and press X or Del to delete it.

Bye bye cube!

To add a UVSphere, in the main window press the Spacebar and then go to Add > Mesh > UVSphere.

Add a UVsphere

You’ll now see another dialogue box asking you to specify the rings and segments. This is important as it’ll define how many tiles there are in your disco ball. Think of these options this way: the segments option is like the segments of an orange and cuts through the sphere vertically, while the rings option cuts through it horizontally. These diagrams might explain it better:

Segments go vertically

Rings go horizontally

Put the two together...

The default for both is 32, but if you want more tiles increase the value and if you want fewer decrease it. Once you’ve chosen, press OK and your sphere should be on screen.

UVsphere

You can reposition, rotate or scale your sphere if needed. To reposition it, with the sphere selected (right-click it if it isn’t selected) press the G key. This grabs the selected object and allows you to move it freely. Try moving your mouse about. This can be useful, but we’re working in a 3D environment which…er.. has three dimensions that you can move along. To move the sphere along a set axis you can either left-click the arrows coming out from the sphere or, after pressing the G key, press the key that corresponds to the axis you want to move it along. For example, if I wanted to move the sphere along the X axis (the red line) I’d press the G key, then the X key. Now, no matter how I move the mouse, the movement of the sphere is constrained to the X axis.

Similarly, to rotate the sphere press the R key and to scale it press the S key. The same rules about constraining it to a certain axis can still apply.

You can do things such as repositioning the camera and other such trickery, but you’ll need to learn more about Blender for that.

With your sphere now ready go to Render (at the top of the screen) and then press VRM.

The VRM options window

I left the options as they are, but if you feel adventurous have a mess around. When you’re ready press the Render button, choose where on your computer to save the file and what to name it, and finally press Save SVG. You’ll notice the egg timer appears in place of your mouse cursor to let you know that something’s happening, and there’s also a handy progress bar at the top of the screen.

Blender Screenshot

Open up the saved object in Inkscape and voila!

It's an SVG Sphere!

That’s the first part of this tutorial done! The next part draws upon some of my own experiments but is also taken from the original tutorial.

When you’ve opened up the sphere you’ll notice that it’s all one object. This is because all of the paths (the tiles) are grouped into one. You can ungroup it if you want, but for this tutorial you don’t need to. Give your object a base fill and stroke colour. You can do this using either the colour palette at the bottom of the screen or the Fill and Stroke dialogue (Object > Fill and Stroke or Ctrl + Shift + F).

Applying fill and stroke colour

The final step of this tutorial is the following. With the base colour selected we’re now going to randomise the colours within that hue. To do this we’re going to use the Randomise filter, which is located (in Inkscape 0.47) under Extensions > Color > Randomise.

Leave the Hue option unchecked (unless you want a multicoloured sphere) and then press Apply.

Your finished disco ball!

There is of course more that you can do to make this disco ball look more realistic, but take a look at the tutorial that inspired this one and come up with something of your own 😉

Click to download the SVG

Starting off Simple

I’ve been doing quite a bit of messing around with Alchemy. Whilst in search of a solution to a problem in Blender I came across a rather awesome time-lapse digital painting from an upcoming Blender Foundation project, Durian. Not only was I blown away by the skill of the artist but also by the software he uses. I’m an open source nut, so I was really glad to see him use GIMP and other open source software to produce his piece. One particular piece of software that stood out to me was Alchemy.

If you’ve watched the video already you’ll have seen how he used that program to create chaos from which to build something else. I was a bit skeptical at first, thinking that GIMP and Inkscape can do this already, and with many more options. However, upon using it I could soon see the benefits of the program. As the website so clearly states, it’s not meant for finished pieces (although some have used it to create finished pieces). It’s meant to help generate ideas, to sketch, to just go crazy on!

After just a week of using it, this was some of the work I had created in it:

Lunchtime Butterfly (by hellocatfood) Stop Hitting Yourself (by hellocatfood)

I soon began to think more about what I perceived to be the point of the program. Typically, when I sketch, my marks start off very light and wispy. Then I draw over these wispy lines with more confidence until the original marks either become thicker and darker or are simply overshadowed by the newer marks. With practice you would expect one to become more confident with their mark making, to the point where there are no more wispy lines, just sharp, clear marks.

Also, after many hours of study you would expect one to be able to represent any form in as few marks as possible. One important lesson I learnt at university is that you should only add detail where it’s needed. Spending 100 hours on an art piece may be personally satisfying, but when people won’t notice or won’t have the time to appreciate that amount of detail, why bother? In another situation, when you have a deadline looming, do you really have the time to add insane amounts of detail?

In time I feel I should be using this program to help develop this skill and my confidence as an artist. Drawing intricate layered pieces may look impressive, but personally I know part of the reason I use that style is lack of confidence. I have put a suggestion to the developers to add a feature to Alchemy (and I’m slowly learning Java) that could help facilitate this by restricting the number of shapes you can have on screen, but until then I’ve been doing a few tests of my own. Partly born out of frustration, I’ve been trying to do portraits of myself using as few shapes as possible, in this case four. As there are soooo many different recognisable features in our own individual faces it would be quite a challenge to pick just four features or shapes.

Working from memory I drew these portraits last night.

Portrait 1 Portrait 2 Portrait 3 Portrait 4 Portrait 5

On a side note, the good thing about Alchemy is that it can record a snapshot of your drawing to a pdf at timed intervals. You can download a zip of all of the pdfs if you really wanna see how I did it.

Admittedly the first portrait probably has six shapes (open the pdf up in Inkscape to find out), but that was because I accidentally used a white shape on a white background. Alchemy has no undo function, so I just painted over it in black.

I slept on it and came back with a few new ideas. Do you really need to draw someone’s head or hair? That depends on what their most recognisable features are. I am quite well known for my hair, but I proved last year that even without it people still knew who I was *shock*. So, maybe it’s not that important. As a test for yourself, try taking a portrait picture of yourself. Open that picture up in your favourite picture editor (I use GIMP (duh)) and apply the photocopy (or equivalent) filter. If needed, erase the background until you have just your facial features.

With Hair and clothes

Without Hair and clothes

Is it still recognisable?

So, I tried again to draw myself using only four shapes, but this time only my facial features. Here are my results (same four-shape rule applies).

New Portrait 1 New Portrait 1
(download zip of pdfs)

A little more recognisable? Four shapes might be a little too restrictive, but you only really learn by challenging yourself. Why not try making the cursor invisible when you draw (press H) or drawing “blind” (Affect > Draw Blind)? Going back to the aims of the program, once you feel more comfortable using very few shapes, let yourself go a little and maybe use 10 or 20 shapes. Here is my final piece, starting with simple shapes, then going over them with more detail:

Final Portrait
(download pdf)