Pure Data Patching Circles at BOM

From 16th March to 27th April I ran a four-part Pure Data Patching Circle at Birmingham Open Media. It was originally intended to be an informal gathering of Pure Data and “creative coding” enthusiasts, but it quickly turned into a course in using Pure Data. Here’s some of what I learnt from running it.

Patching Circle #1

This was an almost exact replication of the beginner’s Pure Data workshops that I’ve done in the past at places such as GLI.TC/H 2012, Vivid Projects and Flip Festival. I first introduced some of the projects that I have done and then dove straight into things like installing the software on different platforms.

This part, in itself, had a couple of issues. The biggest is that Pure Data Extended, the most feature-complete version of Pure Data, is effectively dead: it hasn’t received an update in over two years and the developer seems to have abandoned any effort to update it. Because of this I was a bit cautious about instructing people to install it. However, after weighing up the situation, taking a chance on Pure Data Extended, which is still widely used despite its age, seemed a better option than downloading Pure Data Vanilla and manually compiling/installing all the necessary libraries. Maybe one day PD L2Ork will be cross-platform (something which may become possible thanks to a graphical user interface (GUI) rewrite effort), and maybe the whole infrastructure of PD will become more mature. Until then, Pure Data Extended was suitable.

Following the installation the very basics were covered. I explained the difference between object boxes, GUI boxes, message boxes etc., and how to change their properties. These are simple concepts but essential to using Pure Data. People that joined later in the patching circles still picked up a bit of this information, but spending a lot of time on it ensured they understood fully.

The workshop concluded with using the amplitude of microphone input to control the scale of an object that had the webcam feed as a texture. Not necessarily a useful feature, but a great way to introduce interactive visuals and the potential of Pure Data.

One thing I learnt from this first Patching Circle is that there isn’t a big enough community of creative coders in Birmingham and the surrounding area to support informal, peer-led meetups. For that reason I devised a course plan for the following Patching Circles.


Patching Circle #2

Following feedback from the first Patching Circle I took a more structured approach to this one. This was definitely the right call, as the topic – loading and using video – can be a difficult one to grasp and so needed a structured way to teach it. Loading videos is a surprisingly long-winded task. One point I emphasised is that in Pure Data nothing is assumed. For example, just because a [gemwin] has been created it doesn’t mean that it automatically renders its graphics; the [1( message needs to be sent to it. Similarly, even when a video is loaded it will not automatically play – that requires the [auto 1( message. There is also no direct function to loop a video. Instead the user tells the [pix_film] object to go back to the first frame when it has finished playing all the frames. Yes, this is looping, but there is no simple [loop 1( message. Finally, controlling the playback speed requires the user to manually advance frames and specify at what speed to advance to the next frame, which brings in the problem of knowing how many frames are in a video. A solution to this is shown below.
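Independently of any patch, the frame-advance logic described above can be sketched in ordinary Python. This is a hedged stand-in for the Pd objects a patch might use ([metro], [f], [+ 1], [mod]), not real Pd code:

```python
# Sketch of the manual frame-advance/loop logic a Pd patch might build
# from [metro], [f], [+ 1] and [mod]. Plain Python stand-in, not Pd code.

def frame_at(elapsed_seconds, playback_fps, num_frames):
    """Return the frame to display, wrapping back to 0 after the last frame."""
    return int(elapsed_seconds * playback_fps) % num_frames

# A 100-frame clip advanced at 25 fps loops back to frame 0 after 4 seconds.
assert frame_at(1.0, 25, 100) == 25
assert frame_at(4.0, 25, 100) == 0  # wrapped around: this is the "loop"
```

The wrap-around via the modulo is exactly the "go back to the first frame" behaviour that [pix_film] requires you to build yourself.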


We concluded the patching circle by learning how to add custom controls using the [key] object. GUI boxes such as [tgl], [bng] etc. allow a user to interface with the patch using their mouse. However, in a live performance being quick to react is important, and that’s where the mouse shows its limitations. Using [key] a user can map any key on their keyboard to anything in Pure Data. For example, pressing k could trigger the [pix_kaleidoscope] effect and pressing the arrow keys could speed up or slow down the video. Doing this is simple and requires only knowing which key is represented by which number.

With all this knowledge the participants learnt how to build a very simple video mixer.


Objects covered

[pix_film], [f]/[float], [key], [sel], [line], [pix_contrast], [pix_kaleidoscope] etc, [maxlib/scale], [tgl]

Patching Circle #3

Just like in typed programming languages, the appearance, layout and quality of Pure Data patches are just as important as whether they work. Similarly, learning how to reuse code makes patching more efficient and provides some future-proofing. For the third patching circle I took a break from teaching interactivity to focus on creating interfaces, subpatches and abstractions.

The benefits of subpatches were quite easy to show. I gave the participants the task of encapsulating all of the objects that they used to make a video player into one subpatch that they could easily reuse.

Moving on from this I asked them to build a single-button interface for it that would simply load a video and automatically play and loop it. Creating an interface for a patch is useful for two reasons: it allows you to easily navigate your patch and it can provide valuable feedback on what is happening. Unfortunately, using Graph-on-Parent and [canvas] objects to create interfaces is somewhat tricky.


The red box that shows what will be shown on the parent patch is not easily configurable. Yes, you can specify its dimensions and position, but being able to do it using resize handles would make this process a lot easier. The same applies to [canvas] objects. What we found is that even if an object is just a few pixels over the red line it will not show in the parent patch. Finally, and perhaps most annoyingly, the Z order of the objects cannot be changed. Instead, this is determined upon creation of the object, meaning if a user wants to have a [canvas] object behind their objects they either have to create it before everything else or cut and paste everything so that it’s restacked. Yes, quite annoying.


Objects covered

[pd], [inlet], [outlet], [inlet~], [outlet~], comment, [$0]

Patching Circle #4

So far I had covered everything that most regular VJ software can already do: play video files and add effects to them. Although not alone in this feature, Pure Data allows you to create complex patterns from its array of simple 3D shapes or your own models. By learning how to use [repeat] you can turn a simple [cube] object into an array of cubes that dance around. The last Patching Circle was perhaps the most difficult, even for myself, but I felt it best shows what Pure Data is capable of.

To explain how the [repeat] object works I showed the participants the Magnetophon video I made with Axel Debeul from databit.me in 2013.

Despite there being an array of cubes on screen only one [cube] object is used. I [repeat]ed the [cube] a number of times, [translateXYZ]‘d it along the X axis and [rotateXYZ]‘d it, then [repeat]ed it some more and [translateXYZ]‘d it along the Y axis. Doesn’t make sense? Perhaps this patch will help:


What I had trouble explaining was how the [separator] object worked. My understanding is that it is similar to pushMatrix and popMatrix from Processing. Perhaps it is, and perhaps I still don’t fully understand how it works yet, but it didn’t work as I expected it to. Nonetheless, I gave the participants the task of recreating the stack of cubes and most of them succeeded. Even those that didn’t made some really interesting patterns.

Pure Data Patching Circle

Objects covered

[repeat], [draw (, [model], [multimodel], [separator]


Teaching a four-part course was an eye-opener for me. It showed me that to really learn Pure Data you need more than an introductory session. It also emphasised to me that face-to-face tuition is really beneficial to some people and probably would have helped me learn better in my early days of using Pure Data. Of course, if you want me to lead a beginner’s session or a more advanced one, just get in touch.

Creative Code #1: Pure Data Patching Circle

On various dates in March and April 2015 I’ll be running a Pure Data Patching Circle as part of Birmingham Open Media’s Creative Code series.


A new ‘Patching Circle’ on creative code designed and run by artists, for artists. A Patching Circle is an informal gathering of anyone who is interested in patching languages such as Pure Data, Max/MSP, vvvv, and Quartz Composer.

As well as covering the basics of Pure Data, we’ll also offer peer-to-peer support for more experienced users and help with specific projects.

Artist Antonio Roberts leads the Creative Code #1 series on Pure Data. Absolute beginners and experienced patchers are welcome. The event is free and open to everyone – work on personal or professional projects, school work, or just patch quietly to yourself in a room full of other people patching patches and helping other people patch.

The patching circle is free and runs from 18:00-21:00 on the following dates:

There might also be pizza!

Gifs in Pure Data

Every so often on my travels across the information superhighway I come across a Pure Data user asking if animated gif files can be read in Pure Data. Technically speaking they have always been readable in Pure Data, just not always in the way a user usually wants. Using the [pix_image] object a user can read almost any image file format. On Linux this is dependent on ImageMagick, so whatever it can read can (theoretically) be displayed in Pure Data/GEM. The problem arises because [pix_image] doesn’t display animated gifs as animations, only the first frame.

There are several solutions to this problem. For these examples I’m going to use the following two gifs:



Click through each image to get the full-sized original versions.


If you separate the gif into its individual frames you can use [pix_multiimage] to display each frame in succession.

Click to download the PD patch.


The benefits of using [pix_multiimage] to simulate an animated gif are that you can display high quality images with an alpha channel at whatever frame rate you choose. Simulating stutter effects or reversing is as easy as using a [counter] or random number generator.


The problems with this approach are that [pix_multiimage] needs to be told how many frames to cycle through, and not all gif animations have the same number of frames. [pix_image] and even [pix_data] do not report the number of frames in an animation, so that value cannot be passed to [pix_multiimage]. Assuming that you separate your gifs into their individual frames, an abstraction could be built that detects how many images there are in a directory and sends that value to [pix_multiimage], but that is a lot of effort to go through!

Convert gif to video

The technique that perhaps most PD users have used is to convert the gif into a video file and use [pix_film] to play it. I used the following script to convert a folder full of gifs into mp4 files, with all transparent pixels converted to green pixels:
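The original script isn’t reproduced here, so what follows is a hedged Python reconstruction of the two steps such a script would need: ImageMagick flattens each gif frame onto a green background, then ffmpeg assembles the numbered frames into an mp4. The file names, frame rate and exact flags are assumptions, not the script I actually used:

```python
# Hedged reconstruction: build the two shell commands needed to turn one
# gif into an mp4 with transparent pixels flattened to green.
# File names, fps and flags are illustrative assumptions.

def gif_to_mp4_commands(gif_path, mp4_path, fps=10):
    """Return two argument lists suitable for subprocess.run()."""
    # Step 1: ImageMagick explodes the gif into numbered frames,
    # compositing transparency onto a green background (-alpha remove).
    convert_cmd = ["convert", gif_path, "-coalesce",
                   "-background", "green", "-alpha", "remove",
                   "frame_%04d.png"]
    # Step 2: ffmpeg assembles the numbered frames into an mp4.
    ffmpeg_cmd = ["ffmpeg", "-framerate", str(fps),
                  "-i", "frame_%04d.png",
                  "-pix_fmt", "yuv420p", mp4_path]
    return convert_cmd, ffmpeg_cmd
```

Each returned list can be passed to subprocess.run(); looping over a folder of gifs with glob gives the batch conversion described above.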

With the gif now converted to a video you can use [pix_film] to play a video as you normally would.

Click to download the PD patch.


So far I have only tested playing animated gifs in Pure Data using Gmerlin on Ubuntu. Without knowing whether the same would work on Windows or Mac OS X, using video files remains the safest option for all users.


Any sort of file conversion will reduce the quality of the output, and this method is no exception. The videos aren’t very sharp, especially at the borders of the green pixels.

Making the green pixels transparent using [pix_chromakey] or [pix_alpha] requires fine-tuning to ensure that other colours aren’t made transparent. This isn’t always 100% reliable and can have a few glitchy artifacts.

Using gifs directly with [pix_film]

Another approach is to use [pix_film]. “Hold on”, I hear you say, “[pix_film] can only be used to play films! How dare you suggest that it can be used to play image file formats. Balderdash!” Well, don’t believe the hype! As a Linux user I can only comment on this working on Linux. If anyone can get the following methods to work on any other OS please get in touch and I’ll add it here.

When you play media file formats in Pure Data on Linux you’re actually using external programs and libraries to play them. So, you’ll use ffmpeg/libav to play videos and ImageMagick to display images. There’s also another program you can use, Gmerlin. Install it by executing sudo apt-get install gmerlin. Pure Data/GEM has some weird behaviour whereby the delay amount of a gif needs to be explicitly set to a value of 1 or above in order for an animated gif to be played. This can be achieved on a folder full of gifs by executing mogrify -delay 1 *.gif.

And now you can easily open an animated gif in Pure Data the same way you would a video file.

Click to download the PD patch.


Gifs, unlike (most) video file formats, can have an alpha channel. Another benefit is that you don’t need to deal with converting files. No longer will you have to worry about whether one container format is faster or more efficient than another, or what codec to use. Gifs will just be gifs.


If the original format of your source file is a gif, then perhaps it is more efficient to keep it as a gif. If it was a video file, would it be beneficial to convert it to a gif? Not always. Even if you could achieve a smaller file size or have PD use less processor power by using a gif, the quality of the video output would be reduced due to gifs only allowing 256 colours.

It’s pronounced “gifs”

There are perhaps other benefits and drawbacks to each approach that I haven’t written about or haven’t even thought about. One such example of both is processor usage of each method. I suspect using gifs is actually less efficient, but I don’t have a good method of testing this. Perhaps one of y’all could!


Pixel Player

Back in June 2014 I wrote about how, in 2013, after visiting The Cyborg Foundation in Barcelona, I became interested in exploring sonification. My experiments at that stage culminated in the production of the Pixel Waves Pure Data patch, which allows the sonification of images based on the colour/RGB values of individual pixels.

I spent the following months building and refining an update to the Pixel Waves software, with a focus on allowing multiple images to be played simultaneously. In a way, I wanted to create a sequencer but for images. After many months I’m happy to formally announce the release of the Pixel Player.


This software operates in a similar way to Pixel Waves, but with a focus on playing multiple images simultaneously. Instructions on getting started:

  • Create the GEM window
  • Click on the red button to load an image. Supported file types depend on your operating system, but generally jpg, gif and png file formats are supported
  • Click on the green start button and the pixels will start to be read
  • Drag the orange horizontal slider up to increase the master volume
  • Drag the orange vertical slider up on each pixel player to control its volume
  • Turn the knob to scale the pitch of the audio

The currently displayed/sonified pixel for each channel will be synchronised from the first channel. For this reason it is recommended that all of the input images used are the same dimensions.
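The exact pixel-to-sound mapping lives inside the patch, but the core idea of sonifying colour values can be sketched in Python. This is one hypothetical mapping (the function name and frequency range are illustrative), not the one Pixel Player actually uses:

```python
def pixel_to_frequency(r, g, b, low=110.0, high=1760.0):
    """Map a pixel's average brightness (0-255 per channel) onto a
    frequency range. A hypothetical mapping, for illustration only."""
    brightness = (r + g + b) / 3.0 / 255.0  # normalise to 0.0 .. 1.0
    return low + brightness * (high - low)

# Black sits at the bottom of the range, white at the top.
assert pixel_to_frequency(0, 0, 0) == 110.0
assert pixel_to_frequency(255, 255, 255) == 1760.0
```

Stepping through an image's pixels and feeding each result to an oscillator is, in essence, what sonification by pixel value means here.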

This may sound like a lot to do but it becomes easy after a few attempts. To make things easier the loadimage.pd patch has inlets that you can use to control each channel with a midi controller, keyboard, or any other device. To expose the inlets increase the canvas size of the patch by around 10 pixels.

The software includes a video display output, which shows the current pixel colour. This can also be shown on the patch window by clicking the red display button. Flashing lights might not be to everyone’s taste, so this can be turned off. Due to this patch relying on [pix_data], the GEM window needs to be created, even if the pixel display isn’t used.

Enough yapping, what does it actually sound like?! Here’s a small demo, made using a combination of 40×20 images made in Inkscape and images modified using the Combine script by James Allen Munsch (made for Archive Remix. Remember that project?).

Please do give the patch a try and let me know what you think!

An Introduction to Pure Data – 5th August

On August 5th from 13:00-17:00 I’ll be running a Pure Data workshop at Vivid Projects.


Pure Data (aka Pd) is an open source visual programming language, similar to the likes of vvvv and Max/MSP. Pd enables musicians, visual artists, performers, researchers, and developers to create software graphically, without writing lines of code. Pd has seen various uses including live visuals (VJing), electronic music and even as an embedded library on websites, in games or on a Raspberry Pi.

This 4-hour workshop looks at using Pd for generative and interactive visuals. Participants will learn how to use Pd to create dynamic motion graphics that react and interact with a variety of sources including audio, live data feeds and hardware controllers.


  • The workshop will take place at Vivid Projects, which is located at 16 Minerva Works, 158 Fazeley Street, Birmingham, B5 5RS. Ring the buzzer to be let in.
  • The workshop takes place on Tuesday 5th August and will begin at 13:00 and finish at 17:00.
  • The workshop is limited to 15 places
  • Tickets cost £10 (plus booking fee) and can be purchased from http://www.eventbrite.co.uk/e/pure-data-workshop-tickets-12365061231

What you will need

  • Pure Data Extended. Pure Data is free and open source software that can be downloaded for Mac OS X, Windows and Linux from http://puredata.info. The software comes in two versions, Pure Data Vanilla and Pure Data Extended. For this workshop participants should download Pure Data Extended before the workshop begins in order to have more time developing skills and software.
  • A laptop. Although Pd can run on old hardware, newer hardware results in a smoother usage experience.
  • Ideas! After the basics have been covered the workshop will focus on developing any ideas you have.

Thoughts on live coding visuals in Pure Data

I took part in Algorave in Gateshead on 26th April. Apart from being incredibly awesome it was my first time live coding – or rather live patching – visuals in Pure Data from scratch. I emphasise from scratch because nearly all of my performances involve me modifying patches, but never starting with a completely blank canvas. I also occasionally used the HSS3jb as a texture for objects, but never on its own. It’s also great for when crashes occur, which is/was often ;-). Here’s a few samples of my visuals. Videos by Mariam Rezaei:

I learnt a few things about Pure Data that night, and my general opinion is that it isn’t that great as a live coding visuals tool.

One of the first issues is encapsulation of objects. This can be done quite easily but it’s a manual process which involves cutting all the cords and reconstructing the patch. That is, you would have to cut the selection of objects, paste them into a subpatch and then reattach it. By way of comparison, Max/MSP already has this as a feature; a feature request is now on Pure Data’s bug tracker. Not being able to auto-encapsulate objects makes reuse a bit more difficult and cumbersome, which resulted in some really messy patches from me on the night.

Algorave patches

This also relates to another issue: object insertion. When I was building my patches I would often have to pre-empt what I would need. I nearly always started with [gemhead]-[translateXYZ]-[rotateXYZ]-[repeat 10]-[rotateXYZ]-[translateXYZ]-[color]-[cube]. Inserting any additional objects required me to cut the cord and therefore the screen output. This would be solved if there were, for example, a method whereby if two objects were selected, the next object was inserted in between them. This is obviously an over-simplified, specific use case which would need more thought behind it. Again, a feature request is now on the bug tracker.

There were other thoughts I had on the night, such as the inconsistencies and clumsiness of using the [repeat] object, the lack of a snap-to-grid option for aligning objects, the tiny size of inlets and outlets – even when the objects themselves may be huge – which is only exacerbated when using a 13″ 1080p screen, and the lack of a toolbar (yes, I am aware of GUI plugins), but these are the two which I felt would’ve helped me most.

Has much else been written about the use of Pure Data for live coding visuals?


I’m happy to announce the release of NeonPlastic, a generative Pure Data artware piece by myself (visuals) and Joe Newlin (audio), inspired by Neoplasticism and all things boxy.

The above video acts only as a preview. To experience this piece in all its hdmegaawesomeness grab the code, open Pure Data, get yourself a cup of tea and press the big red button!





Pure Data File Killer

On one of my frequent journeys on the information superhighway I stumbled across Little-Scale’s Mass JPG Killer. This handy little patch allows a user to load any binary file and “glitch” it by overwriting some of the original data with a repeating pattern of user-defined data.

Mass JPG Killer by Little-Scale

The only problem (for me and people like me) is that I don’t have Max/MSP and can’t install it on Linux, meaning I’ve never actually used it!

Little-Scale very kindly provided the internet at large with screenshots of the inner workings of the patch. I was able to use a whole lot of science and maths to rewrite and reinterpret this patch of mass destruction in Pure Data, which is more easily available.

Pure Data File Killer

Click to download

Click to download


Usage of the patch is very simple and can yield some quite interesting results!

  • Click open to load a binary file. Pure Data may freeze for a moment if you’re loading in a large file. I don’t recommend loading in a file over 100MB
  • Set the byte offset. This number represents the starting point at which the patch will start “corrupting” the file. If you’re a glitchspert (glitch + expert) you’ll remember that you should avoid modifying the header, so set the offset into the 1000s.
  • Set the period value. This can be hard to understand, so here’s an example: if the period is set to 1378 then the patch will modify the data at intervals of 1378 bytes from the offset.
  • Set the data value. This works in conjunction with the period value. Using the previous example, if the data is set to 102 then at intervals of 1378 bytes it will replace the current byte value with 102.
  • Press either random period or random byte data to populate these values with random values.
  • Press glitch it!. Guess what that does.
  • Write the files to save them to the same directory as the source file. The original file will not be overwritten.
  • To start again press the reset button. It will load the original byte data.
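The offset/period/data behaviour described in the steps above boils down to a simple loop over the file’s bytes. A hedged Python sketch of the same idea (the patch itself is Pure Data; the function name here is illustrative):

```python
def glitch(data, offset, period, value):
    """Replace one byte every `period` bytes, starting at `offset`.
    Bytes before `offset` (the header region) are left untouched,
    which is why the glitched file usually stays openable."""
    out = bytearray(data)
    for i in range(offset, len(out), period):
        out[i] = value
    return bytes(out)

# Example: with offset 4 and period 3, bytes 4, 7 and 10 are replaced.
glitched = glitch(bytes(range(12)), offset=4, period=3, value=0xFF)
assert glitched == bytes([0, 1, 2, 3, 0xFF, 5, 6, 0xFF, 8, 9, 0xFF, 11])
```

Randomising `period` and `value`, as the random period and random byte data buttons do, is what produces the varied results.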

This patch is very similar to Little-Scale’s with a couple of exceptions:

  • The offset cannot be set for each instance. This is by design as I felt it was a bit redundant.
  • You no longer need to copy the hex data to a new file in order to view the results
  • It’ll work on any platform that can run a full version of Pure Data Extended. This should include the Raspberry Pi version, as GEM is not required.


Although it was originally inspired by the JPG Killer you can get some very interesting results if you use other file formats and set the period data to a number less than 20.

Pure Data File Killer - Bliss (sgi)

Pure Data File Killer - Bliss (jpg)

Pure Data File Killer - Bliss (pix)


Multimedia Programming with Pure Data

A new book by Bryan Chung, Multimedia Programming with Pure Data was recently published by Packt Publishing.

Multimedia Programming with Pure Data


Despite it being a big part of Pure Data Extended, GEM – and making visuals in PD – doesn’t get as much attention as audio processing. Whereas sound-makers have resources such as Loadbang and excellent tutorials from Obiwannabe, visual artists have little access to such a comprehensive resource, which can be a bit off-putting for new users. With that in mind I was more than happy to be a reviewer for this book that focuses almost entirely on GEM and making visuals in PD.

Although it is definitely suited to new users this book does get quite complex in later chapters where it begins to detail camera tracking, OpenCV and particle generators. I even learnt a couple of things!

Most of the tutorials are written to work on all operating systems (Linux, Mac and Windows) though some instructions, such as installing libraries, aren’t always covered. That could be another book in itself!

Get yourself a copy now!

Pure Data Play

Thank you to everyone that attended the Pure Data Play workshop on 2nd November as part of Flip Festival. In the space of two hours the participants went from knowing nothing about Pure Data to manipulating 3D objects on screen, playing videos and webcam streams and controlling their videos using user-defined keyboard shortcuts. Some images of the patches:

Pure Data Play - Alex Jolliffe

Patch by Alex Jolliffe

Pure Data Play - Eliza Marcu

Patch by Eliza Marcu

Pure Data Play - Jamie Boulton

Patch by Jamie Boulton

To those of us who know more about programming and using Pure Data these patches may seem simple, but hopefully from this tutorial the participants have gained an insight into what is possible using Pure Data.

What I like most about Pure Data is that it is very extensible. It can accept data from a wide range of sources – including Arduino boards, game controllers (including Wii remotes and Kinect controllers), microphones, lists of data and even raw binary data – manipulate it and give audio or visual feedback. Best of all, it does this in a way that is very logical. Some people may prefer to write lines of code, but with Pure Data (and other dataflow languages) you can visually see how data flows and is manipulated.

If you’re interested in attending or booking me for a Pure Data tutorial get in touch!