The Stay at Home Residency – part 2

From 1st – 29th July I was happy to be selected as an artist in residence for The New Art Gallery Walsall’s Stay at Home Residencies.

In the first blog post I looked at my influences and research carried out before I started making work. In this second blog post I’ll be showing some of the filming I did.

With the research conducted and the panic now over, I started filming again. I began by filming various things in my home. I tried to focus on shots that would have some movement in them, even if it were only background movement. Because of this most of my shots look out of a window. Although the background is blurred, whatever movement there is – be it the trees, people, or lights turning on/off – makes the still shot that little bit more interesting.

Next, I decided to bring out my projector and see what I could do with it. By now my projector is at least seven years old (I originally purchased it for a BYOB event in 2013) and so not only is the projection quality quite poor, there are glitchy lines running through the projection.

I had thought about making animations to project onto various objects, but I didn’t want to turn this into an animation project. I’ve long used my Glass video when experimenting with projections and I liked how it made any surface it landed on that much more interesting. To replicate this saturation of glitchy colour and movement I installed a copy of waaave_pool onto a Raspberry Pi, connected a webcam to it and pointed the webcam at random surfaces in the room.

waaave_pool itself is a bit like a video synthesiser, working primarily with webcam video input. With that installed I made some things like this:

I liked these projection experiments most when they were really subtle. I didn’t want the projection to overpower the surface and render it invisible or irrelevant. For example, in one experiment I projected onto cushions; the projection itself looked really great, but the cushions got lost behind it.

I also played with a strip of LED lights I had from a previous project. They can be programmed to flash quickly but they seemed to work best when they were pulsating slowly, which very much matched the pace of the shots I had filmed so far.

In the next blog post I’ll be detailing how I made sounds for the film and sharing the finished film.

The Stay at Home Residency – part 1

From 1st – 29th July I was happy to be selected as an artist in residence for The New Art Gallery Walsall’s Stay at Home Residencies.

The New Art Gallery Walsall has adapted its Studio residency programme in the wake of the Coronavirus pandemic to support three artists based in the West Midlands to produce work from their homes between May and July this year.

Following an open-call to artists based in the West Midlands, the Gallery received 60 varied proposals from a diverse range of artists working across the region. The many challenges that artists are facing during lockdown were well articulated. In selecting, we were keen to see opportunities for artistic and professional development during these challenging times, to support creative approaches to practice amid imposed restrictions and to explore the benefits and possibilities of sharing with an online audience.

It’s been some months since the residency ended and I really learned a lot. In this three-part blog post series I’ll be talking a bit about the month of learning and creating, the struggles I had, what I did to overcome them, and some of my thoughts on the final outcome. In this first blog post I’ll be going over my research and influences.

My reason for doing the residency was to explore ways of making work without a computer. Quoting from my application:

Creating my digital video works is a very stationary process, requiring me to spend long hours sat in my home office at my desk on the computer. I have long had a desire to step away from the desk and learn film and sound production techniques. I already own much of the required equipment including a DSLR camera, microphone and tripod. I have mainly used these to document events or exhibitions.

This residency would grant me the opportunity to step into learning film production techniques. I will study available materials (digital books and tutorial videos) and implement what I learn when creating the films.

Looking back over the last 10 years of my practice I have noticed that most of my work has been computer generated videos and animation.

Loud Tate: Code

Most of these works are generative and, much like animated gifs, they don’t have an extensive narrative and are best viewed on repeat. This isn’t a downside to the works, but making something with a narrative using filmed footage was definitely of interest to me for this residency.

I began the residency exploring the technical processes involved in film making. I have used cameras for a long time but often I don’t explore their full capabilities. I usually just leave the settings on Auto and most of the time it works out fine! This is similar for lenses. The camera I owned at the time of the residency was an Olympus Pen F together with 45mm and 17mm lenses. I only ever really understood that the former is good for portraits and the latter for landscapes/outdoors, but I still didn’t understand why.

I wanted to understand this and more so spent a lot of time watching videos and reading tutorials. Two really interesting videos were The Changing Shape of Cinema: The History of Aspect Ratio and The Properties of Camera Lenses from Filmmaker IQ.

These two videos, and the many others I watched late one evening, went into far more detail than I needed about film, the history of cinema, and equipment. I also didn’t own 99% of the equipment and resources the videos mention, but it was really interesting to know how all those things go into making a film and achieving a certain cinematic look.

The next set of videos that was really insightful was the Crash Course Film Production series. The Filmmaker IQ videos focused on specific details about film making, whereas these videos were perhaps more relevant to me as they were produced from the viewpoint of someone with no knowledge wanting to know what goes into making a film. The third video, The Filmmaker’s Army, is particularly enlightening as it explains a lot of the roles in a film production and how they work together to make a finished film.

One of the main things I took from watching this series of videos is that there is a lot of planning that goes into a film. Depending on the scale of the project the time between writing a script and filming can be years! And when on a film set a lot of the roles are there to ensure each person is doing the correct things at the right time.

Although all of this was really exciting and inspiring to learn at the beginning of the residency, there was one big problem: almost none of it would be applicable to me at this time. Quoting my application:

Using tools and materials I have in my home – which include programmable lights, a projector, screens, and other electronics – I want to create a series of short abstract films that explore the use of digital art, light, and projection to illuminate my home and immediate surroundings. The everyday objects in the home, the grass outside, the brickwork and more will act as both creative material and canvas for abstract projections.

I was strict in my desire to create a film only within the home. This meant that I couldn’t acquire stage lights, microphones or other equipment. I had to use whatever I had in whatever filming conditions I was given. Still, these restrictions could hopefully provide inspiration.

Early on I struggled to make anything interesting. I filmed whatever I could find in my home but it was all very static and at times boring. It was then that I realised that the domestic environment, especially during lockdown, is a pretty boring place! In my household there are only two people and the environment doesn’t change that much. It’s not like the outdoors where the environment changes, or like a gallery space which can be reconfigured and has access to lots of equipment. In short, everything is just static. I was very worried that whatever I made would be very boring to watch.

I started to look to other films and artists for inspiration. I was browsing Mubi one day and saw a movie called Villa Empain by Katharina Kastner. I had no idea what it was about at the time but it was short and gave me a distraction from the panicking!

It turned out to be exactly the kind of film I needed to see. To me it was a series of animated portraits of the Villa Empain building. A lot of the shots in the film were static, featuring minimal movement from the pool water, trees, or sun shining through the stained glass windows. It was quite a meditative film. It helped to show me that a film didn’t need to be action packed to be interesting.

I also remembered the work of Rodell Warner (having first seen his work in 2019 at bcc:). In his Augmented Archive series he’ll take an archive picture, add a drone soundtrack to it and animate it using a flickering effect (plus his own 3D sculptures). Of course there is a much deeper concept behind it than my very technical description suggests (and you should see more of his work to understand), but seeing his work showed me that there are ways to add depth and movement to static imagery.

In the next blog post I’ll be detailing the process of filming shots.

10 years since GLI.TC/H

Exactly 10 years ago the first GLI.TC/H was starting in Chicago, IL. Attending that festival was a turning point in my practice and, the more I reflect on it, an important part of my personal life. Here I want to reflect on that a bit.

GLI.TC/H is an international gathering of noise & new media practitioners in Chicago from September 29 thru October 03, 2010!

GLI.TC/H features: realtime audio & video performances with artists who misuse and abuse hardware and software; run-time video screenings of corrupt data, decayed media, and destroyed files; workshops and skill-share-sessions highlighting the wrong way to use and build tools; a gallery show examining glitches as processes, systems, and objects; all in the context of ongoing dialogues that have been fostered by experimentation, research, and play. GLI.TC/H is a physical and virtual assembly which stands testament to the energy surrounding these conversations.

Projects take the form of: artware, videos, games, films, tapes, code, interventions, prints, plugins, screen-captures, systems, websites, installations, texts, tools, lectures, essays, code, articles, & hypermedia.

In 2010 I was definitely in a much different place than I am now. I was three years out of university, living in Birmingham and struggling to find my place as an artist. What I was missing, besides paid artistic opportunities, was a community of like-minded people. My life wasn’t completely devoid of artistic activities: I had connected with Constant in Brussels, Belgium and took part in several of their activities; I had started fizzPOP with Nikki Pugh, which opened my eyes to what was possible with technology on a technical level; being part of/around A.A.S Group taught me a lot about collective noise and art making; BiLE got me thinking about live performance and was my introduction to live visuals. Still, I was looking for more places I could get creative with technology and meet artists using technology. At the time I believe I said I was looking for “software artists”.

HLLEO

Discovering glitch art in 2009 certainly set me on a path to finding that community. From the early days of reading stAllio’s databending tutorials I found myself engrossed in all that it could offer, and it offered quite a lot! The glitch artists freely shared their techniques, code, theories and thoughts on glitch and glitch art. It was really refreshing to see people being so open, especially having come out of universities where knowledge is a luxury accessible only to those with money or those willing to accrue debt. Even post university I was put off by tutorials and exhibiting opportunities that were behind paywalls or “pro” subscription models. I would eventually join in this sharing when I documented how to Databend using Audacity.

Gabe, Abbey, L and me

Anyone who knew me at that time would tell you how much glitch art excited me! It was the perfect combination of art, programming and creative exploration. The randomness inherent in glitch art practices just adds further to the intrigue.

When the announcement of the GLI.TC/H event dropped in my inbox I was really excited! Having my I Am Sitting in A Room video exhibited there was exciting in itself, but what I looked forward to most was meeting all of the people behind the user names whose work I admired. The e-mail communications have long since been deleted, but in that short period between 2009 and mid-2010 I think I had already started dialogues with artists such as Rosa Menkman and Nick Briz, and so being able to be around them (and other glitch artists) and exchange knowledge and skills IRL was cool!

I hopped on a plane (the plane ticket being gifted to me as a birthday present) and a short 9 hours later I landed at Chicago O’Hare in the early morning, where I was greeted at the airport by a smiling Nick Briz. I arrived a couple of days before GLI.TC/H started and so I spent my time meeting other artists, staff and students at SAIC (such as Jon Cates and Jon Satrom), and helped everyone at the venues to get the exhibitions ready.

GLI.TC/H

I immediately felt like I had found the community I was looking for. Everyone I met was so welcoming and friendly. It definitely helped that we were all there because of our shared interest in glitches, but even without this uniting factor everyone was approachable and made the most of the fact we were there in the same place IRL.

GLI.TC/H dinner

The days and events that followed were, well, probably one of the best weeks I had in that period of my life. Lots of parties, exhibitions, lectures, presentations, beers, and the biggest pizza I ever had!

I made a very glitchy video diary of my time there:

Arriving back in Birmingham I was fully inspired! I had had a glimpse of the kind of community I wanted to see and so put everything into bringing that same spirit and approach to digital art to Birmingham. In the following year I was a guest curator for GLI.TC/H in Birmingham at VIVID. This started my relationship with VIVID (and later Vivid Projects), which carried on for many years and gave me the opportunity to organise more experimental digital art things such as BYOB, Stealth, No Copyright Infringement Intended, and the various exhibitions at Black Hole Club.

Going to GLI.TC/H really benefited my confidence as an artist. It came at a time when I was struggling a lot, but being around a community of friendly people showed me that there was a place – both online and offline – for the weird glitchy stuff that I wanted to make!

I’ve been following the practices of many of the people I met and it’s been inspiring watching them develop and seeing how, or even if, glitch art continues to be a part of their work. Personally, glitch art is still a part of my practice, but more as a tool and method than the conceptual focus.

I’ll wrap up now and say that GLI.TC/H was great! Thanks to the GLI.TC/H Bots for making it happen.

Typewriter Text Revisited Revisited

This ongoing adventure to create a typewriter text effect has had a lot of twists and turns over the years. Back in 2011 I used Pure Data to achieve this effect. Fast forward to 2019 and I experimented with Kdenlive and Natron before settling on Animation Nodes. In an April 2020 update I detailed how I used Animation Nodes and attempted to use Aegisub to create this effect. Around the same time I had started experimenting with expressions in Natron to achieve the same effect.

The value of a parameter can be set by Python expressions. An expression is a line of code that can either reference the value of other parameters or apply mathematical functions to the current value.

The expression will be executed every time the value of the parameter is fetched from a call to getValue(dimension) or get().

In theory, with Natron expressions I could create a counter that would increment on every frame and type words out character by character. Y’know, like a typewriter. I’m forever learning Python, so after a lot of effort, and a lot of help from people on the Natron forum, I came up with the following solution. In the Text node I entered the following expression:

originalText = original.text.get()
output = " "
ptr = 0
slowFac = 4
for i in range(frame/slowFac, len(originalText)+1):
	if frame/slowFac < len(originalText):
		ptr=frame/slowFac
	else:
		ptr=len(originalText)
ret = originalText[0:ptr]

A fellow Natron user greatly simplified the code and presented the following solution:

text = Text1.text.get()
ret = text[:frame-1]

Success! I used this in the last video for Design Yourself:

The typewriter text effect starts from 01:04. The same Natron user also posted an alternative solution.

I noticed a bug which meant that I couldn’t change the speed at which the letters typed out. One method of speeding up the text would be to use ret = text[:frame*2-1] or a different multiplier (there’s a quick sketch of this below). However, I wanted something a little bit more precise, so I thought about using the Retime node. Unfortunately there was a bug which prevented this. The workaround of using a Constant node worked. In the end the bug got fixed, but not in time for making that Design Yourself video.
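
Here’s that sketch. It folds a multiplier into the simplified expression so the typing speed can be tweaked in one place – purely illustrative (the speed variable is my own naming, and this isn’t the more precise Retime-based approach I was after):

# sketch: a constant speed factor folded into the simplified expression
text = Text1.text.get()   # same Text node as in the simplified expression above
speed = 2                 # 2 = two characters per frame; 0.5 = one character every two frames
ret = text[:max(0, int((frame - 1) * speed))]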

In June I was asked if I could make an intro video for Network Music Festival. The organisers wanted around 10 slides of text to appear throughout the video. Some had only several words on them but some had large blocks of text.

I had already decided that I wanted to use the typewriter text effect to make the text appear and then hold it for a couple of seconds. This presented an interesting problem. Without a Retime node the text appears one character per frame. With a large block of text 250 characters in length (including spaces) this would take, well, 250 frames to appear, which at 24 fps is over 10 seconds. The organisers wanted the video to be about a minute long, so having one slide take up 10 seconds would be far too long.

What I needed was a method for making an arbitrary amount of text appear within a specific time/frame count. My final Natron expression (after a bit of bug fixing) looked like this:

text = Source.text.get()
letter = 0

# what frame to start triggering the write-on effect
trigger = 15

# how many frames it'll take to write the full text
length = 46

# map values. Taken from here: https://stackoverflow.com/a/1969274
def translate(value, leftMin, leftMax, rightMin, rightMax):
    # Figure out how 'wide' each range is
    leftSpan = leftMax - leftMin
    rightSpan = rightMax - rightMin

    # Convert the left range into a 0-1 range (float)
    valueScaled = float(value - leftMin) / float(leftSpan)

    # Convert the 0-1 range into a value in the right range.
    return rightMin + (valueScaled * rightSpan)


if frame >= trigger:
    letter = int(ceil(translate(frame - trigger, 1, length, 1, len(text))))
else:
    letter = 0

ret = text[:letter]

This expression does several things. It first allows a user to specify at which frame the text will appear (trigger). Then, no matter how much input text there is, it will be mapped to the length value. Oddly, Python doesn’t have a built-in mapping function so I had to use the one from here. Unfortunately it doesn’t work as expected if your Text node has keyframed text changes, so for that you’ll have to use multiple Text nodes. Here’s the finished Network Music Festival video.
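
As a footnote, here’s a quick standalone check of how that mapping behaves outside of Natron (plain Python; inside Natron the frame variable and ret are provided for you, and the 250-character block of text here is just a made-up example):

from math import ceil

# stand-alone copy of the mapping used in the expression above
def translate(value, leftMin, leftMax, rightMin, rightMax):
    leftSpan = leftMax - leftMin
    rightSpan = rightMax - rightMin
    valueScaled = float(value - leftMin) / float(leftSpan)
    return rightMin + (valueScaled * rightSpan)

text = "x" * 250   # a hypothetical 250-character block of text
trigger = 15       # frame at which the write-on starts
length = 46        # how many frames the full write-on takes

for frame in (16, 38, 61):  # just after the trigger, halfway through, and at trigger + length
    letter = int(ceil(translate(frame - trigger, 1, length, 1, len(text)))) if frame >= trigger else 0
    print(frame, len(text[:letter]))   # shows 1, 123 and 250 characters respectively

So regardless of whether a slide has ten words or a whole paragraph, it always finishes typing length frames after the trigger.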

I Am Sitting in a Room Revisited

My remake of the Alvin Lucier artwork I Am Sitting in a Room was released 10 years ago today. To mark this occasion I want to revisit how I made it and the troubles I had remaking it in 2019.

In November 2019 I Am Sitting in a Room was exhibited at Gamerz Festival in Aix-en-Provence in France. It was a pretty nice surprise to be invited to exhibit that piece, especially as it was made in 2010 and hadn’t been exhibited since around 2015.

The festival’s organiser, Quentin, asked if I could remake the film in HD resolution, which is 1920×1080, a 16:9 aspect ratio. The original video was made in the very odd resolution of 1440×1152, which works out as a 5:4 aspect ratio (near enough 4:3). In any normal situation using filmed footage this would be a near-impossible task. If you wanted to convert such footage to 16:9 you’d either have to stretch it or crop it and lose information. See these examples below:

Crop to fit

Stretch to fit

For I Am Sitting in a Room in theory this isn’t a problem as the piece is black text against a white background. So, I would just need to make the background larger and center the text.

Before going into the problems I faced remaking the piece in 16:9 it’s worth going over how it was made. Back in 2010 my programming skills were, to be blunt, crap. Making this piece required me to learn a lot about Linux, loops, automation and bash programming. Also back in 2010 I wasn’t that good at documenting my processes and so looking back at my source files some of them were not in a good state or were just missing. So, part of the following is pieced together from memory and what remains on my hard drives.

In 2009 I wrote about “glitching” SVG files and became really interested in that as a format to work with. Even today I like working with SVGs as they’re highly editable and can be used in many ways (vinyl cutting, plotting, laser printing, screen printing, web design etc).

If you’ve ever done any glitching via the command line you’ll know that you can utilise sed to automate the glitching of files. sed didn’t quite work on SVGs as it would always destroy them and so I asked Garry Bulmer to write a script to glitch SVG files. It worked great and left the SVGs in an editable state.
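
Garry’s script isn’t something I’ll reproduce here, but as a purely hypothetical Python sketch of the general idea – corrupt the numbers inside the path data while leaving the markup itself alone, so the SVG stays valid and editable – it might look something like this:

import random
import re

def glitch_svg(svg_text, amount=0.1):
    # randomly swap a fraction of the digits inside d="..." path data,
    # leaving tags and attributes untouched so the file stays editable
    def mangle(match):
        chars = list(match.group(2))
        for i, ch in enumerate(chars):
            if ch.isdigit() and random.random() < amount:
                chars[i] = str(random.randint(0, 9))
        return match.group(1) + "".join(chars)

    return re.sub(r'(\bd=")([^"]+)', mangle, svg_text)

with open("font.svg") as f:
    glitched = glitch_svg(f.read(), 0.1)

with open("font_glitched.svg", "w") as f:
    f.write(glitched)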

At some point later it was brought to my attention that font glyphs are formed in the same way as vector files in that the glyphs are made up of paths and nodes, hence why they are infinitely scalable. SVG fonts was/is a specification for, well, representing fonts as an SVG file. So, if I could convert a font to SVG then in theory I could glitch it!

First I needed a font to work with. The very popular Ubuntu font wasn’t out at that point, so I opted to use Liberation Sans. It’s a free font and has a lot of glyphs.

Using FontForge I converted the font to an SVG font (File > Generate Fonts). I then used the glitch script from Garry Bulmer like so:

#!/bin/bash
# repeatedly glitch the SVG font, saving a progressively glitchier copy each time
no=1
while [ $no -le 1001 ]
do
echo $no
# write to a temporary file first: redirecting straight back into font.svg
# would truncate it before the glitch script has read it
./glitch 0.1 < font.svg > font_tmp.svg && mv font_tmp.svg font.svg
cp font.svg dataface_$no.svg
no=`expr $no + 1`
done

Then I had a folder of 1002 SVG fonts. I needed to turn them back into TTF files and so used the following FontForge script to convert them back to TTFs:

#!/usr/local/bin/fontforge
Open($1)
Generate($1:r + ".ttf")

And ran it over the whole folder with this script:

#!/bin/bash
no=1
while [ $no -le 1001 ]
do
echo $no
fontforge -script convert_font_fontforge.pe dataface_$no.svg
no=`expr $no + 1`
done

Finally, I generated the frames used for the video by creating an SVG file containing the text, set in the Liberation Sans font. For each frame I swapped the font file for the next, slightly glitchier one:

#!/bin/sh
no=1
while [ $no -le 1001 ]
do
echo $no
sudo rm /home/hellocatfood/.fonts/*.ttf
cp /home/hellocatfood/Desktop/dataface_$no.ttf /home/hellocatfood/.fonts/
convert /home/hellocatfood/Desktop/glitch.svg /home/hellocatfood/Desktop/glitch_$no.jpg
no=`expr $no + 1`
done

Ta da! Not a very elegant solution, but it was 2010. And it worked! You already saw the result at the beginning of this blog post. So, to make an HD version of it, in theory all I needed to do was create an SVG file that was 16:9/1920×1080 and repeat this process. Here’s the result of that process:

Yikes! Not at all like the original.

You may be asking why I don’t just convert the text to paths (Ctrl + Shift + C in Inkscape). That would do away with the need to generate 1002 font files. Here’s what that would have looked like:

However, it differs from the original in one unique way. In the original, once the glitches start there are lots of random question marks that appear and float down the screen. I don’t know why this happens, but I suspect that ImageMagick doesn’t know how to interpret the glitched glyphs and so produces an error character.

There are also some frames which fail to render the font at all and in a couple of instances render a serif font!

The results in the original are not consistent, but the piece is an exploration of glitches and so whatever glitch was produced is whatever glitch I used. So there.

Therefore, any remake which didn’t look like the original wouldn’t be faithful to the original, and I simply didn’t want to exhibit a compromised video.

One promising solution which – TL;DR – didn’t work was to try and replicate the environment I made the original work in. That is, to try and replicate an Ubuntu computer from 2010. I booted up Ubuntu 10.10 on a virtual machine (using Virtual Box) and got to work.

Ubuntu 10.10

ImageMagick was failing to read the font files, so I used GIMP on the command line to read the SVG and convert it to PNGs.

gimp -n -i -b - <<EOF
(let* ( (filelist (cadr (file-glob "sitting_in_a_room.svg" 1))) (filename "") (image 0) (layer 0) )
(while (pair? filelist)
(set! image (car (gimp-file-load RUN-NONINTERACTIVE (car filelist) (car filelist))))
(set! layer (car (gimp-image-merge-visible-layers image CLIP-TO-IMAGE)))
(set! filename (string-append (substring (car filelist) 0 (- (string-length (car filelist)) 4)) ".png"))
(gimp-file-save RUN-NONINTERACTIVE image layer filename filename)
(gimp-image-delete image)
(set! filelist (cdr filelist))
)
(gimp-quit 0)
)
EOF

cp sitting_in_a_room.png images/glitch_$no.png

Which produced this result:

This method shows error characters but not the same question mark error characters that were in the original. No matter what I tried it just wasn’t working.

So, in the end unfortunately I admitted defeat and exhibited the original 4:3 video. It would be easy to blame the 2010 version of myself for not creating an air-gapped iso of Ubuntu or for not properly documenting my processes, but how was I to know that I would be revisiting the piece 10 years later! Heck, in 2010 HD resolution was still not widespread, so I was really just working with what I knew.

And really, this highlights the volatility of glitches.

OCR to Text to Speech

For the sixth video in the Design Yourself series the group worked with artist Erica Scourti. For the activity the participants used optical character recognition (OCR) software to generate poetry from their own handwriting and from writing (leaflets, signage) found throughout the Barbican building.

The next stage in the workshop was going to be to take this extracted text and run it through a text to speech synthesizer, but unfortunately there wasn’t time to get to this stage.

One of the things I liked about the software they used was that it showed you the image of the text that it recognised and extracted, producing a kind of cut-and-paste poetry.

To make the sixth video I wanted to somehow utilize this OCR and text-to-speech process and make a video collage of words and synthesized speech. The challenge was finding a way to do this using only open source software. Finding open source OCR software that works on Linux is not a problem. After a while I discovered that Tesseract is the gold standard for OCR software and that most other software acts as a frontend or interface to it. Here are a few examples:

However, they all output only the text, and not the image of the extracted text. I’m aware that my use case is quite specific so I don’t blame the developers for this.

Eventually I took to Twitter and Mastodon with my questions. _vade pointed to a bug report on Tesseract which showed that getting the coordinates of recognised words is possible in Tesseract. If I knew the coordinates of words then perhaps I could use that to extract the image of the word. However, doing it this way required using its C interface, and learning C wasn’t feasible at the time.

After some further digging around Tesseract I found a bug report that makes reference to hOCR files:

hOCR is an open standard of data representation for formatted text obtained from optical character recognition (OCR). The definition encodes text, style, layout information, recognition confidence metrics and other information using Extensible Markup Language (XML) in the form of Hypertext Markup Language (HTML) or XHTML.

This file looked like it contained the coordinate data I needed, and obtaining such a file from Tesseract was as simple as running one command. The next task was finding a tool (or tools) to interpret hOCR files. Here’s a selection, which should really be added to the previous list to form a mega-list:

hocr-tools proved to be the most feature complete and stable. It runs on the command line, which opens it up for easy automation and combining with other programs. After reading the documentation I found a process for extracting the images of words and even making videos from each word/sentence with synthesized speech. Here’s how I did it:

Generate hOCR file

First I needed to generate a hOCR file using Tesseract. For the example I used the first page from the first chapter of No Logo by Naomi Klein.

tesseract book.jpg book hocr

This produced a file called book.hocr. If you look at the source code of the file you can see it contains the bounding box coordinates of each word and line.
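
As an aside, that coordinate data is easy to get at yourself. Here’s a small, purely illustrative Python snippet – not part of the process below, which uses hocr-tools – that pulls the word bounding boxes out of book.hocr using only the standard library:

import re

# hOCR stores coordinates in title attributes, e.g.
#   <span class='ocrx_word' title='bbox 445 156 520 183; x_wconf 96'>Logo</span>
with open("book.hocr") as f:
    hocr = f.read()

# crude pattern: assumes each word span contains plain text with no nested tags
word_pattern = re.compile(
    r"class=.ocrx_word.[^>]*title=.bbox (\d+) (\d+) (\d+) (\d+)[^>]*>([^<]+)<"
)

for x0, y0, x1, y1, word in word_pattern.findall(hocr):
    print(word.strip(), (int(x0), int(y0), int(x1), int(y1)))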

Extract images

Using the hOCR file I can extract images of the lines

hocr-extract-images -P 10 book.hocr

This generates both a PNG of each line and a corresponding text file containing its text.

Text to speech

Using eSpeak I can generate a wav file of a synthesized voice reading each line.

for file in *.txt ; do espeak -z -f $file -w ${file%.*}.wav ; done

Make video clips

Finally, I needed to combine the PNG image of the text with the WAV file of the synthesized speech into a video:

for i in $(seq -f "%03g" 1 83) ; do ffmpeg -loop 1 -i line-$i.png -i line-$i.wav -c:v libx264 -tune stillimage -vf scale="width=ceil(iw/2)*2:height=ceil(ih/2)*2" -pix_fmt yuv420p -shortest -fflags +shortest ${i%.*}.mp4 -y ; done

I used the -fflags +shortest option because without it the video always had a couple of seconds of silence added to it.

By the end of this I had a folder full of lots of files.

Voila!

The last part of this was to manually arrange the video clips into a video collage. I made this example video to demonstrate to the group what could be done.

Getting to this point took some time but with what I’ve learnt I can replicate this process quickly and simply. In the end the group decided not to use OCR to generate text and instead wrote something themselves. They did still use text-to-speech software and even filmed themselves miming to it. Here’s the finished video:

This was the last video I made for the Design Yourself project. I’ve written about the techniques used to make the older videos in past blog posts. Go read the Barbican website for more information on the project.

Copy Paste photos

Photos of the Copy Paste exhibition, currently taking place at Piksel in Bergen, Norway, and online in the Piksel Cyber Salon.

Copy Paste installation photos

The full list of exhibiting artists is Carol Breen, Constant, LoVid, Lorna Mills, Matthew Plummer-Fernandez + Julien Deswaef, Duncan Poulton, Eric Schrijver, and Peter Sunde.

Photos taken by Maite Cajaraville. More photos can be seen here.

If you have the opportunity to see the exhibition in person please do! It’s open until 21st June.

Copy Paste opening

Copy Paste opened at Piksel in Bergen on the evening of Friday 22nd May. Sadly, neither I nor any of the exhibiting artists were able to be there, but fortunately Piksel live streamed the whole thing.

After all of the uncertainty about whether Copy Paste would go ahead I’m really happy that the situation in Bergen has been good enough for the exhibition to welcome visitors. It was sad not to be in Bergen myself to see everything IRL, but I’m thankful to Maite and Gisle, Directors of Piksel, for handling all of the logistics and installation of the works.

Copy Paste exists in two spaces. In the physical studio space visitors can find works by Carol Breen, Constant, Lorna Mills, Duncan Poulton, Eric Schrijver, and Peter Sunde.

Here’s a pixelated look at some of the artworks as captured by me in the UK from the livestream:

Copy Paste also exists as a virtual online exhibition in the Piksel Cyber Salon.

The space is built using Mozilla Hubs and works in your web browser (or VR headset if you have one). In the Cyber Salon you can find works by LoVid, Matthew Plummer-Fernandez + Julien Deswaef, Carol Breen, and Duncan Poulton.

Many thanks to Malitzin Cortes for designing this space. You can visit it at any time and all of the live streamed events will also be streamed there.

Events

Speaking of events check out these upcoming events happening as part of Copy Paste!

Curator’s Tour

24th, 31st May and 7th, 14th, 21st June, 13:00 – 14:00 CEST
Each Sunday at 13:00 – 14:00 CEST I’ll be giving a tour of the exhibition (remotely, obvs), talking a bit about each artwork and how it contributes to the exhibition and explores ideas around copying.

Live Coding Algorave Performance with Alex McLean and Antonio Roberts

29th May 23:00 – 00:00 CEST
On 29th May 23:00 – 00:00 CEST, Alex McLean and I will be doing a live coding performance. Alex will be doing his usual patterns of sample-based music and visually I’ll be mixing things up a bit.

Authors of the Future

6th June 18:00 – 20:00 CEST
An online presentation from Constant of Authors of the Future, with a focus on the Cinemas Sauvage license. This license shows the pitfalls and fun (im)possibility of coming to an agreement with a bunch of anarchist people who do not want to agree on a rule.

Internet Archaeology for Beginners

7th June 16:00 – 18:00 CEST
Join artist Duncan Poulton on 7th June 16:00 – 18:00 CEST for a virtual workshop which offers an introduction to techniques for mining and misusing the web for creative reuse. Attendees will visit the depths of the internet that search engines don’t want you to find, and learn to make their own digital collages from the materials they gather.

To book onto Duncan’s workshop and find out more about the other events send an e-mail to piksel20(at)piksel(dot)no

Hope y’all enjoy the exhibition!

Seamless Looping Neon Trail in Blender

I’d like to return to the fifth video in the Design Yourself series to show how I did the glowing neon trail. The video is heavily themed around robots, and if you look in the background you’ll see that it’s actually a circuit board.

The circuit diagram was a random one I built using the rather excellent Fritzing software. If you’re ever looking for high quality SVG illustrations of electrical components then Fritzing is a great resource. I brought the exported SVG diagram into Blender to illustrate it a bit.

If you look closely you can see that the circuit board has a glowing trail. To achieve this effect I followed this tutorial:

At around 6:00 the author tries to find the point where the neon trail position loops, but does it visually. At first I was doing the same, but then I remembered that in the past I had faced a similar problem when trying to loop the Wave texture. To get an answer to that question I consulted the Blender Stack Exchange site.

I adapted this a bit and came up with the following solution: to seamlessly loop the neon trail effect, first insert a keyframe with the Value of the Add node set to 0. Then move to the point along the timeline where you want it to loop, add another keyframe to the Value of the Add node and type (0.3333*pi)/$scale (replacing $scale with whatever the Scale of the Wave texture is). My node setup is the same as in the video but here it is as well:

click to embiggen

Now when you play the animation the neon trail effect will loop seamlessly!
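
As a quick worked example of that second keyframe (assuming the formula above and a Wave texture Scale of 5 – an arbitrary value picked just for illustration):

from math import pi

scale = 5  # the Wave texture's Scale (arbitrary example value)
value = (0.3333 * pi) / scale
print(round(value, 4))  # 0.2094 – the Value to type into the Add node's second keyframe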

Typewriter Text Effect Revisited

For the fifth video in the Design Yourself series I was faced yet again with the task of doing a typewriter text effect. Yay… For each of the videos the participants wrote a poem to go with it. The poems were a really important part of the video so they needed to have a prominent role in the video beyond standard Youtube subtitles. At the time I was producing the video I didn’t yet know whether or not I wanted to use the typewriter text effect but I certainly wanted to explore it as a possibility. One of the first times I tried to achieve this was back in 2019 when I was making the video to promote the Algorave at the British Library.

Since making that video, to add subtitles to the second Design Yourself video I had been using a Natron plugin that allows synced playback of an audio file via VLC. By using this approach I could seek through the audio and add keyframes to the Text node when the text changed. This mostly worked, but sometimes the audio did go out of sync. At some point afterwards I wondered if I could offload the editing of the subtitles – and maybe even subtitles with a typewriter text effect – onto another, more specialised program and then import that into Natron later for compositing. Subtitle editing software already exists and is much better suited to editing subtitles than Natron.

I explained this on the Natron forum to one of the developers and some time later the Text node gained the ability to read an SRT subtitle file! An SRT file is a subtitle file format that is widely used with video files. If you open one up you can see exactly how it works (there’s an example further down this post).

The way the SRT reading function works in the new Text node is that it basically gets the time stamps of the text in the SRT file and assigns them to keyframes in the Text node. Yay! 🙂

The next stage in the equation was to find a subtitle editor capable of doing a typewriter text effect. There are several open source solutions out there, including Subtitle Composer and Gaupol. One of the available programs, Aegisub, has quite the feature set. It has a karaoke mode which, as you would expect, lets you edit the timing of words as they appear on screen.

This sounds like the solution to my typewriter text problem but there’s one big catch: the karaoke text mode only works if the file is exported in the .ssa file format. The SubStation Alpha format supports a lot of formatting options, including typewriter-like text effects. This would be good except that Natron only supports the .srt format, and even if it did support .ssa files I’d still want control over the formatting.

To make this work in an SRT file, what I needed was for each word to be appended at user-defined points. For example:

1
00:00:00,000 --> 00:00:01,000
Some

2
00:00:01,000 --> 00:00:02,000
Some BODY

3
00:00:02,000 --> 00:00:03,000
Some BODY once

4
00:00:03,000 --> 00:00:04,000
Some BODY once told me

5
00:00:04,000 --> 00:00:05,000
Some BODY once told me the

6
00:00:05,000 --> 00:00:06,000
Some BODY once told me the world

7
00:00:06,000 --> 00:00:07,000
Some BODY once told me the world is

8
00:00:07,000 --> 00:00:08,000
Some BODY once told me the world is gonna

9
00:00:08,000 --> 00:00:09,000
Some BODY once told me the world is gonna roll

10
00:00:09,000 --> 00:00:10,000
Some BODY once told me the world is gonna roll me
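
To make that concrete, here’s a rough, hypothetical Python sketch of how a cumulative SRT file like the one above could be generated from a list of words and timings – purely illustrative, and not something I actually used for these videos:

# rough sketch: write a cumulative, typewriter-style SRT file from a word list
def srt_time(seconds):
    # format seconds as an SRT timestamp, e.g. 3.0 -> 00:00:03,000
    millis = int(round(seconds * 1000))
    h, rem = divmod(millis, 3600000)
    m, rem = divmod(rem, 60000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

words = ["Some", "BODY", "once", "told", "me", "the", "world", "is", "gonna", "roll", "me"]

entries = []
for i, word in enumerate(words):
    start, end = float(i), float(i + 1)      # one word per second, as in the example above
    text = " ".join(words[: i + 1])          # cumulative: everything "typed" so far
    entries.append(f"{i + 1}\n{srt_time(start)} --> {srt_time(end)}\n{text}\n")

with open("typewriter.srt", "w") as f:
    f.write("\n".join(entries))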

It was clear that, at this time, the Aegisub SRT export couldn’t do this, so in the meantime I made a feature request and reverted back to using the method I described in the July 2019 Development Update, which makes use of Animation Nodes.

But even then I had a few issues. To synchronise the text with the speech I was expecting that I could use keyframes to control the End value in the Trim Text node. However, as explained by Animation Nodes’ developer, the keyframes of custom nodes aren’t visible on the Dope Sheet and so don’t animate anyway. To get around this I used the developer’s suggestion:

I suggest you create a controller object in the 3D view that you can animate normally. Then you can use e.g. the x location of that object to control the procedural animation.

This worked in the viewer but not when I rendered it. What I instead got was just one frame rendered multiple times. This problem has been reported in Animation Nodes but the solutions suggested there didn’t work. Neither did doing an OpenGL render.

However, I came across this interesting bug report. It seems like the crashing I was experiencing back in 2019 is a known bug. The solution suggested there was to use a script to render the project instead of the built-in render function.

import bpy

scene = bpy.context.scene
render = scene.render
directory = render.filepath

for i in range(scene.frame_start, scene.frame_end):
    scene.frame_set(i)
    render.filepath = f"{directory}{i:05d}"
    bpy.ops.render.render(write_still = True)

render.filepath = directory

The downside of this script is that it doesn’t show the animation being rendered. It also doesn’t allow you to set the start or end point of the render but that is easily accomplished by changing the range in line 7.

After all of that I was able to render the text with the rest of the video! The finished video is below:

I’m getting one step closer to being able to easily create and edit typewriter text using open source software…