Between projects I often make little animations. Sometimes they’re made as a result of learning new software and sometimes I make them to test out an idea or technique.
I’ve decided that I’ll start a regular thing on this here blog – hopefully monthly – where I share some of those gifs and other little animations.
The ideas for the above gifs came from a self-portrait I made for the first issue of the This and That zine.
I really liked the texture I had used on the face and decided to make some random animations (similar to the ones I did for Improviz) using the same texture. I did some post-processing in Natron (e.g. the pixelation and desaturation).
Exactly 10 years ago the first GLI.TC/H was starting in Chicago, IL. Attending that festival was a turning point in my practice and, the more I reflect on it, an important part of my personal life. Here I want to reflect on that a bit.
GLI.TC/H is an international gathering of noise & new media practitioners in Chicago from September 29 thru October 03, 2010!
GLI.TC/H features: realtime audio & video performances with artists who misuse and abuse hardware and software; run-time video screenings of corrupt data, decayed media, and destroyed files; workshops and skill-share-sessions highlighting the wrong way to use and build tools; a gallery show examining glitches as processes, systems, and objects; all in the context of ongoing dialogues that have been fostered by experimentation, research, and play. GLI.TC/H is a physical and virtual assembly which stands testament to the energy surrounding these conversations.
Projects take the form of: artware, videos, games, films, tapes, code, interventions, prints, plugins, screen-captures, systems, websites, installations, texts, tools, lectures, essays, code, articles, & hypermedia.
In 2010 I was definitely in a much different place than I am now. I was three years out of university, living in Birmingham and struggling to find my place as an artist. What I was missing, besides paid artistic opportunities, was a community of like-minded people. My life wasn’t completely devoid of artistic activities: I had connected with Constant in Brussels, Belgium and took part in several of their activities; I had started fizzPOP with Nikki Pugh, which opened my eyes to what was possible with technology on a technical level; being part of/around A.A.S Group taught me a lot about collective noise and art making; BiLE got me thinking about live performance and was my introduction to live visuals. Still, I was looking for more places where I could get creative with technology and meet artists using technology. At the time I believe I said I was looking for “software artists”.
Discovering glitch art in 2009 certainly set me on a path to finding that community. From the early days of reading stAllio’s databending tutorials I found myself engrossed in all that it could offer, and it offered quite a lot! The glitch artists freely shared their techniques, code, theories and thoughts on glitch and glitch art. It was really refreshing to see people being so open, especially having come out of universities where knowledge is a luxury accessible only to those with money or those willing to accrue debt. Even post-university I was put off by tutorials and exhibiting opportunities that were behind paywalls or “pro” subscription models. I would eventually join in this sharing when I documented how to Databend using Audacity.
Anyone who knew me at that time would tell you how much glitch art excited me! It was the perfect combination of art, programming and creative exploration. The randomness inherent in glitch art practices only added to the intrigue.
When the announcement of the GLI.TC/H event dropped in my inbox I was really excited! Having my I Am Sitting in A Room video exhibited there was exciting in itself, but what I looked forward to most was meeting all of the people behind the usernames whose work I admired. The e-mail communications have long since been deleted, but in that short period between 2009 and mid-2010 I think I had already started dialogues with artists such as Rosa Menkman and Nick Briz, so being able to be around them (and other glitch artists) and exchange knowledge and skills IRL was cool!
I hopped on a plane (the ticket being gifted to me as a birthday present) and a short 9 hours later I landed at Chicago O’Hare in the early morning, where I was greeted by a smiling Nick Briz. I arrived a couple of days before GLI.TC/H started, and so I spent my time meeting other artists, staff and students at SAIC (such as Jon Cates and Jon Satrom), and helping everyone at the venues get the exhibitions ready.
I immediately felt like I had found the community I was looking for. Everyone I met was so welcoming and friendly. It definitely helped that we were all there because of our shared interest in glitches, but even without this uniting factor everyone was approachable and made the most of the fact that we were all in the same place IRL.
The days and events that followed made for, well, probably one of the best weeks I had at that time. Lots of parties, exhibitions, lectures, presentations, beers, and the biggest pizza I’ve ever had!
I made a very glitchy video diary of my time there:
Arriving back in Birmingham I was fully inspired! I had had a glimpse of the kind of community I wanted to see, and so I put everything into bringing that same spirit and approach to digital art to Birmingham. In the following year I was a guest curator for GLI.TC/H in Birmingham at VIVID. This started my relationship with VIVID (and later Vivid Projects), which carried on for many years and gave me the opportunity to organise more experimental digital art things such as BYOB, Stealth, No Copyright Infringement Intended, and the various exhibitions at Black Hole Club.
Going to GLI.TC/H really benefited my confidence as an artist. It came at a time when I was struggling a lot, but being around a community of friendly people showed me that there was a place – both online and offline – for the weird glitchy stuff that I wanted to make!
I’ve been following the practices of many of the people I met and it’s been inspiring watching them develop and seeing how, or even if, glitch art continues to be a part of their work. Personally, glitch art is still a part of my practice, but more as a tool and method than the conceptual focus.
I’ll wrap up now and say that GLI.TC/H was great! Thanks to the GLI.TC/H Bots for making it happen.
This ongoing adventure to create a typewriter text effect has had a lot of twists and turns over the years. Back in 2011 I used Pure Data to achieve this effect. Fast forward to 2019 and I experimented with Kdenlive and Natron before settling on Animation Nodes. In an April 2020 update I detailed how I used Animation Nodes and attempted to use Aegisub to create this effect. Around the same time I had started experimenting with expressions in Natron to achieve the same effect.
The value of a parameter can be set by Python expressions. An expression is a line of code that can either reference the value of other parameters or apply mathematical functions to the current value.
The expression will be executed every time the value of the parameter is fetched from a call to getValue(dimension) or get().
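For example (a minimal sketch of my own, not taken from the docs; the node and parameter names are assumptions), an expression set on one parameter can read a parameter on another node and transform it, with multi-line expressions assigning their result to ret:

# assume the project contains a Transform node named Transform1
ret = Transform1.rotate.get() * 2   # follow its rotation, doubled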
In theory, with Natron expressions I could create a counter that would increment on every frame and type words out character by character. Y’know, like a typewriter. I’m forever learning Python, so after a lot of effort, and a lot of help from people on the Natron forum, I came up with the following solution. In the Text node I entered the following expression:
# Reveal the source text one character per frame, slowed down by slowFac
# (e.g. 4 = one new character every 4 frames).
originalText = original.text.get()  # text from the node named "original"
slowFac = 4

# How many characters should be visible on the current frame
if frame // slowFac < len(originalText):
    ptr = frame // slowFac
else:
    ptr = len(originalText)

ret = originalText[0:ptr]
The typewriter text effect starts from 01:04. The same Natron user also posted an alternative solution.
I noticed a bug which meant that I couldn’t change the speed at which the letters typed out. One method of speeding up the text would be to use ret = text[:frame*2-1] or a different multiplier, as in the sketch below. However, I wanted something a little more precise, so I thought about using the Retime node. Unfortunately there was a bug which prevented this; the workaround of using a Constant node did the job. The bug was eventually fixed, but not in time for making that Design Yourself video.
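A rough sketch of that multiplier approach (assuming, as in the other expressions here, a Text node named Source feeding the node carrying the expression):

text = Source.text.get()
speed = 2                    # characters revealed per frame
ret = text[:frame * speed]   # frame is provided by Natron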
In June I was asked if I could make an intro video for Network Music Festival. The organisers wanted around 10 slides of text to appear throughout the video. Some had only a few words on them, but others had large blocks of text.
I had already decided that I wanted to use the typewriter text effect to make the text appear and then hold that text for a couple of seconds. This presented an interesting problem. Without a Retime node the text appears one character per frame. A large block of text 250 characters in length (including spaces) would take 250 frames to appear, which at 24 fps is just over 10 seconds. The organisers wanted the video to be about a minute long, so having one slide take up 10 seconds would be far too long.
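The back-of-the-envelope calculation, as plain Python rather than a Natron expression:

chars = 250            # characters in the slide, including spaces
fps = 24.0             # project frame rate
frames = chars         # one character revealed per frame
print(frames / fps)    # ~10.4 seconds for this single slide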
What I needed was a method for making an arbitrary amount of text appear within a specific time/frame count. My final Natron expression (after a bit of bug fixing) looked like this:
text = Source.text.get()
letter = 0
# what frame to start triggering the write-on effect
trigger = 15
# how many frames it'll take to write the full text
length = 46

# map values. Taken from here: https://stackoverflow.com/a/1969274
def translate(value, leftMin, leftMax, rightMin, rightMax):
    # Figure out how 'wide' each range is
    leftSpan = leftMax - leftMin
    rightSpan = rightMax - rightMin
    # Convert the left range into a 0-1 range (float)
    valueScaled = float(value - leftMin) / float(leftSpan)
    # Convert the 0-1 range into a value in the right range.
    return rightMin + (valueScaled * rightSpan)

if frame >= trigger:
    letter = int(ceil(translate(frame - trigger, 1, length, 1, len(text))))
else:
    letter = 0

ret = text[:letter]
This expression does several things. It first allows a user to specify at which frame the text will start appearing (trigger). Then, no matter how much input text there is, it is mapped onto the length value, so the full text always finishes typing that many frames after the trigger. Oddly, Python doesn’t have a built-in mapping function, so I had to use the one from here. Unfortunately it doesn’t work as expected if your Text node has keyframed text changes, so for that you’ll have to use multiple Text nodes. Here’s the finished Network Music Festival video.
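To see what the mapping actually does with concrete numbers, here’s a small standalone sketch (plain Python outside Natron, so ceil is imported explicitly; the 250-character text length and the frame values are just examples):

from math import ceil

def translate(value, leftMin, leftMax, rightMin, rightMax):
    # same range-mapping function as in the expression above
    leftSpan = leftMax - leftMin
    rightSpan = rightMax - rightMin
    valueScaled = float(value - leftMin) / float(leftSpan)
    return rightMin + (valueScaled * rightSpan)

trigger, length, text_len = 15, 46, 250
for frame in (10, 16, 38, 61):
    if frame >= trigger:
        letter = int(ceil(translate(frame - trigger, 1, length, 1, text_len)))
    else:
        letter = 0
    # slicing text[:letter] clamps automatically; clamp here for the printout
    print(frame, min(max(letter, 0), text_len))   # 0, 1, 123, 250 characters visible

With length = 46, every slide finishes typing 46 frames after its trigger, however long its text is.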
Copy Paste is back and will be taking part in Ars Electronica from 9th – 13th September!
This year Ars Electronica’s online programme features exhibitions and events from 120 locations globally, and Piksel in Bergen, the original host of the Copy Paste exhibition back in May, is one of them.
For links to all of these events happening with Piksel at the Cyber Salon see here. Thanks to Piksel for making this happen and Ars Electronica for hosting.