Seamless Looping Neon Trail in Blender

I’d like to return to the fifth video in the Design Yourself series to show how I did a glowing neon trail. The video is heavily themed around robots, and if you look in the background you’ll see that it’s actually a circuit board.

The circuit diagram was a random one I built using the rather excellent Fritzing software. If you’re ever looking for high quality SVG illustrations of electrical components then Fritzing is a great resource. I brought the exported SVG diagram into Blender to illustrate it a bit.

If you look closely you can see that the circuit board has a glowing trail. To achieve this effect I followed this tutorial:

At around 6:00 the author finds the point where the neon trail position loops, but does so visually. At first I was doing the same, but then I remembered that in the past I had faced a similar problem when trying to loop the Wave texture. To get an answer to that question I consulted the Blender Stack Exchange site.

I adapted this a bit and came up with the following solution: to seamlessly loop the neon trail effect, first insert a keyframe with the Value of the Add node set to 0. Then move to the point along the timeline where you want the loop to end, add another keyframe to the Value of the Add node, and type (0.3333*pi)/$scale (replacing $scale with whatever the Scale of the Wave texture is). My node setup is the same as in the video, but here it is as well:
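
If you prefer, the same two keyframes can be set from Blender’s Python console. This is a minimal sketch, assuming a material named "Circuit" whose node tree contains a Math (Add) node named "Add" and a Wave Texture node named "Wave Texture"; the names and frame numbers are illustrative, so adjust them to match your own file:

import bpy
from math import pi

# Assumed material and node names (adjust to your own setup)
nodes = bpy.data.materials["Circuit"].node_tree.nodes
add_value = nodes["Add"].inputs[1]
wave_scale = nodes["Wave Texture"].inputs["Scale"].default_value

# Keyframe the Add node's Value at 0 on the first frame...
add_value.default_value = 0.0
add_value.keyframe_insert("default_value", frame=1)

# ...and at (0.3333 * pi) / scale on the frame where the loop restarts
add_value.default_value = (0.3333 * pi) / wave_scale
add_value.keyframe_insert("default_value", frame=120)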

Now when you play the animation the neon trail effect will loop seamlessly!

Typewriter Text Effect Revisited

For the fifth video in the Design Yourself series I was faced yet again with the task of creating a typewriter text effect. Yay… For each of the videos the participants wrote a poem to go with it. The poems were a really important part of the videos, so they needed a prominent role beyond standard Youtube subtitles. At the time I was producing the video I didn’t yet know whether I wanted to use the typewriter text effect, but I certainly wanted to explore it as a possibility. One of the first times I tried to achieve this was back in 2019, when I was making the video to promote the Algorave at the British Library.

Since making that video, to add subtitles to the second Design Yourself video, I had been using a Natron plugin that allowed synced playback of an audio file via VLC. With this approach I could seek through the audio and add keyframes to the Text node whenever the text changed. This mostly worked, but sometimes the audio would go out of sync. At some point afterwards I wondered if I could offload the editing of the subtitles – and maybe even subtitles with a typewriter text effect – onto another, more specialised program and then import the result into Natron for compositing. Subtitle editors already exist and are much better suited to editing subtitles than Natron.

I explained this on the Natron forum to one of the developers, and some time later the Text node gained the ability to read an SRT subtitle file! SRT is a subtitle file format that is widely used with video files. If you open one up you can see exactly how it works:
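
Here is a minimal, illustrative example. Each entry is a sequence number, a start and end timestamp, and the text to display:

1
00:00:01,000 --> 00:00:04,000
The first subtitle

2
00:00:05,500 --> 00:00:08,500
The second subtitle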

The way the SRT reading function works in the new Text node is that it takes the timestamps of the text in the SRT file and assigns them to keyframes in the Text node. Yay! 🙂

The next step was to find a subtitle editor capable of producing a typewriter text effect. There are several open source options out there, including Subtitle Composer and Gaupol. One of them, Aegisub, has quite the feature set. It has a karaoke mode which, as you would expect, lets you edit the timing of words as they appear on screen.

This sounds like the solution to my problem of getting a typewriter text effect, but there’s one big problem. The karaoke mode only works if the file is exported in the .ssa file format. The SubStation Alpha file format supports a lot of formatting options, including typewriter-like text effects. This would be good, except that Natron only supports the .srt file format, and even if it did support .ssa files I’d still want control over the formatting.
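
For context, SubStation Alpha scripts express karaoke timing with inline {\k} override tags, whose durations are in centiseconds, something plain SRT has no equivalent for. An illustrative Dialogue line:

Dialogue: 0,0:00:00.00,0:00:10.00,Default,,0,0,0,,{\k100}Some {\k100}BODY {\k100}once {\k100}told {\k100}me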

To make this work in an SRT file, what I needed was for each word to be appended at user-defined points. For example:

1
00:00:00,000 --> 00:00:01,000
Some

2
00:00:01,000 --> 00:00:02,000
Some BODY

3
00:00:02,000 --> 00:00:03,000
Some BODY once

4
00:00:03,000 --> 00:00:04,000
Some BODY once told me

5
00:00:04,000 --> 00:00:05,000
Some BODY once told me the

6
00:00:05,000 --> 00:00:06,000
Some BODY once told me the world

7
00:00:06,000 --> 00:00:07,000
Some BODY once told me the world is

8
00:00:07,000 --> 00:00:08,000
Some BODY once told me the world is gonna

9
00:00:08,000 --> 00:00:09,000
Some BODY once told me the world is gonna roll

10
00:00:09,000 --> 00:00:10,000
Some BODY once told me the world is gonna roll me

It was clear that, at the time, the Aegisub SRT export couldn’t do this, so in the meantime I made a feature request and reverted to using the method I described in the July 2019 Development Update, which makes use of Animation Nodes.
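
In the absence of editor support, a cumulative SRT like the example above is straightforward to generate with a script. Here’s a minimal sketch; the word timings are hard-coded for illustration, but in practice they would come from a subtitle editor’s export:

# Build a cumulative "typewriter" SRT from (start time, word) pairs.
# Timings are illustrative; one entry is emitted per pair.
words = [
    (0.0, "Some"), (1.0, "BODY"), (2.0, "once"), (3.0, "told me"),
    (4.0, "the"), (5.0, "world"), (6.0, "is"), (7.0, "gonna"),
    (8.0, "roll"), (9.0, "me"),
]

def srt_time(seconds):
    # Format seconds as HH:MM:SS,mmm
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3600000)
    m, ms = divmod(ms, 60000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

entries = []
for i, (start, _) in enumerate(words):
    # Each entry lasts until the next word appears (1s for the last one)
    end = words[i + 1][0] if i + 1 < len(words) else start + 1.0
    text = " ".join(w for _, w in words[:i + 1])
    entries.append(f"{i + 1}\n{srt_time(start)} --> {srt_time(end)}\n{text}\n")

with open("typewriter.srt", "w") as f:
    f.write("\n".join(entries))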

Even with the Animation Nodes method I had a few issues. To synchronise the text with the speech I expected to be able to use keyframes to control the End value of the Trim Text node. However, as explained by Animation Nodes’ developer, the keyframes of custom nodes aren’t visible on the Dope Sheet and don’t actually animate. To get around this I followed the developer’s suggestion:

I suggest you create a controller object in the 3D view that you can animate normally. Then you can use e.g. the x location of that object to control the procedural animation.
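
Here’s a minimal sketch of that workaround. It animates the x location of an empty, which an Animation Nodes setup could then read (e.g. via an Object Transforms Input node) to drive the Trim Text node’s End value; the object name and frame numbers are illustrative:

import bpy

# Create an empty to act as the controller object
bpy.ops.object.empty_add(type='PLAIN_AXES')
controller = bpy.context.active_object
controller.name = "TextController"

# Keyframe its x location normally: no characters revealed at frame 1...
controller.location.x = 0.0
controller.keyframe_insert("location", index=0, frame=1)

# ...rising to e.g. 50 characters revealed by frame 100
controller.location.x = 50.0
controller.keyframe_insert("location", index=0, frame=100)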

This worked in the viewport but not when I rendered it. What I got instead was just one frame rendered multiple times. This problem has been reported in Animation Nodes, but the solutions suggested there didn’t work. Neither did doing an OpenGL render.

However, I came across this interesting bug report. It seems the crashing I was experiencing back in 2019 is a known bug. The solution suggested there was to use a script to render the project instead of the built-in render function.

import bpy

scene = bpy.context.scene
render = scene.render
directory = render.filepath

# Step through the scene's frame range, rendering each frame to a
# numbered still (frame_end is inclusive, hence the + 1)
for i in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(i)
    render.filepath = f"{directory}{i:05d}"
    bpy.ops.render.render(write_still=True)

# Restore the original output path
render.filepath = directory

The downside of this script is that it doesn’t show the animation being rendered. It also doesn’t let you set the start or end point of the render, but that is easily accomplished by changing the range in the for loop.

After all of that I was able to render the text with the rest of the video! The finished video is below:

I’m getting one step closer to being able to easily create and edit typewriter text using open source software…

Design Yourself: I Never Found Her

In the sixth and final workshop of our Life Rewired inspired Design Yourself project, the Young Creatives worked with artist and writer Erica Scourti.

Erica shared her practice with the group and explored Optical Character Recognition software and speech-to-text translation processes to interrogate how identity and human understanding are influenced by these now-everyday filters.

Cosima Cobley Carr worked with fellow member and composer Pietro Bardini on a soundscape using a sax line shared by Nayla Chouaib. The video shows phone portraits of Vangelis Trichias along with Cosima lip-syncing along with text-to-speech software. Pietro Bardini created the backing soundscape by taking Nayla Chouaib’s saxophone recording through several layers of resonators, reverbs and distortions.

More information here: https://www.barbican.org.uk/read-watch-listen/design-yourself-i-never-found-her

Design Yourself: Party for the End of the World

In the fifth workshop of our Life Rewired inspired Design Yourself project, the Young Creatives worked with New Movement Collective and Fenyce Workspace, who led a practical workshop introducing the group to their prototype project XO.

Through exploring how machines can control and aid human movement through choreography, the group looked at and interacted with project XO, a participatory dance experience. During the session they discussed power, control, agency and empathy in the context of robotic interactions and imagined how to push the boundaries of interactivity in multi-user digital experiences. As a response to the workshop, artists Pietro Bardini and Tice Cin worked with Antonio Roberts to create Party for the End of the World.

More information here: https://www.barbican.org.uk/read-watch-listen/design-yourself-party-for-the-end-of-the-world

Design Yourself: Evasive Techniques

In the fourth workshop of our Life Rewired inspired Design Yourself project, the Young Creatives worked with Yoke Collective in a workshop focused on the implications of facial recognition technology.

The group used Yoke Collective’s method of harnessing make-up and hair extensions to avoid detection by facial recognition technologies. Combined with the creation of digital masks via SPARK AR as the cornerstone of our video, they explored the dynamics of power and privacy in the digital age. As a response to the workshop, artists Pietro Bardini and Vangelis Trichias worked with Antonio Roberts to create Evasive Techniques.

More information here: https://www.barbican.org.uk/read-watch-listen/design-yourself-evasive-techniques

Design Yourself: Feeling the Gallery – Making sound and music out of visual data

In the third workshop of the Life Rewired inspired Design Yourself project, the Young Creatives worked with artist Matthew DF Evans in a workshop that turned the Barbican into a composition using pixel sonification.

Using Matthew DF Evans’ pixel sonification method to turn the Barbican into a composition, we considered the physics of sound and how it passes through you. We are active gestural instruments, technologically enhancing sounds without even noticing. Our mouths filter sound: through every obstructive piece of biomatter, we create resonance and sound decay. We have suppressors in our ears that dampen sound as a means to protect ourselves. In this way, humans are subtractive synthesisers.

More information here: https://www.barbican.org.uk/read-watch-listen/design-yourself-the-barbican

Design Yourself: This home was not built to last

In the second workshop of our Life Rewired inspired Design Yourself project, the Young Creatives worked with artist Laurie Ramsell, who led a practical workshop exploring the concept of ‘human’.

Through exploring trans-human and post-human philosophies, the group looked at examples of Laurie’s work which examine the construct of personhood and how it has been imbued into our culture. Together they created new work that explores the notion of what makes us human and imagined how the label of ‘human’ could be applied in an increasingly digital future. As a response to the workshop, artists Pietro Bardini, Tice Cin and Hector Dyer worked with Antonio Roberts to create This home was not built to last.

More information here: https://www.barbican.org.uk/read-watch-listen/design-yourself-what-is-human

Design Yourself: Augmented Bodies

In the first session of our Life Rewired inspired Design Yourself project, the Young Creatives explored how people are augmenting their bodies with technology.

We looked at examples, both in science fiction and the present day, of people augmenting their bodies with technology. Currently a lot of this exists as wearable devices that read our bodily functions, present us with data and affect our bodies on an external level. Before we started to look at technology implants, I invited each participant to create a mask that would act as a piece of wearable technology that would change them somehow.

More information here: https://www.barbican.org.uk/read-watch-listen/design-yourself-augmented-bodies

Design Yourself

Throughout 2019 and the early part of 2020 I led a programme for Barbican’s Young Creatives called Design Yourself.

What does it mean to be human?
Can technology be used to replicate the pheromone communication of ant colonies?
Can we use technology to mimic the camouflage abilities of chameleons?
Can movement be used as a language, similar to the waggle dance of honey bees?

Inspired by Life Rewired, a collection of young creatives from our Barbican Young Creatives and BA Performance and Creative Enterprise will respond to these questions to explore what it means to be human when technology is changing everything.

Mentored by visual artist Antonio Roberts and in collaboration with four guest artists, the group will create new digital work that explores how scientific and technological advances could allow artists to become ‘more human’ by heightening our natural and creative instincts. As a group they will explore technological impact on sound, movement, language and aesthetics and share their findings through new imaginative works.

The eight participants from Barbican’s Young Creatives were Tice Cin, Zack Haplin, Cosima Cobley Carr, Pietro Bardini, Nayla Chouaib, Evangelos Trichias, Hector Dyer, and Cleo Thomas.

I had the pleasure of inviting some of my favourite artists/art groups to deliver workshops to the participants, exploring lots of issues surrounding our relationship with technology and the future of humanity. The invited artists were: Laurie Ramsell, Matthew DF Evans, Yoke Collective, New Movement Collective, and Erica Scourti.

Over the next few days I’ll be sharing the videos we made over the year and some photos from each session.

Congrats to all of the participants on creating such great work, thanks to the invited artists for delivering engaging workshops, and thanks to Chris Webb for inviting me to Barbican again to work with their Young Creatives 🙂