Typewriter text effect

For the Algorave at the British Library in April I was asked to make a promotional video, which proved difficult for a very specific reason. I wanted to emphasise the liveness of live coding and show code being typed. For this I used the code supplied with Alex McLean aka Yaxu’s excellent Peak Cuts EP.

The effect of having text appear word by word or letter by letter is often called the typewriter text effect. I’ve previously written about how to do this in Pure Data/GEM. This time I needed a bit more control than PD gave me, and I needed to export transparent PNGs, so that solution wouldn’t work.
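At its core the effect is just rendering progressively longer substrings of the text to a stack of transparent frames. As a rough illustration of that idea (not what I ended up using), here’s a minimal sketch with Pillow; the text, font path, resolution and position are all placeholders:

```python
from PIL import Image, ImageDraw, ImageFont

TEXT = 'd1 $ sound "bd sn"'                           # the line of code to type out (placeholder)
FONT = ImageFont.truetype("DejaVuSansMono.ttf", 48)   # any monospace font you have installed

# One transparent PNG per frame, each showing one more character.
for i in range(len(TEXT) + 1):
    frame = Image.new("RGBA", (1920, 1080), (0, 0, 0, 0))  # fully transparent background
    draw = ImageDraw.Draw(frame)
    draw.text((100, 500), TEXT[:i], font=FONT, fill=(255, 255, 255, 255))
    frame.save(f"frame_{i:04d}.png")
```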

Kdenlive once had such an effect built into its title editor. Other solutions that use Kdenlive rely on a mask to reveal the text, which produces more of a fading-in effect that wasn’t ideal. It was also a lot of manual work! I had several hundred lines of text, so doing it this way was going to add a lot of time.

Natron was the next contender. Since 2017 it has had a plugin for doing typewriter text, but it’s a bit broken. In theory it gives me the most flexibility in how I create the effect, but in practice I still can’t get it to render!

I also considered using ImageMagick and was even provided with a solution (that was written for Windows). As much as I like automation and command line software, for this very visual task I needed to see what I was working on.

Finally, I turned to Blender, which gave me a few options, including rendering the text as 3D objects within the Blender project itself. After failing to get this Blender addon to work I tried using Animation Nodes. Following a tutorial I was able to set up a typewriter effect quite quickly. However, this is where I encountered a bug. After around 10 frames of text had rendered, the remaining frames would take forever; even in EEVEE each frame was taking about 10 minutes. I have no idea why this was. Perhaps it’s because 2.8 is in beta. Maybe because Animation Nodes for 2.8 is also in beta. Beta beta beta. Either way it wasn’t working.
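For what it’s worth, a typewriter effect on a Blender text object can also be scripted directly, without Animation Nodes. This is a sketch of that alternative, not the node setup I actually built; it assumes a text object named "Text" and an arbitrary typing speed of two characters per frame:

```python
import bpy

FULL_TEXT = 'd1 $ sound "bd sn"'   # the code to type out (placeholder)
CHARS_PER_FRAME = 2                # typing speed (assumed)

def typewriter(scene):
    # Reveal a few more characters of the text object's body every frame.
    shown = min(len(FULL_TEXT), scene.frame_current * CHARS_PER_FRAME)
    bpy.data.objects["Text"].data.body = FULL_TEXT[:shown]

bpy.app.handlers.frame_change_pre.append(typewriter)
```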

So I thought maybe I could “bake” the animation which would remove the Animation Nodes dependency and maybe speed up the render. Sadly this was also not to be. Text objects can’t be baked 🙁

In the end I had to do an OpenGL render of the animation to PNGs with a transparent background. This differs from a normal render in that it renders the viewport as-is, so if you have your gizmos visible it’ll render them out as well. Not ideal, but it worked.
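If you want to try the same thing from Blender’s Python console, something along these lines should do it; a sketch only, with the output path and settings as assumptions:

```python
import bpy

scene = bpy.context.scene
scene.render.image_settings.file_format = 'PNG'
scene.render.image_settings.color_mode = 'RGBA'   # keep the alpha channel
scene.render.film_transparent = True              # transparent background in 2.8
scene.render.filepath = "//typewriter/frame_"     # output path (assumed)

# Viewport ("OpenGL") render of the whole animation, gizmos and all.
bpy.ops.render.opengl(animation=True)
```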

I would like to think it all stopped there but it did not.

Blender can use a video or an image sequence as a texture. However, at the time this was not possible in 2.8 using EEVEE. To my joy, it was implemented only a couple of days after I needed it!
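Loading the rendered PNGs back in as a texture looks roughly like this; a sketch assuming a material called "ImageMat" that already has an Image Texture node, and a 250-frame sequence:

```python
import bpy

# Assumes a material with an Image Texture node named "Image Texture".
tex = bpy.data.materials["ImageMat"].node_tree.nodes["Image Texture"]
tex.image = bpy.data.images.load("//typewriter/frame_0001.png")  # first frame (assumed path)
tex.image.source = 'SEQUENCE'            # treat the numbered PNGs as an image sequence
tex.image_user.frame_duration = 250      # total number of frames (assumed)
tex.image_user.use_auto_refresh = True   # advance the frame with the timeline
```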

The search for a GrabCut GUI

Another big part of creating the Visually Similar artwork was the image textures themselves. The idea for the piece was that the textures would relate in some way to the 3D model. I decided from the beginning that I wanted some control over this, so I gathered the images through keyword searches and reverse image searches.

But then I needed to cut out certain parts of them. I wanted it to look like a rough collage, as if the images were pages in a magazine that had been ripped out, leaving behind tears and occasionally ripping through the important bits.

For a while one of my Twitter friends, _xs, has had a bot on their feed that generates random collages. I haven’t studied the source code extensively, but I’m guessing it does a keyword search and makes a collage out of the returned images.

What I was really interested in was how the images were cut out. It was as if some sort of automatic feature extraction had been used but wasn’t very accurate, leaving behind jagged edges reminiscent of the ripped-magazine aesthetic I mentioned earlier.

Through a conversation with them I learned that they used a combination of automated object detection (to select the region of interest) and GrabCut to perform this automatic foreground extraction. GrabCut has been part of OpenCV for quite some time. Give it a region of interest (ROI) and it will attempt to extract the foreground.
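In OpenCV that boils down to a single call. Here’s a minimal sketch of rectangle-initialised GrabCut, with the file names and ROI as placeholders:

```python
import cv2
import numpy as np

img = cv2.imread("input.jpg")              # placeholder file name
rect = (50, 50, 400, 300)                  # ROI as (x, y, width, height), placeholder values

mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)  # scratch space GrabCut needs
fgd_model = np.zeros((1, 65), np.float64)

cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Everything marked as (probable) foreground becomes opaque, the rest transparent.
alpha = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype("uint8")
cutout = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
cutout[:, :, 3] = alpha
cv2.imwrite("cutout.png", cutout)
```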

_xs used this via the command line and automated the whole process. I needed a bit more control over defining the region of interest and so I needed a GUI where I could use a bounding box to select this. This is where the long hunt began.

OpenCV has its own GrabCut GUI example but it has an annoying flaw.

To select the ROI it displays the source image at full size, meaning that if your source image is 4,000 pixels wide it won’t fit on your screen (unless you have a fancy-pants 4K screen). Not ideal when trying to select an ROI. What I needed was a way to scale the window to fit my screen while still processing the full-resolution image.

If you search GitHub you’ll see a number of people have created GUIs for GrabCut, possibly for study assignments. However, each has its own problems. Some won’t compile, some resize the input and some have been abandoned. According to this 2006 article there was even once a GUI for GrabCut in GIMP. However, despite my best efforts I can’t seem to find it.

One night at OpenCode I learnt that OpenCV has a method for selecting an ROI! It even auto-resizes the window but not the input image. Yay! So I hacked it together with GrabCut and released my own very hacky GrabCut GUI. It appends the coordinates and dimensions of the ROI to the file name, should you want to run the same cut again from the command line.
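The gist of the hack looks something like this; a simplified sketch rather than the actual tool, with the window name, file names and output naming scheme as assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("input.jpg")   # full-resolution source (placeholder)

# A resizable window so even a 4000px-wide image fits on screen.
cv2.namedWindow("select roi", cv2.WINDOW_NORMAL)
x, y, w, h = cv2.selectROI("select roi", img, showCrosshair=False)
cv2.destroyAllWindows()

# Same GrabCut call as before, fed with the interactively chosen rectangle.
mask = np.zeros(img.shape[:2], np.uint8)
bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, (x, y, w, h), bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

alpha = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype("uint8")
out = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
out[:, :, 3] = alpha

# Record the ROI in the file name so the same cut can be re-run non-interactively.
cv2.imwrite(f"input_x{x}_y{y}_w{w}_h{h}.png", out)
```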

All this done with a mere seven days until the artwork had to be finished!

Convert Object texture coordinates to UV in Blender

Making digital art is quite a lengthy process, and even more so if you’re using non-standard processes or making your own software. For a while I’ve wanted to write about my processes and how I’ve overcome the bugs and problems. In what will hopefully be a regular series of blog posts I’m going to give a bit of insight into this process. In a way it’ll be a tutorial. Let’s go!

For Visually Similar I wanted to texture each 3D model using lots of images found on the internet. Rather than create a single material containing a texture with all of the found images, I decided I would add a material for each image texture and, using their alpha channels, composite them over each other.

If you’ve ever had to position something accurately on a UV map you’ll know how much of a pain it can be. Fortunately, in the Texture Coordinate node you can use the Object output to take another object (usually an empty) as the source of its coordinates. This uses the reference object’s local Z direction as its up direction.
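Set up in Python, that node arrangement looks roughly like this; a sketch assuming a material named "ImageMat" with the default Principled BSDF node, an image file on disk, and an empty called "TexTarget":

```python
import bpy

mat = bpy.data.materials["ImageMat"]          # assumed material name
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

coord = nodes.new("ShaderNodeTexCoord")
coord.object = bpy.data.objects["TexTarget"]  # the empty driving the projection

tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("//textures/found_image.png")  # placeholder path

# Drive the image texture from the empty's object-space coordinates.
links.new(coord.outputs["Object"], tex.inputs["Vector"])
links.new(tex.outputs["Color"], nodes["Principled BSDF"].inputs["Base Color"])
```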

So far, so good, except it did not yet work in Blender’s new EEVEE rendering engine. Yes, yes, I know EEVEE is still under development and shouldn’t be used in production, etc. Still, after doing a bit of research it looks like this is going to be implemented.

So I had a rather smart idea as a workaround. Could I take the coordinates generated by the Object output whilst using Cycles and paste those into the UV texture options using a Mapping node? Short answer: no. To do this I would need some sort of viewer or analyser node that would show me the data being output from a node. So I suggested this idea on the Right-Click Select ideas website. A healthy discussion followed and hopefully something will come of it.

In the end I had to resort to baking the texture and then applying that to the 3D model. In doing this I learnt that baking a UV texture on a complex model will take a lifetime, and so I had to do it on a decimated model and then put that on the original, complex model. This, of course, created some unwanted artefacts. *sadface*
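The bake step itself is a one-liner once the scene is set up by hand; a heavily hedged sketch of that, assuming the model is selected and active, UV unwrapped, and its material has a blank image selected in an active Image Texture node:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'   # baking needs Cycles, not EEVEE
scene.cycles.samples = 32        # low sample count to keep bake times down (a guess)

# Bake only the diffuse colour into the active image texture node.
bpy.ops.object.bake(type='DIFFUSE', pass_filter={'COLOR'})
```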

Since I originally encountered this problem it has actually been addressed in a Blender update! It only works at render time, but it’s progress! 🙂

So that is some insight into how I make some of my art. There’s a lot of problem solving, lots of showstopping bugs and lots of workarounds. Somewhere in that process art is made! I’m hoping to do these every month but we’ll see how that goes.