Development Update – July 2019

Select objects of similar size in Inkscape

For the AlgoMech 2019 festival in June I created a new performative drawing piece, A Perfect Circle. The piece is about how we interface with computers that analyse our activities. It consists of a video and accompanying plotter drawings.

Making A Perfect Circle presented me with a few challenges. To make the video element I hacked together a couple of Processing scripts that did basic motion tracking by following a user-specified colour. The scripts would draw lines following the tracked colour, creating a new line (instead of adding to an existing one) at each major turn and giving each line a unique colour.
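The tracking logic itself is simple enough to sketch outside of Processing. The following Python sketch is my own illustration, not the actual scripts: it shows the two core steps, finding the centroid of pixels near a target colour, and splitting the tracked path into new lines at major turns.

```python
import math

def track_colour(frame, target, threshold=60):
    """Return the centroid of pixels within `threshold` colour distance
    of `target`, or None if nothing matched. `frame` is a 2D grid of
    (r, g, b) tuples; a real sketch would read these from the video."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, pixel in enumerate(row):
            if math.dist(pixel, target) <= threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def split_on_turns(points, max_angle=60):
    """Split a tracked point sequence into separate lines, starting a
    new line whenever the direction changes by more than `max_angle`
    degrees (a "major turn")."""
    lines = [points[:2]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        a1 = math.atan2(cur[1] - prev[1], cur[0] - prev[0])
        a2 = math.atan2(nxt[1] - cur[1], nxt[0] - cur[0])
        turn = abs(math.degrees(a2 - a1))
        turn = min(turn, 360 - turn)
        if turn > max_angle:
            # Major turn: begin a new line at the current point.
            lines.append([cur])
        lines[-1].append(nxt)
    return lines
```

Each resulting line would then get its own colour when drawn.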

The next stage was to export those drawn lines to SVGs (or PDFs) so that I could import them into Inkscape and then send them to a plotter. Fortunately Processing already has functions for exporting to SVGs. Unfortunately for me, if I implemented this as suggested in the help file it would export both the drawn line and the background video as a still frame. I produced a very hacky workaround (with help from Ben Neal) which “works” but produces a few unwanted artefacts.

Before I go on I should probably explain what a plotter is as the unwanted artefacts relate to it. For this I will copy from the Wikipedia article on plotters:

The plotter is a computer printer for printing vector graphics. Plotters draw pictures on paper using a pen. In the past, plotters were used in applications such as computer-aided design, as they were able to produce line drawings much faster and of a higher quality than contemporary conventional printers, and small desktop plotters were often used for business graphics.

At home I have a Silhouette Cameo 2 vinyl cutter. Using this great Inkscape plugin I can bypass Silhouette’s proprietary software and send artwork directly to the cutter from Inkscape. Thanks to a pen holder adaptor I can replace the vinyl-cutting blade with a pen and turn the vinyl cutter into a plotter 🙂

Back to the Processing sketch. The hacky code that I made produced the desired lines, but it also left lots of additional single-node paths/dots at the start of each line.

Removing these wouldn’t be very easy. Using Edit > Select Same > Fill and Stroke, Fill Color or any of the other options wouldn’t work, as they would also end up selecting the lines. I then had the bright idea of selecting objects based on their size. All of the dots had dimensions of 4.057×4.000px, so in theory there could be an option like Edit > Select Same > Size. However, there is no such option.

After a discussion on the Inkscape forum I opened a feature request on the Inkscape bug tracker to select objects of similar size. One thing I added to this was the idea of a threshold: using it you could select objects that were within n% of the size of the selected object. If you’ve ever used GIMP you will have seen a similar function in its fuzzy selection tool. This could definitely be useful if you trace bitmaps and the trace produces a lot of speckles. I also added a mockup to show how the threshold could be applied to the other options in the Edit > Select Same menu.
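To illustrate the threshold idea, here is a rough Python sketch of what “select same size within n%” could look like when scripted against an SVG outside of Inkscape. It is a simplification for illustration only: it matches rect elements by their width/height attributes, whereas the real dots were single-node paths, where you would measure the bounding box instead.

```python
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def similar_size(elem, ref_w, ref_h, tolerance=0.05):
    """True if elem's width/height are within `tolerance` (a fraction,
    i.e. the n% threshold idea) of the reference size."""
    try:
        w = float(elem.get("width"))
        h = float(elem.get("height"))
    except (TypeError, ValueError):
        return False
    return (abs(w - ref_w) <= ref_w * tolerance and
            abs(h - ref_h) <= ref_h * tolerance)

def remove_similar_sized(svg_text, ref_w, ref_h, tolerance=0.05):
    """Drop every <rect> whose size is within the threshold of the
    reference, e.g. the 4.057x4.000px dots."""
    root = ET.fromstring(svg_text)
    for parent in root.iter():
        for child in list(parent):
            if child.tag == SVG_NS + "rect" and \
                    similar_size(child, ref_w, ref_h, tolerance):
                parent.remove(child)
    return ET.tostring(root, encoding="unicode")
```

In Inkscape itself the same logic would live in an extension, using the selection’s bounding box rather than raw attributes.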

Anyway, at the moment this exists as a feature request. I think Inkscape is concentrating on delivering version 1.0 of the software so I don’t expect to see this implemented any time soon. As with anything in the land of open source, if you’ve got the skills to do this please contribute!

In the end I used fablabnbg’s Inkscape extension to chain all (or most) of the paths into one big path. This made selecting the dots easier as I could just hide the big path(s) once they were chained together.
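The chaining idea boils down to greedily merging any two polylines whose endpoints (nearly) coincide. This is not the extension’s actual code, just a minimal illustration of the principle, with paths represented as point lists:

```python
def chain_paths(paths, epsilon=0.01):
    """Greedily merge polylines whose endpoints (nearly) coincide.
    `paths` is a list of point lists; returns the chained list."""
    def close(a, b):
        return abs(a[0] - b[0]) <= epsilon and abs(a[1] - b[1]) <= epsilon

    chained = []
    remaining = [list(p) for p in paths]
    while remaining:
        current = remaining.pop(0)
        merged = True
        while merged:
            merged = False
            for i, other in enumerate(remaining):
                if close(current[-1], other[0]):      # tail meets head
                    current += other[1:]
                elif close(current[-1], other[-1]):   # tail meets tail
                    current += other[-2::-1]
                elif close(current[0], other[-1]):    # head meets tail
                    current = other[:-1] + current
                elif close(current[0], other[0]):     # head meets head
                    current = other[:0:-1] + current
                else:
                    continue
                del remaining[i]
                merged = True
                break
        chained.append(current)
    return chained
```

Anything left unchained (like the isolated dots) stays as its own short path, which is exactly what made them easy to pick out afterwards.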

After that it was a simple case of sending it to the plotter!

Development Update – June 2019

Making digital art is quite a lengthy process, even more so if you’re using non-standard processes or making your own software. For a while I’ve wanted to write about my processes and how I’ve overcome the bugs and problems. In what will hopefully be a regular series of blog posts I’m going to give a bit of insight into this process. Let’s go!

Convert Object texture coordinates to UV in Blender

For Visually Similar I wanted to texture each 3D model using lots of images found on the internet. Rather than create one single material containing a texture with all of the found images I instead decided I would add a material for each image texture and, using their alpha channels, composite them over each other.
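Layering materials by their alpha channels amounts to the standard “over” compositing operator applied per pixel, material by material. As a rough illustration (plain Python, not Blender node code), here is “over” for straight-alpha RGBA values in the 0..1 range:

```python
def alpha_over(fg, bg):
    """Composite one straight-alpha RGBA pixel over another using the
    standard "over" operator, the same operation a stack of
    alpha-blended texture layers performs. Channels are floats 0..1."""
    fr, fg_g, fb, fa = fg
    br, bg_g, bb, ba = bg
    out_a = fa + ba * (1 - fa)
    if out_a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda f, b: (f * fa + b * ba * (1 - fa)) / out_a
    return (blend(fr, br), blend(fg_g, bg_g), blend(fb, bb), out_a)
```

Stacking the materials in Blender just repeats this operation down the chain, top image first.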

If you’ve ever had to position something accurately on a UV map you’ll know how much of a pain it can be. Fortunately, in the Texture Coordinate node you can point the Object outlet at another object (usually an empty) as the source of its coordinates. This uses the reference object’s local Z direction as its up direction.

So far, so good, except it didn’t yet work in Blender’s new EEVEE rendering engine. Yes, yes, I know EEVEE is still under development and shouldn’t be used in production etc. Still, after doing a bit of research it looks like this is going to be implemented.

So, I had a rather smart idea for a workaround. Could I take the UV coordinates generated by the Object outlet whilst using Cycles and paste those into the UV texture options using a Mapping node? Short answer: no. To do this I would need some sort of viewer or analyser node that would show me the data being output from a node. So, I suggested this idea on the Right-Click Select ideas website. A healthy discussion followed and hopefully something will come of it.

In the end I had to resort to baking the texture and then applying that to the 3D model. In doing this I learnt that baking a UV texture on a complex model will take a lifetime, and so I had to do it on a decimated model and then put that on the original, complex model. This, of course, created some unwanted artefacts. *sadface*

Since I originally encountered this problem it has actually been addressed in a Blender update! It only works at render time, but it’s progress! 🙂

The search for a GrabCut GUI

Another big part in creating the Visually Similar artwork was the image textures themselves. The idea for the piece is that the textures would be related in some way to the 3D model. I decided from the beginning that I wanted to have some control over this and so I gathered the images through keyword searches and reverse image searches.

But then I needed to cut out certain parts of them. I wanted it to look like a rough collage, as if the images were pages in a magazine that had been ripped out, leaving behind tears and occasionally ripping through the important bits.

For a while one of my Twitter friends, _xs, has had a bot on their feed that generates random collages. I haven’t studied the source code extensively but I’m guessing it does a keyword search and makes a collage out of the returned images.

What I was really interested in was how the images were cut out. It’s as if a sort of automatic feature extraction was used but wasn’t very accurate and so it left behind jagged edges that were almost reminiscent of the kind of ripped magazine aesthetic that I mentioned earlier.

Through a conversation with them I learned that they used a combination of automated object detection (to select the region of interest) and GrabCut to perform this automatic foreground extraction. GrabCut has been part of OpenCV for quite some time. Give it a region of interest (ROI) and it will attempt to extract the foreground.

_xs used this via the command line and automated the whole process. I needed a bit more control over defining the region of interest and so I needed a GUI where I could use a bounding box to select this. This is where the long hunt began.

OpenCV has its own GrabCut GUI example but it has an annoying flaw.

To select the ROI it displays the source image at full size, meaning that if your source image is 4000 pixels wide it won’t fit on your screen (unless you have a fancy-pants 4K screen). Not ideal when trying to select an ROI. What I needed was a way to scale the window to fit on my screen but still process a full-resolution image.
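The fix amounts to two small calculations: a scale factor that fits the preview on screen, and a mapping from coordinates drawn on the preview back to the full-resolution image. A minimal sketch (the function names are mine, not from any of the GUIs mentioned):

```python
def fit_scale(img_w, img_h, max_w, max_h):
    """Scale factor that fits an image inside the screen without
    upscaling or changing its aspect ratio."""
    return min(max_w / img_w, max_h / img_h, 1.0)

def display_roi_to_image(roi, scale):
    """Map an (x, y, w, h) ROI drawn on the scaled-down preview back
    to full-resolution image coordinates."""
    x, y, w, h = roi
    return (round(x / scale), round(y / scale),
            round(w / scale), round(h / scale))
```

The GUI then only ever shows the scaled preview, while GrabCut receives the remapped ROI and the untouched full-resolution pixels.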

If you search Github you’ll see a number of people have created GUIs for GrabCut, possibly for study assignments. However, each has their own problems. Some won’t compile, some resize the input and some have been abandoned. According to this 2006 article there was even once a GUI for GrabCut in GIMP. However, despite my best efforts I can’t seem to find it.

One night at OpenCode I learnt that OpenCV has a method for selecting an ROI! It even auto-resizes the window without resizing the input image. Yay! So, I hacked it together with GrabCut and released my own very hacky GrabCut GUI. It appends the coordinates and dimensions of the ROI to the file name, should you want to run it again via the command line.

All this done with a mere seven days until the artwork had to be finished!

Typewriter text

For the Algorave at the British Library in April I was asked to make a promotional video, which proved difficult, but for a very specific reason. I wanted to emphasise the liveness of live coding and show code being typed. For this I used the code supplied with Alex McLean aka Yaxu’s excellent Peak Cuts EP.

The effect of having text appear word-by-word or letter-by-letter is often called the typewriter text effect. I’ve previously written about how to do this in Pure Data/GEM. However, I needed a bit more control than I got in PD, and I needed to export transparent PNGs, so that solution wouldn’t work.
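Mechanically, the effect is just a mapping from frame number to the number of revealed characters. A tiny sketch of that logic, not tied to any particular tool:

```python
def typewriter_frames(text, chars_per_frame=1, hold_frames=1):
    """Yield the progressively revealed text, one string per output
    frame: each step shows `chars_per_frame` more characters, and each
    step is held for `hold_frames` frames (to control typing speed)."""
    for i in range(chars_per_frame, len(text) + chars_per_frame,
                   chars_per_frame):
        for _ in range(hold_frames):
            yield text[:i]
```

Each yielded string would then be rendered to its own transparent PNG frame.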

Kdenlive once had such an effect built into its title editor. Other Kdenlive-based solutions use a mask to reveal the text, which produces more of a fading-in effect that wasn’t ideal. It was also a lot of manual work! I had several hundred lines of text, so doing it this way was going to add a lot of time.

Natron was the next contender. Since 2017 it has had a plugin for doing typewriter text, but it’s a bit broken. In theory it gives me the most flexibility in how I create the effect, but in practice I still can’t get it to render!

I also considered using ImageMagick and was even provided with a solution (that was written for Windows). As much as I like automation and command line software, for this very visual task I needed to see what I was working on.

Finally, I turned to Blender, which gave me a few options, including rendering the text as 3D objects within the Blender project itself. After failing to get this Blender addon to work I tried using Animation Nodes. Following a tutorial I was able to set up a typewriter effect quite quickly. However, this is where I encountered a bug. After around 10 frames of the text were rendered, the rest of the frames would take forever to render. Even in EEVEE each frame was taking about 10 minutes. I have no idea why this was. Perhaps it’s because 2.8 is in beta. Maybe because Animation Nodes for 2.8 is also in beta. Beta beta beta. Either way it wasn’t working.

So I thought maybe I could “bake” the animation which would remove the Animation Nodes dependency and maybe speed up the render. Sadly this was also not to be. Text objects can’t be baked 🙁

In the end I had to do an OpenGL render of the animation to pngs with a transparent background. How this differs from a normal render is that it renders the viewport as is. So if you have your gizmos on there it’ll render them out as well. Not ideal but it worked.

I would like to think it all stopped there but it did not.

Blender can use a video or a series of images as a texture. However, at the time this was not possible in 2.8 using EEVEE. To my joy, however, this was implemented only a couple of days after I needed it!

So that is some insight into how I make some of my art. There’s a lot of problem solving, lots of showstopping bugs and lots of workarounds. Somewhere in that process art is made! I’m hoping to do these every month but we’ll see how that goes.

dev8d 2010

From February 26th to 27th I was in London for dev8d. It describes itself as:

Dev8D is 4 days of 100% pure software developer heaven. It will be intense. It will be exhilarating. It will make you a better programmer.

• Learn to use unfamiliar languages (such as Python, Ruby and Clojure)
• Team up with other developers and build rapid development projects for tech prizes
• Take part in lightning session discussions with industry experts
• Swap skills and ideas with other developers

It was indeed like some kind of heaven! I arrived later in the week on Friday, but I still learnt quite a lot. I spent a lot of my time in the Expert Zone and at GB’s Arduino workshops.

Think of the Expert Zone as a more relaxed Pecha Kucha or Ignite. You get 15 minutes to talk about or promote whatever you want. I gave a short presentation on hackerspaces, with particular emphasis on fizzPOP, and why you should either form one or join one.

The Arduino workshops, led by fizzPOP regular GB, were really fun.

[Photo: img_7101.jpg, by benc]

I’ve had an Arduino ever since the Howduino event last year but I’ve never really played around with it. Being around lots of people who are in the same position really helped me to make a move in learning it. In the end I got lots of LEDs flashing and a little buzzer to play some music. A small achievement to some, but a massive leap into the world of electronics!

Being at dev8d (and at the GNOME Hackfest, which I’ll write about another time) really taught me a lot about software development. It all starts with an idea, but taking that idea and turning it into a reality I feel is best achieved in a group setting with those who can contribute ideas and skills.

Congrats to the dev8d and devCSI team for putting on a very successful event. I’ll definitely be along next year!

Ubuntu Bug Jam

From Friday 2nd to Sunday many Ubuntu, Linux and Open Source enthusiasts descended upon the Linux Emporium to take part in the Ubuntu Bug Jam. In the words of an Ubuntu blogger, the Ubuntu Bug Jam is:

…a world-wide online and face-to-face event to get people together to fix Ubuntu bugs – we want to get as many people online fixing bugs, having a great time doing so, and putting their brick in the wall for free software. This is not only a great opportunity to really help Ubuntu, but to also get together with other Ubuntu fans to make a difference together, either via your LoCo team, your LUG, other free software group, or just getting people together in your house/apartment to fix bugs and have a great time.

This is the second time I’ve been to a bug jam. The first time I went I hadn’t even used Ubuntu, so I only managed to report one bug and otherwise mostly focused on reporting stuff in Inkscape, as I use it more often.

This time was a similar affair. Apart from testing out the beta of the next release of Ubuntu (the Karmic Koala) and asking for help in fixing bugs in my own system I mostly spent time testing bugs in Inkscape and suggesting features for future releases of Ubuntu.

Overall, I think reporting any bug in any package or program helps everyone and one thing I really like about open source is its transparency and honesty in its errors. That is, it’s not ashamed to admit that there are a few bugs here and there.

WordPress Theme Development

I’ve finally updated to WordPress 2.5. Great looking interface!

On a similar note I’ve finally decided that I’m going to develop my own WordPress theme. Although I have a bit of knowledge when it comes to CSS and PHP I’m not really clued up on the structure of WordPress, so have always avoided it. Here are my efforts as of today:

Progress screenshot

The overall idea is to create a WordPress theme that is better suited to a portfolio website than a blog. In the long term I want to be able to have different layouts that are more typical of portfolio websites and also allow integration with Gallery2, which itself will have a modified display to suit portfolio websites.

That is, unless anyone can recommend an existing theme?