Public Domain Day 2015 – The death of the Authors, 1943

On 7th February I’ll be back in Brussels for a performance at Public Domain Day

Public Domain Day celebrates works of art that enter the public domain because their copyrights have expired. These works can be freely enjoyed, used, changed and republished by everybody. Worldwide, Public Domain Day is celebrated on the first of January, and this year we celebrate, with a slight delay, the works of authors and artists who died in 1944. In Brussels, Public Domain Day is marked by a yearly event organised by Constant, CRIDS (Centre de Recherche Informatique et Droit, FUNDP Namur), Cinema Nova and the Royal Library of Belgium.

The afternoon takes place at the Royal Library and is dedicated to Antoine de Saint-Exupéry, the writer of Le Petit Prince. He is a special case because he ‘died for the fatherland’, which earned his work a copyright extension of 30 years under French law. In Belgium, as in the rest of the world, his work has been free of rights since 1st January 2015.

The evening programme takes place at Cinema Nova.

Michael Murtaugh, Anne Laforet, Gijs De Heij, An Mertens and I will be performing in the Botopera.

botopera

Fats Waller, Nikola Tesla, Beatrix Potter, Sergei Rachmaninov and Henri La Fontaine are some of the many artists who left this world during the Second World War. These authors will be reanimated in the form of “chatbots”: small software programs that automatically intervene in a chat conversation while pretending to be human. The bots, the public, the printers and the projected images work together in this interactive performance.

With : BotsWaller, NICKola tesla, Beatrix Plotter, Rachmanibot, henrIRC lafontaine & their plotter

It’s all free and starts at about 20:00.

Rebellion #10 – 19th February

On 19th February I’ll be in Shrewsbury to give a talk at Shropgeek.

shropgeekcatfood

With re:LOADED – the developer unconference – just around the corner, we’re changing the focus of our first Rebellion of 2015 away from specific web design and development topics and spending a little more time on business and creative disciplines instead.

We’ve invited three awesome speakers with a web or creative background to come and share their experiences, ideas, and in one case even test some samples (now you can’t normally say that about web design talks!).

And as well as all of this creative goodness that will inspire you to do even more awesome stuff, there will also be time for you to share any news or announcements you may have with the ShropGeek community. What more could you want?

This promises to be a truly fantastic evening, so why not come along and join us, listen to some inspiring talks, spark your creativity and chat with like-minded people over a beer or maybe even a cocktail or two.

I’ll be giving a talk about my experiences of being an artist that uses open source software to make weird (glitch) art.

Hacking creative practices

The Adobe Creative Suite, Final Cut Pro, Pro Tools and various other so-called Industry Standard software packages are presented as the tools for creative people to use to express themselves and produce art. These tools have their benefits, but filtering the creative process through these rigid programs and methods can be uninspiring for artists and can produce samey results. In this presentation I will talk about how I and others hack this creative process by building our own tools, misusing software and generally operating in ways that are contrary to what is considered normal.

Get your tickets now!

CopyrightX

From 26th January till 30th April I’m going to be taking part in the online sections of CopyrightX.

copyrightx

CopyrightX is a twelve-week networked course, offered from January to May each year under the auspices of Harvard Law School, the HarvardX distance-learning initiative, and the Berkman Center for Internet and Society. The course explores the current law of copyright; the impact of that law on art, entertainment, and industry; and the ongoing debates concerning how the law should be reformed. Through a combination of recorded lectures, assigned readings, weekly seminars, live interactive webcasts, and online discussions, participants in the course examine and assess the ways in which the copyright system seeks to stimulate and regulate creative expression.

I’m really happy to have been accepted onto this course. By taking the course I want to increase my understanding of copyright law – UK and US law – and how it affects the way artists create and share their work. I hope all that I learn will feed into my general artistic practice and my residency at the University of Birmingham.

Although the places on the course have now all been allocated, the lectures, reading materials, maps and recordings that have been developed for the course are available for use by anyone and are licensed under a Creative Commons licence.

Generic Conference Man

On Tuesday 20th January I was in Hull for Digital Utopias. I was originally going as a regular attendee but was invited at the last minute by Alex McLean to take part in A Yorkshire Hack, an informal, relaxed space for hacking. I had been in a hackathon with him a couple of weeks earlier at Hack the City, so I jumped at the chance to do some more!

Like Hack The City, there wasn’t really an agenda or any goals for the day. We had access to a bunch of data to play with, but I wanted to make something that related to the day itself. Upon arriving I looked for opportunities to work with the event. Could I access data about attendees? Were there any devices in the venue that I could re-appropriate? Was there a 3D printer or 3D scanner that I could use to recreate objects or attendees? I also thought about hijacking the #artsdigital hashtag but was quickly shot down by the Arts Council England Twitter account (nice work!).

Triggered by a tweet from Hannah Nicklin, I was reminded of the Bullshit Bingo cards that Rosa Menkman made for Transmediale 2014. I hastily began to make my own versions, featuring many of the buzzwords that are often spouted at tech and art conferences.

buzzwords

These were well received, so I knew I was on to something good! Rather than just create bingo cards – Becky Stewart had already made a script for that – I talked with the other hackers to devise ways of taking it further. In collaboration with Shelly Knotts and Alex De Little, we looked at ways to retrieve and present these buzzwords. We explored sonification of tweets, data visualisations, tag clouds (so 2000-and-late!) and more. Later that day, after many experiments, Generic Conference Man was born!

Generic Conference Man

Why sit through hour-long presentations about the latest innovative disruptive wearable tech when you can just watch someone spout those all-important buzzwords instead!

Generic Conference Man works by first grabbing the most recent tweets from the #artsdigital hashtag. The most common words are then extracted, with words that appear only once being discarded. Shelly Knotts accomplished this using a combination of Twurl and SuperCollider. This list is then fed into the simple lip-sync animations script by Silas S. Brown. The illustration was grabbed from Open Clip Art Library.
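For the curious, the word-counting step can be approximated with a short shell pipeline. This is only a sketch – the actual version used Twurl and SuperCollider – and the tweets.txt and buzzwords.txt filenames are assumptions, standing in for tweet text that has already been fetched, one tweet per line.

grep -oE '[#@]?[[:alnum:]]+' tweets.txt |  # split the tweets into words, hashtags and @names
  tr '[:upper:]' '[:lower:]' |             # normalise case
  sort | uniq -c | sort -rn |              # count occurrences, most frequent first
  awk '$1 > 1 {print $2}' > buzzwords.txt  # discard words that appear only once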

As you can tell from the above video, at the time the tweets were grabbed a session with Ruth Catlow of Furtherfield was taking place, and so it was being talked about a lot on Twitter. If you’re interested you can see the complete list of words here.

If this were developed further it would likely turn into a website that delivers/speaks the tweets live instead of in a prerendered video. URLs, usernames and other generic words would also be discarded. However, as a quick hack made on the day, I think it makes its point.

One more thing

Our thinking going into this was to make a light-hearted critique of the way individuals, corporations and institutions talk about digital art and technological developments, but without targeting specific individuals. Of course, in order to talk about things such as wearable technology one has to actually say those words, but quite often this descends into buzzword-laden hyperbole that undermines the art form, the accomplishments of the industry and the audience. Rosa Menkman had this to say about the Bullshit Bingo cards, which I think is quite relevant:

During every festival I visit, I see keywords getting overused and oversaturated. Specific words, sometimes initially undefined or in dire need for re-definition (trashure, mediatic, mcluminations, othernet, afterglow, post, etc) are used so often, and in so many contexts that they lose any kind of significance. These words become omnipresent memes within a festival-discourse bubble. But for festivals such as Transmediale, that dictate quite a bit of the discourse of the contemporary media arts, this power is under exposed and under criticized.

I spent my time at Digital Utopias hacking, so I was unable to attend many of the presentations; perhaps they weren’t a barrage of buzzwords at all. However, when we showed Generic Conference Man to attendees it seemed to resonate with them, which could be a reflection on the conference itself or on their experience of previous conferences. Judging by the impressive range of speakers on the day, I hope it’s the latter.

A decade of phone photos

Ever since I was given my first smartphone, a Nokia 6600, in 2004, I’ve made a point of taking lots of photos. Some are mundane, accidental or otherwise uninteresting. Others document key moments in my life. Many of the photos go unpublished until the end of the year, when I make a grid of them.

Although I’ve nearly always owned or had access to good-quality compact and SLR cameras, for me the immediacy afforded by having a camera in my pocket led to photos that were more spontaneous and that better captured and represented my life as it was happening.

The quality of the photos was, at first, no rival for analogue photography or even digital compact cameras, but the graininess and blurriness were exactly what I was after. Getting ‘better’ photos would have required waiting for the perfect situation and sometimes using better equipment – i.e. larger cameras – which would have taken me out of the moment.

As technology has evolved, the quality of photos taken with smartphones has become indistinguishable from that of compact cameras, to the extent that compact cameras are often left at home. Increased storage space means I need not worry about how many photos I’m taking, but it does mean spending an increasing amount of time selecting and editing photos.

2014’s photo grid marks ten years of making them and also marks the end of this accidental project. I’ll still continue to take photos, but now, like many people, the destination for them will probably be Flickr, Facebook, Twitter, Tumblr, Instagram and whatever other social networks there are. Anyhow, here are all the collages in chronological order:

2004

2004 on my phone

2005

2005 on my phone

2006

2006 on my phone

2007

2007 on my phone

2008

2008 collage

2009

2009 on my phone

2010

2010 on my phone

2011

2011 on my phone

2012

2012 on my phone

2013

2013 on my phone

2014

2014 on my phone


Emojify all the things

There’s no doubt that emoji is here to stay and will infiltrate your artwork, desktop, phone screen and inbox if it hasn’t already done so. In a similar vein to ASCII art, apps have recently been released that convert the pixels in images and video to emoji. Emoji Video and Emojify are two iOS apps that can convert content to emoji, with the former appearing to be able to do this in realtime with video.

In a time before emoji, two popular open source libraries existed to do the same thing, only using text and colour blocks (y’know, ASCII): AAlib and libcaca, both of which have been used extensively.

dramaticcaca

Although the two aforementioned emojifying apps work really well, unfortunately there are not yet any open source libraries available to achieve the same effect. In the meantime I took it upon myself to spend a few hours making something that uses ImageMagick and the Twitter emoji set. It’s not nearly as efficient as the emojifying apps or libcaca/libaa, and cannot be used on live video, but as a short experiment I think it works nicely.

The script works by using symbol patterns for dithering. This process uses the frames of an animated gif to replace blocks of colour. As shown in the ImageMagick example, any gif can be used. The first step to using the script is finding an emoji icon set. The Twitter emoji set is really good and is released under a Creative Commons licence, but feel free to use whatever you want. Download this to your computer.

As mentioned before, this dithering method makes use of the frames from an animated gif. For true emojification all of the emoji in the set could be converted into one gif, but that would result in a loss of colour, a huge file size and possibly epic processing times! For that reason I decided to pick six random emoji each time the script was run. With each element in place I just executed the script. You’ll need to modify the path at the top of the script to point to the directory containing the emoji set.
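The script itself isn’t reproduced here, but a simplified bash/ImageMagick sketch of the idea might look something like the following. It stands in for the symbol-pattern dithering described above rather than being the exact script, and the emoji directory, grid size and grey-level mapping are placeholders.

#!/bin/bash
# A rough stand-in for the script described above, not the original:
# quantise the image to six grey levels and draw one of six randomly
# chosen emoji per cell. EMOJI_DIR, GRID and CELL are assumptions.
EMOJI_DIR=./twemoji/72x72   # directory containing the Twitter emoji PNGs
INPUT="$1"
GRID=32                     # width/height of the grid, in emoji cells
CELL=24                     # pixel size of each emoji cell in the output

# pick six random emoji, one per grey level
mapfile -t EMOJI < <(ls "$EMOJI_DIR"/*.png | shuf -n 6)

# downscale and quantise the input image to six grey levels
convert "$INPUT" -resize "${GRID}x${GRID}!" -colorspace Gray +dither -colors 6 small.png

# turn each pixel into an ImageMagick 'image over' draw primitive
: > cells.mvg
convert small.png txt:- | tail -n +2 | while read -r line; do
  coord=${line%%:*}; x=${coord%,*}; y=${coord#*,}
  hex=$(grep -o '#[0-9A-Fa-f]\{6\}' <<< "$line" | head -n1)
  level=$(( 16#${hex:1:2} * 6 / 256 ))   # approximate grey level 0..5, from the red channel
  echo "image over $((x*CELL)),$((y*CELL)) $CELL,$CELL '${EMOJI[$level]}'" >> cells.mvg
done

# render all the draw commands onto one canvas
convert -size "$((GRID*CELL))x$((GRID*CELL))" xc:white -draw "$(< cells.mvg)" emojified.png

Mapping emoji by grey level is a much blunter approach than the symbol-pattern dithering in the ImageMagick example, but it keeps the sketch short and gives a similar mosaic effect.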

Cat Eye emojified
Original

Freudenberg sg Switzerland emojified
Original

Ipomoea aquatica flower emojified
Original

Studio portrait emojified
Original

Not bad for a few hours of work!

If you’re starting to think that you’ve seen this aesthetic in my work before then you would be right. I have previously used this technique, albeit with randomly generated symbols instead of emoji, for the CóRM image set and some t-shirt/logo designs for NESkimos that I think were never used.

If anyone ever creates an open source library for emojifying things I’d be happy to know about it 🙂

g12

Birmingham Show – 31st January – 11th April

From 31st January to 11th April Glass will be screened at the Birmingham Show at Eastside Projects, alongside work by 34 artists who work or have worked in Birmingham.

birminghamshow_esp

Birmingham Show is an exhibition as history and not history, connecting gaps, distances and potentials of artists who have lived, worked or studied within the city. Three key questions underpin the exhibition making – ‘What is the art of Birmingham?’ ‘Is there an accent to Birmingham’s art making?’ and ‘How is Birmingham useful for the production of art?’

Eastside Projects’ intention is not to create an authoritative survey, but to initiate conversations and to think again about our city as a place that produces and supports artists in many different ways. By displaying a set of works that wouldn’t otherwise be experienced together, we hope to make visible co-existing and overlapping objects, processes, politics, relationships and scenes emanating from Birmingham.

‘Birmingham Show’ continues a series of group exhibitions and productions within Eastside Projects that examine functions and modes of art and the construction of a public sphere. The series started with ‘This is the Gallery and the Gallery is Many Things’ in 2008, followed by ‘Sculpture Show’ and ‘Abstract Cabinet Show’ in 2009, ‘Curtain Show’ and ‘Book Show’ in 2010, ‘Narrative Show’ in 2011, ‘Painting Show’ in 2012, ‘Puppet Show’ in 2013, and ‘Trade Show’ in 2014. Each project invites new curatorial and artistic voices to effect change upon the existing conditions of Eastside Projects and aims to impact on artist practice further afield.

Gifs in Pure Data

Every so often on my travels across the information superhighway I come across a Pure Data user asking if animated gif files can be read in Pure Data. Technically speaking they have always been readable in Pure Data, just not always in the way a user wants. Using the [pix_image] object a user can read almost any image file format. On Linux this is dependent on ImageMagick, so whatever it can read can (theoretically) be displayed in Pure Data/GEM. The problem arises because [pix_image] doesn’t display animated gifs as animations, only the first frame.

There are several solutions to this problem. For these examples I’m going to use the following two gifs:

box

frame

Click through each image to get the full-sized original versions.

[pix_multiimage]

If you separate the gif into its individual frames you can use [pix_multiimage] to display each frame in succession.

multiimage
Click to download the PD patch.
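One way to split a gif into the individual frames that [pix_multiimage] expects is with ImageMagick (the filenames here are just placeholders):

convert animation.gif -coalesce frame_%03d.png   # write each frame out as a flattened, numbered PNG
ls frame_*.png | wc -l                           # the frame count you’ll need to give [pix_multiimage]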

Benefits

The benefits of using [pix_multiimage] to simulate an animated gif are that you can display high quality images with an alpha channel at whatever frame rate you choose. Simulating stutter effects or reversing is as easy as using a [counter] or random number generator.

Drawbacks

The problems with this approach are that [pix_multiimage] needs to be told how many frames to cycle through, and not all gif animations have the same number of frames. [pix_image] and even [pix_data] do not report the number of frames in an animation, so that value cannot be passed to [pix_multiimage]. Assuming that you separate your gifs into their individual frames, an abstraction could be built that detects how many images there are in a directory and sends that value to [pix_multiimage], but that is a lot of effort to go through!

Convert gif to video

The technique that perhaps most PD users have used is to convert the gif into a video file and use [pix_film] to play it. I used the following script to convert a folder full of gifs into mp4 files, with all transparent pixels converted to green pixels:
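The script isn’t reproduced here, but a rough reconstruction using ImageMagick and ffmpeg might look something like this (the frame rate, the green value and the filenames are guesses rather than the original settings):

#!/bin/bash
# For every gif in the folder: flatten its transparency onto green,
# then encode the resulting frames as an mp4.
for f in *.gif; do
  base="${f%.gif}"
  # write each frame as a PNG with transparent pixels replaced by green
  convert "$f" -coalesce -background '#00FF00' -alpha remove -alpha off "${base}_%03d.png"
  # encode the numbered frames into an mp4; dimensions are forced even
  # because yuv420p requires it (the frame rate is a guess)
  ffmpeg -y -framerate 10 -i "${base}_%03d.png" \
         -vf 'scale=trunc(iw/2)*2:trunc(ih/2)*2' -pix_fmt yuv420p "${base}.mp4"
  rm -f "${base}"_[0-9][0-9][0-9].png
done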

With the gif now converted to a video you can use [pix_film] to play a video as you normally would.

gifchroma
Click to download the PD patch.

Benefits

So far I have only tested playing animated gifs directly in Pure Data using Gmerlin on Ubuntu. Without knowing whether the same would work on Windows or Mac OS X, using video files is the safest option for all users.

Drawbacks

Any lossy file conversion will reduce the quality of the output, and this method is no exception. The videos aren’t very sharp, especially at the borders of the green pixels.

Making the green pixels transparent using [pix_chromakey] or [pix_alpha] requires fine-tuning to ensure that other colours aren’t made transparent. This isn’t always 100% reliable and can have a few glitchy artifacts.

Using gifs directly with [pix_film]

Another approach is to use [pix_film]. “Hold on,” I hear you say, “[pix_film] can only be used to play films! How dare you suggest that it can be used to play image file formats. Balderdash!” Well, don’t believe the hype! As a Linux user, I can only comment on this working on Linux. If anyone can get the following methods to work on any other OS, please get in touch and I’ll add it here.

When you play media file formats in Pure Data on Linux you’re actually using external programs and libraries to play them. So you’ll use ffmpeg/libav to play videos and ImageMagick to display images. There’s also another program you can use: Gmerlin. Install it by executing sudo apt-get install gmerlin. Pure Data/GEM has some weird behaviour whereby the delay amount of a gif needs to be explicitly set to a value of 1 or above in order for an animated gif to be played. This can be achieved on a folder full of gifs by executing mogrify -delay 1 *.gif.

And now you can easily open an animated gif in Pure Data the same way you would a video file.

gifvideo
Click to download the PD patch.

Benefits

Gifs, unlike (most) video file formats, can have an alpha channel. Another benefit is that you don’t need to deal with converting files. No longer will you have to worry about whether one video format is faster or more efficient than another, or what codec to use. Gifs will just be gifs.

Drawbacks

If the original format of your source file is a gif, then perhaps it is more efficient to keep it as a gif. If it was a video file, would it be beneficial to convert it to a gif? Not always. Even if you could achieve a smaller file size or have PD use less processor power by using a gif, the quality of the video output would be reduced due to gifs only allowing 256 colours.

It’s pronounced “gifs”

There are perhaps other benefits and drawbacks to each approach that I haven’t written about or haven’t even thought of. One example is the processor usage of each method. I suspect using gifs is actually less efficient, but I don’t have a good way of testing this. Perhaps one of y’all could!

pdroll