Archive for May, 2008


Widetime/Trails vs Frame blending…

I’ve been experimenting with a couple of different ways to display the footage that will play when the sensor is triggered.  Below is an example of footage that uses two filters to affect the footage: “Widetime” and “Trails”.  I have composited the two layers on top of one another, each using the same footage, and each using one of the effects respectively.  The result is a more abstract look when the lights of the cars pass in front of and behind the flowers.



The second example just uses frame blending (both videos are slowed down to 8%).  This results in the colours of the cars’ lights being more vivid, and less abstracted.


I like both results for different reasons.  I like the first video for the fact that the flowers are less obstructed by the cars’ lights passing, and as the flowers are the focus of the piece, it is important that this element of the work remains visible.  Yet I like the second video for the vivid colours of the lights; to me, it makes for a more striking piece of visual imagery.  I have tried to mimic the colours in video two using the effects in video one, but to no avail; the effects just don’t seem to adapt well to this more vivid colour scheme, and as I have been forcing the colours using more filters, the results have ended up looking overly processed.  So it has come down to this: to Widetime/Trail or not to Widetime/Trail… that is the question.


Audio for Final show piece…

I have been experimenting with audio in preparation for my final piece.  The audio is important because it contributes largely to the overall tone of the piece, and in turn this will affect how it is received.  It is also another sensory tool that digital artists have the benefit of utilising.  

The first example is the audio that will play when the sensor is in its idle position, i.e. when no presence is affecting it.  The visuals will be presented in acceleration, personifying how we conventionally experience the motorway, and also our modern world.  I want it to sound repetitive, ominous and uninviting; I want it to represent the overwhelming scale of this space.  I experimented with and manipulated audio I had captured when filming from the top of relatively desolate bypasses, recording the sounds of sporadic cars accelerating between horizons.

The second example is what will play when the sensor is triggered, and the visuals change to something more identifiable.  I wanted this audio to scale down the piece and to incite a more reflective state.  It was originally sound I had captured when filming a traffic jam at a busy intersection; the manipulated result is the noise of the cars’ horns slowed down and processed with a delay filter.  There is a loop to it that in some ways is quite childlike, and for this reason it incites a familiar feeling, one of time and memory.  I hope for this audio (together with the video) to affect the viewer in this way: not to provoke any particular reminiscence, but rather to create a feeling that we can subconsciously relate to.



Dominique Gonzalez-Foerster…

Dominique Gonzalez-Foerster’s work invites the viewer to virtually adapt the works to suit themselves, inevitably supplementing them with their own emotions and ideas when approaching them.  Gonzalez-Foerster formulates and tests various vocabularies of the imagination and memory.  In her atmospheric environments she creates links to real or invented stories that speak of the relationship between individuals and their environment, and the fluidity of this relationship in impressions, memories, projections and dreams.  The works become fragmentary narratives created from colour, personal references and precisely composed passages of emptiness.  Gonzalez-Foerster is interested in the way in which spaces reflect people’s stories, obsessions and wishes.  The works thus become transitory places that oscillate between inner intellectual spaces and those outside: beyond our body.

In Séance de Shadow II (1998), the visitor is aware of a blue glow emanating from a corridor-like space. Entering the room triggers a series of bright lights, casting shadows of bodies and objects onto a blue-painted wall.

Gonzalez-Foerster is interested in subjective experience, and the ways in which emotions and physical sensations are translated into visual form. Here the work is a living surface in which the movement of visitors is transformed into a theatre of performing shadows, making us aware of our bodies, and making us conscious of ourselves as actors as much as viewers.

Gonzalez-Foerster’s work has parallels with my own practice.  She too is interested in the ways in which we associate with space, how we formulate our perceptions of space, and how this experience becomes fragmented by time and memory.


to fade in and fade out….

With regard to the programming behind my final piece, I was not happy with the way in which the two videos switched when the sensor was triggered.  The change was very abrupt, as the videos quite literally flipped with no fluidity, so I set about working out how to program my patch to blend the two videos when a presence was sensed.

I had already implemented a horizontal slider which switched from left to right when the sensor was triggered, far left being the video that plays when no presence is sensed and far right being the video that plays when a presence is sensed.  A bang message is sent to the slider when the sensor is triggered, thus changing the visuals (a bang message is also sent when the sensor returns to its idle position, flipping the source back into its original state).

When the slider’s properties were set to default (left = 0, right = 127) there was no blending effect.  With a little research on the Pure Data forums I found that I needed to change the values to left = 0.01 and right = 1.  With these new values a blending effect was achieved, albeit only when I dragged the slider by hand.  What I needed now was to program the slider to move automatically when the sensor was triggered.

As Pure Data is essentially just numbers, I needed a counter to send messages to the slider telling it what value to use at specific points in time (for example, if the values were left = 1 and right = 10, I would need a counter to send the slider the numbers 1, 2, 3, 4, 5, 6, 7, 8, 9, 10; this would move the slider from left to right as each of the numbers was received).
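To make that counter idea concrete, here is a hypothetical Python simulation of the example above (the actual patch is built from Pure Data objects, not code, so this is just a sketch of the behaviour):

```python
def counter_values(left, right):
    """Yield each whole number a counter would send to the slider,
    stepping from the left-hand value up to the right-hand value."""
    value = left
    while value <= right:
        yield value
        value += 1

# With left = 1 and right = 10 the slider receives 1, 2, ... 10,
# moving it from far left to far right one step at a time.
steps = list(counter_values(1, 10))
```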

The way I found to do this was to set up a metro (metros send bang messages at a rate in milliseconds, so if you set a metro to 500, it will send a bang every half a second).  So in theory I wanted the counter to start at 0.01 (far left) and the metro to send a new number incrementally every 25 milliseconds until it reached 1 (far right).  Unfortunately this was not so straightforward.  For some reason, even though the counter was counting the correct values, the slider was not receiving them, and therefore it was not moving in a fluid manner; quite the contrary, actually: the slider jumped from far left to far right, with the correct total delay that I had set (0.25 of a second), but ignoring the values in-between.  Obviously this was not what I wanted, as the videos still abruptly flipped to and from one another.  I tested the counter with the default slider values, adjusted the maths accordingly, and the slider smoothly drifted from left to right!?  I couldn’t use these values because they did not achieve the fade I was after.  By now I was getting confused: the maths were the same (in principle) but the effect was the opposite.

I decided to average out the numbers using an average object.  The average object (as the name implies) averages the numbers between two variables by a number that you set (for example, if left = 1 and right = 10 and I set the average object to 20, it would divide 10 into 20 and count accordingly, i.e. it would count 0.5, 1, 1.5, 2 and so on until it reached 10).  I thought that, as the difference between the numbers I needed to use (0.01 – 1) was so small (in comparison to 1 – 127), the slider was ignoring the decimals in-between and only acknowledging the whole numbers (0 – 1).  I used the average object to divide the difference between 0.01 and 1 by 50 and told the slider to listen to that object over everything else, and it worked!  Kind of…
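Following the description above, the average object’s behaviour might be simulated in Python like this (a hypothetical sketch matching the worked example; the function name is mine, not Pure Data’s):

```python
def average_steps(right, divisions):
    """Divide the right-hand value by the number of divisions and
    count upward in that increment until the value is reached."""
    step = right / divisions
    return [step * i for i in range(1, divisions + 1)]

# The worked example: right = 10 divided into 20 gives
# 0.5, 1.0, 1.5, 2.0 ... up to 10.0.
values = average_steps(10, 20)
```

The point is that the slider now receives many small in-between values rather than a handful of whole numbers, which is what makes the movement smooth.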

By averaging the numbers and programming the slider in this way I achieved a smooth transition from left to right, but another problem arose.  The average object needed to have a reset message sent to it after every time it was triggered, otherwise it wouldn’t work again.  To get around this I outputted the slider’s values as numbers and used a moses object, which splits incoming numbers at a threshold you set.  I set the moses object so that whenever it received the value of 1 it sent a bang to the reset message; this reset the average object, and thus the process could be repeated.  The only thing left to do was to reverse this entire process so that the videos would fade out as well as fade in, and voilà!  It was done… (phew).  The image below shows the final counter patch.
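Putting the pieces together, my understanding of the finished counter logic can be sketched as a Python simulation (the metro, average and moses objects are all stand-ins here, and the fade-out is simply the same ramp reversed):

```python
import time

def fade(left=0.01, right=1.0, divisions=50, interval_ms=25, realtime=False):
    """Return every value sent to the slider during one fade:
    equal steps between left and right (the average object), one
    value per metro tick, ending exactly on the final value (the
    point at which the moses check triggers the reset)."""
    step = (right - left) / divisions
    sent = []
    for i in range(divisions + 1):
        sent.append(left + step * i)          # value sent to the slider
        if realtime:
            time.sleep(interval_ms / 1000.0)  # metro rate
    # In the patch, reaching the final value bangs the average
    # object's reset message so the process can repeat.
    return sent

# Fade in, then fade out by reversing the same ramp:
fade_in = fade()
fade_out = list(reversed(fade_in))
```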


DVD Team Update…

We have just received the go-ahead on the final design and colour specifications for the DVD.  They are as follows:


Red – 226 1 39
Blue – 12 103 178

Blue – 91 61 0 0
Red – 0 99 89 0

The DVD will now be distributed at the time of the show, and not post-show as previously discussed.  This obviously gives us less time, but now that we have the images we can start to put together a visual structure.

The DVD will contain MA course information, with each course showcasing individual students’ work compiled in a showreel-type format.

It will also contain interviews with invited artists (whoever publicity manage to acquire?) and information on the history of Camberwell.  I think we have access to a Canon Z1 with which to film the interviews.  We also have a new member: Marianne Owji has volunteered to be a part of the DVD group, and she’ll be handling the history of Camberwell as well as helping with interviews.

Now we have this platform I think it’s important we meet this week and get this properly going.  Time has a tendency to disappear very quickly around here…



MADA 3: Colloquium. Video Presentation…


refined final show piece = refined pure data patch…

With the revision of my final show piece I have also revised my Pure Data patch.  

This patch controls two videos and two audio samples: one pair plays when there is no presence in the space, and the other pair plays when the sensor is triggered.  The key difference with this patch is that it only uses one sensor.
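The routing this describes could be sketched like so (a hypothetical Python stand-in for the Pure Data logic; the file names are made up for illustration):

```python
# One sensor flag selects which video/audio pair is active.
# File names are placeholders, not the actual media files.
IDLE_PAIR = ("idle_video.mov", "idle_audio.wav")             # no presence
TRIGGERED_PAIR = ("trigger_video.mov", "trigger_audio.wav")  # presence sensed

def active_pair(presence_sensed):
    """Return the video/audio pair that should currently play."""
    return TRIGGERED_PAIR if presence_sensed else IDLE_PAIR
```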

Now, I know I said that I wanted to use two sensors in order to widen the sensed field and to be able to control the two video/audio pairs independently, but with my refined setup this is just overkill.  And I quite like that you have to find, or stumble across, this ‘sweet spot’ in which to trigger the change in video/audio; I feel it may give the work a new dimension, where the experience may differ from viewer to viewer.

Here’s the refined patch below.  I’ve used an abstraction under ‘sensor-patch’ to hide a lot of the technical side of the sensor’s programming.  These are the objects that just need to be in place, and in the right order, to get it working; once they are in place no more adjustments to this part of the patch need to be made… (fingers crossed!)

In a way I’ve gone full circle with the Pure Data side of this project, returning to a patch that is a lot simpler, yet also more stable.  I have learnt this program (with a little help from my friends!) and built this patch from scratch, taking it far beyond what is now necessary.  But this process has been quite important, because now I have a very solid understanding of how this patch functions, and that can only bode well when it comes to setting up this piece and problem-solving any gremlins that may rear their ugly heads!