Archive for the 'The Installation Term 2' Category

23 Feb 08

Installation idea number 4……..

After some experimentation I feel that the sensing field of one ultrasonic sensor is not going to be wide enough to reliably control the work.  With this in mind, I have also been thinking about how to use a space and its dimensions to my advantage.

This idea uses two ultrasonic sensors in a space that is 3 metres long and 1 metre wide (enough room to comfortably fit one person).  The work is shown on a 20” (or so) TFT monitor set at the furthest point from the entrance.  The sensors are mounted opposite the screen (my other ideas placed the sensor either above or below the projection).  I’ve found that the sensors work best when they are placed at upper body height, so it is not practical to have them on the same wall as the work, where their positioning would be quite obtrusive.  Placing them on the opposite wall also means they will distract less from the work, keeping the technology hidden as much as possible.

[Image: idea8.jpg]

I want this to be an investigative environment.  I feel that by using a relatively small TFT rather than a large projection, the viewer will be more inclined to approach the screen and so walk the length of the installation.  This means the sensors can use their optimum range to maximum effect, and hopefully the space will be more successful for it.

When there is no presence in the room, a video plays at an unidentifiable speed.  When a person enters the space, the video will slow down and change.  The two sensors will control two separate videos composited on top of one another, and each will also handle its own audio.

Using your movement, you will have the ability to stop and start the audio and the two videos either independently or simultaneously, depending on your distance and position within the room.
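To pin the intended behaviour down for myself (the real thing will live in a Pure Data patch, so this is only a sketch), here is a rough outline in Python.  The 1.5 m threshold and the layer objects with their play()/pause() methods are placeholder assumptions of mine, not part of any actual patch.

```python
# Sketch only: pins down the intended two-sensor behaviour.
# The threshold and the layers' play()/pause() methods are placeholders.
PRESENCE_THRESHOLD_M = 1.5  # how close a viewer must be to "hold" a layer

def update_layer(distance_m, video_layer, audio_layer):
    """Pause a video/audio pair while a viewer is close to its sensor."""
    if distance_m < PRESENCE_THRESHOLD_M:
        video_layer.pause()
        audio_layer.pause()
    else:
        video_layer.play()
        audio_layer.play()

def update_installation(sensor_a_m, sensor_b_m, layer_a, layer_b):
    """Each sensor independently drives one composited video and its audio."""
    update_layer(sensor_a_m, *layer_a)   # layer_a = (video_a, audio_a)
    update_layer(sensor_b_m, *layer_b)   # layer_b = (video_b, audio_b)
```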

For me this process has some interesting connotations.  Firstly, having an element of control over when the video is played or stopped gives the viewer the opportunity to inspect the footage and see imagery within it that would otherwise have been lost.  This relates to how we experience the non-space at speed, which prevents us from ever truly identifying with it.

And secondly, each individual having an influence over whether the videos are playing or stopped means that the composite will progressively change.  By the end of the show you would have an alternative edit or mix, one that has been shaped by the presence of the people who chose to inspect it.  It would have shaped its own identity.

25 Jan 08

keep it simple……

I had a tutorial with Andy last week in which we spoke, in part, about the relationship between the technology behind the work and the work itself.

Andy’s opinion was that the work (visually) should always come first.  He felt that the technology shouldn’t be at the forefront of the audience’s gaze; it should not be the first thing they notice.  Quite the contrary: they should not notice it at all.  It should blend into the background and become a part of the experience, not all of the experience.

He then said those wise old words: ‘keep it simple, the simplest ideas are always the best’.

After this discussion, looking back at some of the preliminary ideas on my blog to date, they now look quite cumbersome and ugly.  They place the technology at the forefront of the audience’s gaze, and I couldn’t agree more that the work itself could become lost within these (physically structural) ideas.

So with all that in mind, here is a new idea that is clean and simple.  My project focuses on the non-place and how our movement within this space shapes and defines it; our presence is its identity.  It seems obvious that movement has to be an important variable in the final installation, and should in some way shape how it is experienced.

[Image: ge.jpg]

Andy and I spoke about how it would be interesting to have an installation that reacted and transformed itself only when there was a presence in the room.  When there is no movement it remains an undefinable entity, but when it senses a presence, your movement through the space, it will begin to reveal its true identity to you.

In this diagram the sensor is placed so that its ultrasonic echo covers the distance between it and the entrance to the installation.  If you are walking past or standing outside the room, the visuals remain unidentifiable (either completely static, or perhaps moving at such speed that they cannot be recognised).  If you choose to enter the space, the ultrasonic sensor picks up your presence and changes the visuals into something you can begin to understand.
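As a rough sketch of this logic (outside Pure Data, just to think it through): with an empty room the echo comes back from around the entrance, so any reading noticeably shorter than the room depth means someone has stepped inside.  The depth, tolerance and playback rates below are placeholder numbers, not measurements of the actual space.

```python
# Sketch of the "reveal on presence" logic; depth, tolerance and rates
# are placeholder values, not measurements from the real space.
ROOM_DEPTH_M = 3.0   # sensor-to-entrance distance when the room is empty
TOLERANCE_M = 0.2    # allow for sensor jitter

def presence_detected(distance_m: float) -> bool:
    """A reading noticeably shorter than the empty-room depth means someone is inside."""
    return distance_m < (ROOM_DEPTH_M - TOLERANCE_M)

def playback_rate(distance_m: float) -> float:
    """Empty room: run far too fast to identify; presence: drop to normal speed."""
    return 1.0 if presence_detected(distance_m) else 20.0
```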

We also spoke about how sound could be used: maybe having no visuals and only abstracted sound, so that when you enter the space the visuals start up and complete the puzzle?  But this is all for a later date.  For now I need to concentrate on getting a prototype up and running with an ultrasonic sensor; then I can settle down to the nitty-gritty of developing the visuals.

It’s back to Pure Data…….. (deep breath)

16 Jan 08

Installation idea number 3…….

This is the same as installation idea number 2, except that instead of an ultrasonic sensor it uses pressure pads to ‘feel’ the presence of the viewer and slow down or speed up the video accordingly.

[Image: pressure3.jpg]

I’ve had a look at FXScript in Final Cut Pro, but it seems that it is best suited to creating new filters and transitions rather than manipulating video (well, not in the way that I intend to, anyway).  I’m getting dragged down and down into a coding nightmare!!

Every program I have looked at thus far is just not suitable for this project.  I know I have very little knowledge of this, but I can get an idea of whether a program is going to be able to do what I need pretty quickly (by looking at the manuals, playing around with a few functions, and seeing how other people have used the application in examples on the net, and so forth).

With all that in mind (and my mind becoming quite disheartened with this whole affair), I am going to tackle the beast that is Pure Data.  I know this application can do what I need it to do, and people tell me that once you get into it, it is not all that scary.

So, here we go……..  

 

16 Jan 08

Installation idea number 2…….

This idea revolves around using an ultrasonic sensor to judge the position of the viewer in relation to the projection and, depending on this distance, either slow down or speed up the projection accordingly.

From what I understand, an ultrasonic sensor works by transmitting a short burst of ultrasonic sound towards a target, which reflects the sound back to the sensor.  The sensor measures the time it takes for the echo to return and computes the distance between it and the target.
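In other words, with sound travelling at roughly 343 m/s and the echo time being a round trip, the distance is half of speed × time.  A quick sanity check in Python (the numbers are just illustrative):

```python
# Distance from an ultrasonic round-trip echo time.
SPEED_OF_SOUND_M_PER_S = 343.0   # speed of sound in air at roughly room temperature

def distance_from_echo(echo_time_s: float) -> float:
    """The echo travels to the target and back, so halve the round trip."""
    return SPEED_OF_SOUND_M_PER_S * echo_time_s / 2.0

# An echo returning after about 17.5 ms corresponds to roughly 3 metres.
print(round(distance_from_echo(0.0175), 2))  # -> 3.0
```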

I intend to build a frame to house the sensor and a V-shaped construction that complements the sensor’s field of vision.  This way the viewer, when within the installation, would not be able to stand outside the sensor’s field of vision, and so would always be affecting the visuals.

The housing for the sensor would act as a barrier between the viewer and the projection, ensuring that the closest point the viewer could reach would still be an acceptable distance from which to view the work.

[Image: 3d_view.jpg]

The image below is a bird’s-eye view of the installation.  It aims to give a rough idea of what speed the projection would play back at when the viewer is standing in one of the four zones.

[Image: above2.jpg]
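The exact speeds will come out of experimentation in the space, but the underlying idea is just a lookup from the measured distance to a playback rate.  A placeholder sketch (the zone boundaries and rates here are invented, not the real values from the diagram):

```python
# Placeholder zone boundaries (metres from the sensor) and playback rates;
# the real values would come from the diagram and from testing in the space.
ZONES = [
    (1.0, 0.10),   # zone 1, closest to the projection: very slow, easy to inspect
    (2.0, 0.50),   # zone 2
    (3.0, 1.00),   # zone 3: normal speed
    (4.0, 4.00),   # zone 4, furthest away: too fast to identify
]

def playback_rate(distance_m: float) -> float:
    """Map a sensor distance to the playback rate of the projection."""
    for outer_edge_m, rate in ZONES:
        if distance_m <= outer_edge_m:
            return rate
    return ZONES[-1][1]   # beyond the last zone, keep the fastest rate
```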

To use the ultrasonic sensor I would have to use an Arduino running alongside another application that I have yet to decide upon.  Tim suggested checking out FXScript in Final Cut Pro, and there is obviously Pure Data.
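Whichever application I end up with, the Arduino side stays the same: it streams distance readings over USB serial, one per line, for the other program to pick up.  As a stand-in for that other program, a minimal read loop might look like this (the port name, baud rate and line format are assumptions that would have to match the Arduino sketch; this uses the pyserial library):

```python
# Minimal stand-in for the host application: read distance values that the
# Arduino prints over USB serial, one reading per line. The port name, baud
# rate and units are assumptions and must match the Arduino sketch.
import serial  # pyserial

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue  # timeout or an incomplete line
        try:
            distance_cm = float(line)
        except ValueError:
            continue  # ignore anything that isn't a number
        print(f"viewer distance: {distance_cm:.0f} cm")
```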

This is all so rough at the moment, and also very confusing for me, as I have never taken on anything quite like this before.  But I hope that as my knowledge grows and I work out exactly how to achieve this, I will be able to write more fluently in my blog about what exactly I am doing.  As it stands I feel like I’m just waving my hands around in the dark, hoping to bash into something that I can get my little brain around and use successfully.  It’s getting tough now……… and time is of the essence.

 

14 Jan 08

Installation idea number 1……..

I’ve been thinking about the technicalities of how I propose to show my work at the end-of-year exhibition.  With this post I don’t want to write about what the actual visuals will be; I just want to focus on how the work may be shown and the problems I may encounter with this setup.

As movement is a prominent factor in the theoretical side of my project, I want my final piece to utilise movement, more specifically the movement of the audience, as a central point of this project’s intentions.  To do this I need to employ some kind of motion-sensing technology so that the audience can interact with the video.

I want to offer the audience an opportunity to use their presence to alter the speed of the video footage, to slow it down, so to speak, so they can then inspect the footage more closely (as I said, I’m not going to go into the theoretical ‘why’ of this; I’ll save that for another post!).

Tim suggested to me that the program Flash has the capability to use a webcam as a motion sensor, so with that in mind I have created this blueprint.

[Image: idea123.jpg]

The footage would be three films shown simultaneously (using one projector; the films would be separated in the editing process and shown as one video stream).

I would then need to separate the webcam’s field of vision into three separate parts so that the films could be interacted with individually.  (I could use one projector for each film and a separate sensor for each part, but the ergonomics of this seem a little impractical to me; I’d rather keep the amount of equipment I need to a minimum.)

While the viewer is outside the webcam’s field of vision, the films would all play at 2000%.  When movement is detected in any of the three zones, the corresponding video would slow down to 10%: zone 1 for video 1, and likewise for videos 2 and 3.  This would allow the viewer to interact with the three videos individually.
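Whether Flash can do this is exactly what the experiments are for, but the underlying test is simple enough to sketch outside Flash: split each webcam frame into three vertical strips, compare each strip to the previous frame, and drop the matching video from 2000% to 10% when a strip changes enough.  (Frames here are assumed to arrive as greyscale numpy arrays; the threshold is a guess that would need tuning.)

```python
# Sketch of the three-zone motion test, independent of Flash. Frames are
# assumed to be greyscale numpy arrays of equal size; the motion threshold
# is a placeholder.
import numpy as np

FAST_RATE = 20.0          # 2000% playback while a zone is empty
SLOW_RATE = 0.10          # 10% playback while movement is detected in a zone
MOTION_THRESHOLD = 10.0   # mean absolute pixel difference that counts as movement

def zone_rates(prev_frame: np.ndarray, frame: np.ndarray) -> list:
    """Return the playback rate for videos 1-3 based on movement in each zone."""
    rates = []
    zones = zip(np.array_split(prev_frame, 3, axis=1),
                np.array_split(frame, 3, axis=1))
    for prev_zone, zone in zones:
        diff = np.abs(zone.astype(float) - prev_zone.astype(float)).mean()
        rates.append(SLOW_RATE if diff > MOTION_THRESHOLD else FAST_RATE)
    return rates
```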

I have obtained a copy of Flash and am in the process of running a few experiments.  I really want to see whether the program can handle the amount of data I intend to pass through it in real time.  I need this to be as solid as possible, as the last thing I want is for the program to constantly crash and need to be reset during the actual show.

With no knowledge of coding or motion sensors, etc., this seems like a daunting task.  I hope to draw on the expertise of Tim and Leon to learn as much as I need to know, because I have a very short space of time in which to learn and create this.  But hey! Always up for a challenge…