The KHRONOS PROJECTOR

[a video time-warping machine with a tangible deformable screen]

 

by Alvaro Cassinelli

Ishikawa-Komuro Lab - The University of Tokyo

Department of Information Physics and Computing /Graduate School of Information Science and Technology.

 


 

 

VIDEO, SLIDES and DEMO APPLET HERE

 

·    Short Demos together [wmv: 25MB]

·    Description [wmv: HQ (40MB) / LQ (8MB) ] / [ mov: HQ (72MB) / LQ (10MB)]

·    Description and demos [wmv: 178MB]

·    Slide Presentation [PPT: 10MB]

·    DEMO APPLETS (Java/Processing)

·    More:

o      IMAGES & VIDEO page

o      TIMESCAPE installation (with live screen)

o      Coarse-Mode or "cellular-video"

    1. What, Why and How
    2. Installation Setup
    3. Video Content
    4. Demos (video & snapshots)
    5. Online Demo (Java Applet)
    6. Conclusions and Remarks
    7. Future Works
    8. Acknowledgments
    9. Exhibition History
    10. Contact
    11. References

 


 

 

What?

 

The Khronos Projector is an interactive-art installation allowing people to explore pre-recorded movie content in an entirely new way. A classic videotape allows only simple control over playback (stop, rewind, fast-forward, and elementary control of the playback speed). Modern digital players add little more than the possibility of performing random temporal jumps between image frames.

The goal of the Khronos Projector is to go beyond these forms of purely temporal control by giving the user an entirely new dimension to play with: by touching the projection screen, the user can send parts of the image forward or backward in time. By actually touching a deformable projection screen, shaking it or curling it, separate "islands of time" as well as "temporal waves" are created within the visible frame. This is done by interactively reshaping a two-dimensional spatio-temporal surface that "cuts" the spatio-temporal volume of data generated by a movie.

Click on image to launch video [WMV, 7MB]

 

Click on image to launch video [MOV, 884KB]

There are many ways to visually explore the spatio-temporal volume generated by a movie; most of them result from cutting that volume with a two-dimensional surface. The usual way of visualizing video content consists of showing each image of the sequence consecutively; in other words, the intersecting surface is a plane that always remains perpendicular to the time arrow. If, instead, the spatio-temporal volume is intersected by planes that are not perpendicular to the time axis, the resulting image will show a spatio-temporal gradient. Of course, many works and artworks have already developed such ideas (see for instance [2, 3, 4 and 5], or check this website by Golan Levin for a more complete, chronologically ordered and up-to-date "informal catalog" of slit-scan artworks [13]). See also [1] for an interesting short essay on the representation of Time and Space in Western classic, modern and contemporary art; the essay also includes some references to recent interactive installations dealing with the subject.

Now, to the best of my knowledge, the Khronos Projector is the first art installation enabling the interactive shaping of an arbitrarily complex spatio-temporal cutting surface through a dedicated tangible and "sensual" human-machine interface, thus giving users the strong feeling of actually sculpting the space-time "substance" with their own hands.

 

Why?

 

When contemplating a still image or a motionless sculpture, we are free to direct our sight wherever we want over the whole work - perhaps only secretly compelled by the compositional forces the author has succeeded in instilling in it. This is barely possible in a movie - we are forced to adopt a point of view in both space and time. Using the power of computers, we can free ourselves from this constraint.

 

The Khronos projector unties time and space in a pre-recorded movie sequence, opening the door to an infinite number of interactive visualizations. Using the Khronos projector, the causality of events becomes relative to the spatial path we decide to walk on the image, allowing for multiple interpretations of the recorded facts. In this sense, the Khronos projector can be seen as an exploratory interface that transforms a movie sequence into a spatio-temporal sculpture for people to explore at their own pace and will - or a machine to produce instant-cubist imagery out of ordinary video footage.

From the technical point of view, this work expands on the work I have been doing at the Ishikawa-Namiki Lab concerning human-machine interfaces using laser-based active tracking and dedicated, real-time image-processing vision circuits [8]. The feedback I received during the presentation of a prototype laser-based tracking system at SIGGRAPH 2004 [9] convinced me of the necessity of developing a tangible human-machine interface capable of interpreting touch - and if possible, capable of sensing even the delicacy of a caress - while at the same time able to react in a subtle and natural way, also through tactile feedback. The Khronos Projector's tissue-based deformable screen is a first step in that direction.

 

Click on image to launch video [WMV, 4MB]

 

Click on image to launch video [WMV- 7.3MB]

How?

 

The spatio-temporal fusion algorithm is the core of the Khronos program: it consists of blending hundreds of images from a movie sequence to produce a single displayed image. The blending operation is controlled by a "spatio-temporal filter", or "spatio-temporal cutting surface", which is a two-dimensional surface lying inside the "spatio-temporal volume" of the movie. The fused image is the union of all the intersections of the spatio-temporal surface with each image in the sequence.
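The fusion can be sketched in a few lines of hypothetical Python (not the installation's actual C++/OpenGL code), assuming the cutting surface is stored as a per-pixel map of fractional frame indices:

```python
# Minimal sketch of the spatio-temporal fusion: each output pixel samples the
# movie volume at the frame index given by a per-pixel "time map". The names
# and data layout here are illustrative assumptions, not the original code.

def khronos_fuse(volume, time_map):
    """volume: list of frames (each a 2D list of pixel values);
    time_map: 2D list of fractional frame indices, same H x W as a frame.
    Returns one fused frame, linearly blending adjacent frames in time."""
    n = len(volume)
    fused = []
    for y, row in enumerate(time_map):
        out_row = []
        for x, t in enumerate(row):
            t = max(0.0, min(t, n - 1.0))   # clamp to the sequence length
            t0 = int(t)
            t1 = min(t0 + 1, n - 1)
            a = t - t0                      # blend weight between the two frames
            out_row.append((1 - a) * volume[t0][y][x] + a * volume[t1][y][x])
        fused.append(out_row)
    return fused

# Tiny example: a 3-frame, 1x2 "movie" whose pixel values equal the frame index
movie = [[[0, 0]], [[1, 1]], [[2, 2]]]
print(khronos_fuse(movie, [[0.0, 1.5]]))   # → [[0.0, 1.5]]
```

A flat time map reproduces an ordinary video frame; a tilted or locally "punched" map produces the spatio-temporal gradients and islands of time described above.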

 

The program (extremely computationally hungry) was prototyped using Matlab, and finally coded in C++ using OpenGL. The present version can display the dynamic blended image both in 2D and in 3D, by mapping the image onto a NURBS-generated surface representing the actual time/pressure map. In the most basic embodiment of the proposed installation, the spatio-temporal filter is shaped interactively using a mouse, a graphic tablet or a touch-screen LCD display. A physically-based model of a membrane is then used to compute, in real time, a realistic shape for the spatio-temporal filter.
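As an illustration of the kind of membrane model meant here (a plausible scheme of my own devising, not the installation's actual physics code), a touch map can be spread to its neighbours and relaxed back to rest with a simple damped-diffusion step:

```python
# Hypothetical membrane step for the spatio-temporal filter: each touch raises
# the local "temporal pressure", which then spreads to neighbouring points and
# slowly relaxes toward zero, like a damped elastic sheet.

def relax(pressure, spread=0.2, decay=0.95):
    """One simulation step over a 2D pressure map (list of lists of floats)."""
    h, w = len(pressure), len(pressure[0])
    nxt = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # average of the 4-neighbourhood (clamped at the borders)
            nb = [pressure[max(y - 1, 0)][x], pressure[min(y + 1, h - 1)][x],
                  pressure[y][max(x - 1, 0)], pressure[y][min(x + 1, w - 1)]]
            mixed = (1 - spread) * pressure[y][x] + spread * sum(nb) / 4.0
            nxt[y][x] = mixed * decay       # global relaxation toward rest
    return nxt

# A single "punch" in the middle of a 3x3 membrane spreads and fades:
p = [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]]
p = relax(p)   # the centre value drops while its neighbours rise above zero
```

Iterating this step between frames gives the "punch", spreading and relaxation behaviours that the installation exposes as tunable parameters.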

 

In a more sophisticated embodiment (see image above), a real deformable screen made from a thin elastic fabric is scanned using infrared light and a dedicated Vision Chip. The resulting human-computer interface delivers position and pressure information in real time, and it can also be controlled by light (more below).


Installation Setup

 

The installation uses a video projector, a large tissue-based deformable projection screen, and a sensing mechanism capable of acquiring the deformation of this tissue in real time. The Khronos projector waits for its public: in the absence of any tactile input, the projector displays the first image of the sequence. Driven by curiosity, the public eventually approaches and touches the screen surface, thus triggering the interactive experience. The behavior of the spatio-temporal cutting surface can be tuned in many different ways (in particular, the way the surface reacts to a local "punch", the way it spreads the spatio-temporal deformation, and the way it evolves and relaxes). An interesting point about this installation proposal is that people would be able to try the Khronos Projector on videos they bring themselves. A color printer may be placed nearby in order to print high-resolution snapshots of the interactive video.

 

a. Simple setup [ tablet / touch-screen ]

The Khronos Projector software has been extensively tested on a standard monitor using a graphic tablet, as well as with a touch-screen monitor. The screen is rigid, but by avoiding the graphic tablet and interacting directly with the touch-screen surface, impressive enough results are obtained (note: most touch screens do not sense multiple levels of pressure, so the interface capabilities are somewhat reduced). The tactile screen used is a five-wire resistive touch screen, which only senses two levels of pressure. Using a graphic tablet (any one compatible with the open-standard Wintab library) it is possible to capture multiple levels of pressure, enabling precise control of the "temporal pressure"; however, this approach somewhat reduces the feeling of immersiveness. (A pressure-sensitive touch screen would be the ideal hardware interface, but it seems that the only ones available on the market need a dedicated input stylus.)

 

b. Dedicated hardware interface [ deformable screen / Vision Chip ]

 

Click on image to launch video [WMV-4MB]

In this embedding, the spatio-temporal blended images are projected on a deformable screen, whose deformation is scanned in order to compute the "temporal-pressure" map.

 

The projection screen is made intentionally large, so that people are unable to interact with the whole image at the same time: it is impossible to "push" on the whole image in order to synchronously project it forward in time. With this configuration, the ubiquitous time arrow is de facto broken. The public needs either to be helped by someone else, or to walk a little distance in order to "push" another part of the image and see what happens - thus unveiling causality in an indirect way, relying on memory and deduction. I believe this is a nice spacetime-transposed metaphor of what actually happens in a conventional movie theater: we have to wait (i.e., walk in time) in order to create understanding.

Click on image to launch video [WMV-2MB]

 

A screen made from a subtle, almost organic elastic fabric should give the installation just enough sensuality to call for the touch - thereby triggering the spatio-temporal fusion and bringing the images alive. I believe this to be the most elegant and compelling embodiment of the Khronos Projector interactive-art installation. Using relatively cheap spandex fabric as a projection screen, it is indeed possible to build very large (several meters wide) screens. Images are back-projected by a standard LCD projector. People standing in front of the screen see the whole image and can physically interact with it by touching it (with their hands and feet, a stick, or even by throwing things at it - like small pebbles that will generate water-like temporal ripples - click on images to launch video). The actual content to be "Khronos-projected" may be a single sequence shown for the whole duration of the exhibit or, conversely, tens of sequences from a demo database, shown in a round-robin fashion (one every hour or so).

 

Click on image to launch video [WMV-9MB]

However, given the particularly large screen, it may be more interesting to project a unique, simple sequence specially tailored to be "analyzed" using the Khronos projector, playing with causality and temporal paradoxes in a suggestive way.

This can be for instance a short story with two or three actors; an in-camera short skit, that resolves in less than a minute and that people have to understand and reconstruct by mentally solving a spatio-temporal puzzle.

Click on image to launch video [WMV-6MB]

 

Time-lapse photographic sequences may also be an appropriate subject for a large screen - for instance, a sequence capturing a sunset over the sea. The installation booth can be (minimalistically) decorated so as to suggest the seashore (sand on the floor, a bucket of pebbles, water). A seashore soundtrack can also be added to enhance the feeling of immersion in the quiet, timeless atmosphere of a holiday vacation. Best of all, from the aesthetic point of view, may be to Khronos-project a sequence showing a short dance performance (see examples here). If the shooting is properly done, the Khronos projector may be able to render both the beauty of the images and the beauty of the movements, which are essentially space-time figures. The large projection screen will be a window onto an interactive live stage.

 

Technical Considerations

The deformable screen is made from a thin, translucent elastic fabric (e.g. spandex-based fibers, better known as Lycra). There are several ways to "scan" this surface in order to extract its depth characteristics. A rather conventional technique would rely on a 2D scanning rangefinder. After giving the problem some thought, I decided to study a cheaper technique not relying on any laser scanner. In fact, a conventional CCD camera and a special illumination configuration suffice: using an on-axis infrared source (such as an LED ring) and a conventional CCD camera (with an IR filter), it is possible to acquire a gray-scale image indicating, at each location on the surface, the inclination between the illuminating ray and the surface normal. If both the camera and the light source are relatively far from the screen, and placed so as to share the same optical axis (see figure), then it is relatively easy to compute the actual deformation of the screen from the gray-scale map. To enhance infrared reflectivity and diminish the risk of projector hot spots, the spandex-based fabric is plastic-coated on the inner side.

Click on image to launch video [WMV- 9.4MB]

 

Using this setup, the reconstruction of the deformation field can be done very precisely [17], or in a looser, "heuristic" way in the case of the very simple deformation patterns produced by one or more points of pressure: the center of pressure (X, Y) is the center of gravity of the dark "spot", and the actual pressure is simply related to the radius of the spot (see figure on the left). This computation can be done extremely fast using a dedicated image-processing circuit developed in our lab: with our Vision Chip, up to a thousand images can be processed per second [8]. Among other things, this massively parallel processor (containing one elementary processor per pixel) is capable of thresholding, extracting the center of gravity of the resulting binary image, and calculating the size of the "white" area - all this (and more) at a rate of a thousand images per second.

Recently, a compact, USB-based version (SR3300) has been put on the market by Nippon Precision Circuits. Since this chip has a reduced resolution (48x32), the number of pressure levels resolvable by the screen is presently limited to around fifteen, which is nevertheless quite sufficient for most applications (the effective spatial resolution is much larger, because the chip computes the center of gravity of the image, thus achieving super-resolved pixel accuracy).
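The heuristic read-out described above can be sketched as follows (an illustrative reconstruction in Python, not the Vision Chip's actual on-chip program): threshold the infrared image, take the center of gravity of the dark spot as the touch position, and use the spot's area as a proxy for pressure.

```python
# Hypothetical sketch of the heuristic pressure read-out: dark pixels mark the
# deformed region; their centroid gives (X, Y) and their area gives pressure.
import math

def touch_from_image(img, threshold=0.5):
    """img: 2D list of grayscale values in [0, 1], dark = deformed.
    Returns (cx, cy, pressure) or None if nothing is pressed."""
    xs, ys, area = 0.0, 0.0, 0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v < threshold:       # pixel belongs to the dark spot
                xs += x
                ys += y
                area += 1
    if area == 0:
        return None
    # pressure ~ radius of the spot, assuming it is roughly circular
    return xs / area, ys / area, math.sqrt(area / math.pi)

img = [[1, 1, 1, 1],
       [1, 0, 0, 1],
       [1, 0, 0, 1],
       [1, 1, 1, 1]]
cx, cy, p = touch_from_image(img)
print(cx, cy)   # → 1.5 1.5 (the centroid lands between pixels)
```

Because the centroid averages many pixel coordinates, the touch position is resolved to sub-pixel accuracy even on a 48x32 sensor, which is the "super-resolved" effect mentioned above.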

 

Preliminary tests were performed to validate each aspect of the working principle. These tests consisted of projecting images on a screen made of Lycra and estimating possible occlusion and shading problems using the Vision Chip, as well as implementing a prototype real-time 3D shape-reconstruction algorithm using the Vision Chip, infrared sources, and solid - but curved - surfaces [10]. At the present date (5 July 2005), the proposed system is fully functional (although a different lighting configuration was finally adopted, using low-angle infrared sources instead of on-axis sources; see [14] for details).

 

Interaction with Light

Since the detection is based on optical technology, the screen can also be driven by light beams (click on the opposite image for a video demo). The "pressure" is related to the size of the light spot; thus, an ordinary flashlight with a focusing ring can be effectively used to command position as well as pressure (see video on the right). This double modality of interaction makes the deformable screen extremely attractive for a variety of applications beyond the Khronos Projector installation: it allows long-range interaction using ordinary flashlights, as well as short-range, physical interaction with the screen.

Click on image to launch video [WMV-2.4MB]

It is interesting to note that this deformable screen, though first conceived as a dedicated pressure-sensitive and deformable screen for the Khronos projector, may itself represent an interesting human-computer interface for defining and visualizing arbitrarily-shaped slices of volumetric data (e.g. medical body-scanner images). In particular, it could be a starting point for developing a pre-operative interface capable of showing inner body sections mapped onto complex surfaces, just as they would appear to the surgeon during an actual operation.


Video Content & Experiments

 

Perhaps the most interesting "Khronos-projected" videos are those able to produce simple spatio-temporal paradoxes and/or unusual visual effects. Irreversible phenomena are an interesting subject to shoot: as with conventional cinema, these sequences can be shown backward in time, but the Khronos projector brings new, unusual effects. This is possible by shooting sequences in which time and space remain orthogonal narrative parameters.

 

Click on image to launch video [MOV- 752KB]

Click on image to launch video [WMV- 5.6MB]

 

In this sense, the Khronos projector is a suggestive tribute to Einstein's Theory of Relativity: the temporal relationship between two physically separate events is a perception relative to the observer. The Khronos installation is no more - and no less - than a suggestive tribute to the theory (on its 100th anniversary!) because it only describes, through a metaphor, an otherwise well-established scientific fact: that the universal Time Arrow is just an illusion, "although a convincing one", as Einstein himself put it.

 

In any case, it is not a serious platform for recreating any typical Relativity paradox, because the interactive projection infringes one fundamental law of Nature: although it is true that temporal relationships are relative to the observer's inertial frame, causality is not relative (this follows from the fact that information cannot travel faster than light). Two events in causal relationship should therefore always maintain their temporal order. The Khronos projector breaks this rule, but that is precisely what makes the experience interesting and fun!

 

Also, the result of visualizing simple gestures with the Khronos projector can be strikingly weird, probably because our visual system is fed with information that almost makes sense - but for some subtle, continuous disturbance of the temporal coordinates of each pixel. The result of this reconstruction effort is that sometimes the deformation is no longer interpreted as an isometric rotation or translation of the whole subject, but instead as a tremendously deformed gesture.

 

There are some interesting things to explore here concerning the emotional response to such "impossible" gestures, and perhaps the Khronos Projector can effectively be used to explore some cognitive aspects concerning our visual system (for instance, how motion is integrated in the process of face recognition).

 

Exploring Time Lapse photography with the Khronos Projector.

Time-lapse photographic sequences are formed by taking a snapshot every minute, hour or day from a fixed camera aimed at a natural or artificial landscape. Some classic examples are a one-hour sequence of a sunset over the sea (e.g. one snapshot per minute), a 12-hour shoot of a city as seen from a skyscraper (e.g. one picture every five minutes), etc. Interesting effects may result from the gradual change of the skyscrapers' shadows, the reflection of the Sun on the windows, the changing color of the sky, the lights on the streets, and so on.

For instance, in the image on the right a "Time-Punch" brings in the night as a dark eye in the middle of the sky. More simply (and classically), a spatio-temporal gradient can be formed on the image by selecting a planar temporal filter. The source material for these images is a time-lapse photographic sequence lasting three hours, with a three-minute interval between frames. The view is from an apartment with a commanding view over central Tokyo.

 

Click on image to launch video [WMV- 6.9MB]

 

 

A number of interesting experiments can be performed by tuning just a few parameters of the Khronos program. These experiments may produce curious and aesthetic effects, or represent an interesting interactive experience per se. They also point at some ideas that may help in contriving a custom-tailored movie for the Khronos projector. These options will remain invisible during the exhibition so as to avoid confusion and reduce the interaction-learning time to an absolute minimum (no learning necessary at all, only intuitive interaction by "pressing" the screen). Some pre-tuning will be performed beforehand, so as to simplify and purify the interactive experience. Parameters can however be changed from time to time.

  • Painting with Time: by slowing down the reaction time of the temporal surface filter, it is possible to hold the time coordinates at different parts of the scene for a while. Time is spread by the user like oil on a canvas. (A "gravity effect" is also implemented that produces a temporal smearing of the image towards the bottom.)
  • Bright Future mode: the brightness of the image is proportional to the temporal coordinate, so that the present is totally black and the last image of the sequence is normally lit. This enhances the feeling that the future is being "discovered" (the opposite, i.e. "Dark Future" mode, produces a quite disturbing feeling, metaphorically rendering our inability to see the future).
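The Bright Future mode above amounts to a per-pixel brightness scale; a minimal sketch (my own guess at the mechanism, not the installation's code) would be:

```python
# Illustrative "Bright Future" mode: each fused pixel's brightness is scaled
# by its normalised time coordinate, so regions pushed further into the
# future are lit while the present stays black.

def bright_future(fused, time_map, n_frames):
    """fused: 2D list of grayscale values; time_map: per-pixel frame indices."""
    return [[v * (t / (n_frames - 1)) for v, t in zip(row, trow)]
            for row, trow in zip(fused, time_map)]

# One row, same gray value at the present, midway, and at the last frame:
print(bright_future([[200, 200, 200]], [[0, 50, 100]], 101))
# → [[0.0, 100.0, 200.0]]
```

"Dark Future" mode would simply invert the scale factor, so the last frame is black instead of the first.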

 

  • Minimalistic temporal filters: for instance, defining the cutting surface as simple square windows, and making the way it translates in time and space discrete as well. The image is thereby decomposed into cubist, square-shaped islands of time, producing a "Vasarelesque", spatio-temporal version of Sam Loyd's Fifteen puzzle.
  • Time-tunneling (rapid background relaxation): the user only sees the future in a very small window, and has to rely on memory to reconstruct a ubiquitous "now". In an extreme setting, we get the effect of a shower of time-bullets all over the image.
  • Aging Saturation: other parameters have been implemented allowing, for instance, subtle changes in the color of pixels as they merge in time; in "Aging Saturation" mode, going back in time desaturates colors, so that portion of the image turns grayscale, somehow reminding us of the "old" times of black-and-white photography.
  • Spatio-Temporal Ripples: the spatio-temporal disturbance propagates on the screen as water-like waves (see image on the right).

 

  • Chromatic Time Arrow: the Khronos Projector interactively "mixes" time and space; however, it is also possible to mix other "spaces" - for instance the color space, the motion-flow field... or even higher-level semantic spaces, such as the result of a segmentation of the image based on face or gesture recognition. Time would "freeze" around a sad face, or accelerate around the shape of a person running. A (rather simple) example of per-pixel luminance/time entanglement (the speed of time evolution is controlled by the pixel luminance) can be seen by clicking on the image on the left. It is interesting to note that the projection will either converge to a final static image, or cycle around a finite set of images.

Click on image to launch video [WMV, 9.6MB]

 

  • Video and LIVE Video modes: the Khronos Projector is usually set to display the first image of the sequence when there is no physical interaction with the screen; although I believe this contributes in a meaningful way to the purpose of the installation (i.e. the user controls Time), the Khronos Projector also features two other modes: Video and Live Video. In Video mode, the images in the buffer are streamed onto the screen, and people can interact with this pre-recorded video content by accelerating or decelerating sections of the moving image (click on image to launch video [WMV, 23MB]).

 

  • Alternatively, in Live Video mode, the spatio-temporal volume of data is updated in real time through a webcam sitting in the exhibition room or somewhere else. At any given time, the spatio-temporal volume is formed by a set of snapshots taken, say, every minute during the last 12 hours, or every fraction of a second during the last minute. At every capture, a new picture is added to the pool of images while the oldest is discarded, forming a first-in first-out buffer of images (an idea inspired by the Last Clock art installation [7]). This way, the Khronos projector displays live video in the absence of interaction (click on images to launch video [WMV, 22MB] and [WMV, 21MB]).
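The Live Video buffer described above is a plain ring buffer of frames; a minimal sketch (hypothetical code, with illustrative names) could be:

```python
# Sketch of the Live Video first-in first-out buffer: each new webcam frame
# pushes out the oldest one, so the spatio-temporal volume always spans the
# most recent captures.
from collections import deque

class LiveVolume:
    def __init__(self, depth):
        self.frames = deque(maxlen=depth)   # deque drops the oldest frame itself

    def capture(self, frame):
        self.frames.append(frame)

    def volume(self):
        return list(self.frames)            # oldest first, newest last

buf = LiveVolume(depth=3)
for f in ["f0", "f1", "f2", "f3"]:
    buf.capture(f)
print(buf.volume())   # → ['f1', 'f2', 'f3'] (f0 was discarded)
```

With a deep buffer and slow captures the volume spans hours (a "12-hour city" view); with a shallow buffer and fast captures it spans the last few seconds of live action.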

 


Video & Snapshots Examples

The sequences fed to the program may be composed of several hundred high-resolution images; it is not the number of images that slows down the real-time rendering but their size, since the temporal fusion algorithm must sequentially process every pixel (when not using 3D textures). Using an Athlon 64 processor at 2.2 GHz (roughly equivalent in computing power to a 3 GHz Pentium) with an NVIDIA GeForce 6600 graphics-acceleration card (512MB), the temporal fusion algorithm achieves smooth real-time rendering (in both 2D and 3D modes) for VGA (640x480) images with 32-bit color. Much higher resolutions can be used to produce high-quality snapshots, for printing purposes for instance. Click here or on the images to see many video demos & snapshots of the Khronos Projector in action.


Demo Applets

The demo applet (click here or on the image on the left) is a simplified version of the software that runs the Khronos Projector installation. It accepts input from a mouse or touch screen. To run the program online, you need Java installed in your browser. This version (coded in Processing) will continue to evolve (in particular, I would like to enable uploading movies from webcams or personal files).

 

Note: this Java applet is not as fast as the C++ version, which has been optimized for real-time rendering using 3D textures on graphics-accelerating hardware (and is consequently able to handle high-resolution image sequences in real time). The size of the images is kept small (320x240) and the length of the sequence somewhat reduced in order to minimize upload and execution time (for the same reason, the spatio-temporal blending function has been simplified, which produces visible steps instead of a smooth temporal gradient between successive image layers).


Conclusion and Remarks

Something new can be expected to emerge from this interactive experience: a new perspective on the recorded events and their temporal relationships - something perhaps not even accessible to the film maker in the first place. The spectator becomes an explorer, walking her/his own path over the bulk spatio-temporal data of the movie, neither helped nor constrained by the temporal arrow that fills space, dictates causality and creates understanding in a usual movie projection. At the same time, he will certainly feel lost: understanding what is going on now demands active participation, an exhaustive and conscious exploration of each portion of the visual space. We can say that by shattering the spatio-temporal volume of the movie, the Khronos projector forces our conscious attention onto silent yet powerful perceptual-unification processes that encompass intensive recall, spatial memory skills and logical reasoning. From my own experience with the Khronos Projector, I believe this can be a puzzling and recreational experience for the public.

Presently, even the film maker least prone to linear narratives is forced to integrate the screen-ubiquitous time arrow as a fact in her/his work. Therefore, from the creator's side, something new may also emerge from the idea of having a Khronos Projector accessible to her/his spectators: a new way of approaching and conceiving a visual narrative altogether. (By the way, I wonder how many film makers have really integrated the fact that a large audience can nowadays see their work using a digital player - and is thus able to stop, rewind, replay and even skip some parts, just as if they were reading an illustrated book. The result is an interactive "audience's cut": people edit and post-produce the movie at home.) For instance, imagine a version of Hitchcock's "Rear Window" prepared for the Khronos projector: as in the original movie, the whole story is to be reconstructed from pieces of sequences taking place simultaneously, but because of the hidden correlation between facts, understanding must rely heavily on memory. Now, thanks to the Khronos interface, we would be able to go backward or forward in time at - literally - the desired window. It would probably destroy the interest of the whole movie, since suspense is maintained thanks to our actual ignorance and perpetual conjecturing. However, the movie in question was shot only to be seen in a movie theater, using conventional reproduction hardware, precisely the kind that does not allow the public to freely explore the space-time volume. A "Rear Window" made for the Khronos projector would perhaps contain several possible interleaved stories, depending on the way the space-time volume is explored.

In a word, the Khronos projector installation suggests the possibility of freeing both the film maker and the audience from the constraint of a pixel-ubiquitous time arrow [18]. Despite this new degree of freedom, the experience is not yet a simulation: during a Khronos projection, we cannot change the nature of the pre-recorded events, only the perspective, only the way we perceive their temporal relationships. For a moment we are like ghosts able to wander at will, both in space and time, in a world of images frustratingly inaccessible. Interaction is impossible; observation is all we can do. Yet this "controlled" observation can produce deep changes in the way we reconstruct the patterns of causality, exposing a hidden multitude of subjective realities. In La invencion de Morel (1940), Bioy Casares describes a character trapped in a three-dimensional recording, which keeps repeating over and over again. The character soon discovers that this world is being created by a machine - a sophisticated omni-projector - and that the plot was recorded years ago, immortalizing the characters, their bodies and perhaps also their souls. To his despair, he also discovers that he is utterly invisible to these people. Yet he can wander among them and fancy some interaction, stop the projection, launch it again, and even record himself into it: in a sense, he is "dubbing" reality. The Khronos Projector gives us a glimpse of this thrilling possibility of being at the same time a spectator of fixed movie content and also, in a strong sense, the director of a personalized post-production. One aspect of this project that I find particularly appealing is that although the interface represents an entirely new visualization technique, it allows for the exploration of otherwise conventional movie content. This may arouse the curiosity of anyone with access to a camcorder: people may want to try their own movies through the Khronos projector.
This also opens the door to contributions from other artists in present and future exhibitions, who may find the project challenging and/or enriching enough.

For more details on this project as presented at SIGGRAPH 2005, Emerging Technologies, take a look at the slide presentation [14].


Future works

Lastly, I would like to point to some ideas for improving the present interface, as well as for further developing the whole concept:

 

  • Full-body interface, capable of capturing complex and large deformations of the fabric. With it, it would be possible to physically plunge into the video cube. A large, sensitive screen is currently under development using thin latex sheets instead of spandex (a similar use of this material has been magnificently demonstrated by multimedia artist H. Tanahashi [16]).

Click on image to launch video [WMV, 1.4MB]

  • Add other modalities to enhance the experience, such as sound or voice. A sound sample, such as stretching rubber, could be played as the surface is stretched and deformed, contributing to an organic, non-digital experience. If the interaction were controlled by throwing pebbles onto the deformable display, the formation of "time-ripples" on the surface could be accompanied by the sound of a stone plunging into water. In sequences involving talking people, it would be interesting to intelligently integrate the audio track, for instance by clipping and attaching phrases or words to precise spatio-temporal coordinates (i.e. events). Sounds, words or phrases would then be triggered and modulated by the user's pressure on the screen.
  • 3D Khronos Projector, using GLUT stereo frames (with LCD-shuttered goggles). Fed with stereoscopic video, the Khronos Projector would display time-warped three-dimensional objects. (I cannot resist citing here T. Iwai's tour de force: he built the real thing, i.e. a real three-dimensional "buffer" - a rotating house - that is time-sliced by synchronized light beams, resulting in the illusion of a physically "morphed" object thanks to retinal image retention [15].) In an interesting variation of the idea, parallax could even be used to adjust the time coordinate, so that trying to see "behind" something would in fact translate into seeing "before" it (by rotating the space-time volume).
  • Use image databases not necessarily coming from "slicing" video or time-lapse photography. As said above, an interesting example database may be a set of body-scanner images; but it would also be interesting to apply the interactive and smooth "image-fusion" algorithm of the Khronos Projector to geological/aerial/historical maps, architectural or mechanical drawings, etc.: in a word, to any set of images each corresponding to a different information layer. (The additional "data" would appear under the pressed area, smoothly blended/fused with the rest of the image.)
  • Develop a completely different interface that would enable sculpting and carving into the spatio-temporal volume of the movie as if it were made of clay. The simple deformable tissue-based screen could be replaced by a more sophisticated "tangible" display (using the amazing "GelForce" interface developed at the University of Tokyo [11], or a "tangible" display developed at MIT [12], or even an interface capable of complex haptic feedback, such as Sony's "TouchEngine") to achieve direct "tactile" sensing of the spatio-temporal object being sculpted and manipulated (e.g. spatio-temporal contours). This work is closely related to [3].
  • Use the pressure=time paradigm not just to explore spatio-temporal content, but also to create new content. For instance, a similar interface could be used to search and place images, alphanumeric data or sound in 3D space, thus converting the space into a Volumetric Note-Pad. Of course, the depth coordinate could readily represent “time” (hours, days or months), making for a Volumetric Agenda.
  • Last but not least, the Khronos Projector could develop into an interface for deciphering what is going on in complex physical phenomena: for instance, recorded activity of neurons on the surface of the brain, or femtosecond quantum interaction phenomena in a lattice of atoms. In both cases, being able to "evolve" some parts of the image in time while keeping others in the present could help uncover long-range causal relationships.
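All of these variations rest on the same per-pixel pressure-to-time mapping: each pixel of the output image is fetched from a depth of the video cube proportional to the local pressure, with neighbouring frames blended for smoothness. The following is a minimal illustrative sketch of that idea, not the installation's actual code (which runs as a Java/Processing applet); array shapes and names are my own assumptions.

```python
import numpy as np

def khronos_slice(cube, pressure):
    """Sample a 2D image from a video cube of shape (T, H, W) using a
    per-pixel pressure map in [0, 1]: 0 -> first frame, 1 -> last frame.
    Fractional depths are blended between the two nearest frames."""
    T = cube.shape[0]
    t = pressure * (T - 1)            # per-pixel time coordinate
    t0 = np.floor(t).astype(int)      # lower frame index
    t1 = np.minimum(t0 + 1, T - 1)    # upper frame index (clamped)
    a = t - t0                        # blending weight in [0, 1)
    if cube.ndim == 4:                # color cube (T, H, W, C)
        a = a[..., None]
    rows, cols = np.indices(pressure.shape)
    lo = cube[t0, rows, cols]         # pixel from the earlier frame
    hi = cube[t1, rows, cols]         # pixel from the later frame
    return (1 - a) * lo + a * hi

# Toy example: four uniform grayscale frames and a pressure "touch" at (4, 4).
cube = np.stack([np.full((8, 8), v, dtype=float) for v in (0.0, 1.0, 2.0, 3.0)])
pressure = np.zeros((8, 8))
pressure[4, 4] = 1.0                  # touching pushes this pixel to the last frame
out = khronos_slice(cube, pressure)
print(out[0, 0], out[4, 4])           # untouched pixel stays at frame 0; touched one reaches frame 3
```

In the installation the pressure map would come from sensing the deformation of the fabric screen; smoothing that map spatially is what produces the continuous "temporal waves" rather than abrupt time jumps between neighbouring pixels.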

Contributions & Acknowledgements

First of all, I would like to thank the director of my laboratory, Ishikawa Masatoshi, for allowing me to spend part of my serious research time on the final phases of this project - which, strictly speaking, is far from being a purely scientific occupation (is it?). This is possible because he sees a deep interconnection between Art, Science (and Technology), as parts of a large, noble (and perhaps better left undefined) human endeavor. I think it is no longer necessary today to prove that the most interesting results and discoveries in any of the above-cited fields come from a cross-disciplinary approach - or at least from a truly open frame of mind.

Had these noble arguments not been sufficient to justify this escapade... well, let's not forget the fun of putting it all together and enjoying exciting discussions with people from very different fields. In particular, I am indebted to the following people, who contributed, each in their own way, to this work: Anchelee Davies, "Jeff" & Megumi Schneider, Marco Cuturi, Philippe Pinel, Eric Augier, Stephane Perrin, Gonzalo Frasca - and especially Monica Bressaglia, Luc Foubert and Jussi Angesleva.

 

 

Alvaro Cassinelli, Tokyo, 10 January 2005

Latest supporters and content contributors (update 24 February 2006):

 


Exhibition/Awards History

 


Contact

 

Alvaro Cassinelli. Assistant Professor (Research Associate).
Ishikawa-Komuro Laboratory,

Department of Information Physics and Computing.

The University of Tokyo. 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 Japan.
Tel: +81-3-5841-6937 / Fax: +81-3-5841-6952
Email: alvaro (at) k2.t.u-tokyo.ac.jp / cassinelli.alvaro (at) gmail.com


Some interesting references and links

 

[1] S. Jaschko. Space-Time Correlations Focused in Film Objects and Interactive Video. Published in: ISEA Papers, Nagoya/Japan (2002). Also published in Future Cinema. The Cinematic Imaginary after Film. Edited by Peter Weibel, MIT Press, (2003).

[2] A. W. Klein et al. Stylized video cubes. Proc. the ACM SIGGRAPH Symposium on Comp. Animation, pp. 15-22, (2002).

[3] S. Fels, E. Lee, and K. Mase. Techniques for interactive cubism. In Proceedings of the ACM Conference on Multimedia, pages 368–370, October (2000).

[4] Camille Utterback. Liquid Time Series (2000).

[5] Toshio Iwai, Another Time, Another Space (1993).

[6] Z. B. Simpson. Novel Infrared Touch-Screen Technology and Associated Artwork. Emerging Technologies, SIGGRAPH 2004, Los Angeles, (2004).

[7] J. Angesleva and R. Cooper. Last Clock. Emerging Technologies, SIGGRAPH 2004, Los Angeles, August (2004).

[8] T. Komuro, I. Ishii, M. Ishikawa and A. Yoshida. A Digital Vision Chip Specialized for High-speed Target Tracking. IEEE Transactions on Electron Devices, Vol. 50, No. 1, pp. 191-199 (2003).

[9] A. Cassinelli, S. Perrin and M. Ishikawa. Markerless Laser-based Tracking for Real-Time 3D Gesture Acquisition. SIGGRAPH 2004, Los Angeles, (2004).

[10] Y. Ugai, A. Namiki and M. Ishikawa. 3D Shape Recognition Using High Speed Vision and Active Lighting Device. The 4th SICE System Integration Division Annual Conference (SI2003) (Tokyo, Japan, 2003.12.19) / Proceedings, pp. 113-114, Dec. (2003).

[11] K. Kamiyama, K. Vlack, T. Mizota, H. Kajimoto, N. Kawakami and S. Tachi. GelForce. Emerging Technologies, SIGGRAPH 2004, Los Angeles, (2004).

[12] H. Ishii, C Ratti, B. Piper, Y. Wang, A. Biderman and E. Ben-Joseph. Bringing clay and sand into digital design - continuous tangible user interfaces. BT Technology Journal, Vol. 22, No. 4, pp. 287-299, October (2004).

[13] Levin, Golan. An Informal Catalogue of Slit-Scan Video Artworks, 2005. Web: <http://www.flong.com/writings/lists/list_slit_scan.html>.

[14] A. Cassinelli and M. Ishikawa. Khronos Projector. Emerging Technologies, SIGGRAPH 2005, Los Angeles (2005). One-page abstract [PDF-0.5MB]. Video demo [WMV-40MB]. PowerPoint presentation (with video) [PPT-10MB].

[15] Toshio Iwai with NHK Science & Technical Research and Laboratories, Morphovision: Distorted House (2005)

[16] H. Tanahashi (from Post Theater - a theater company without theater and without a company), SkinSites: a site-specific multi-media dance performance series (2005)

[17] See for instance Reflectance Map: Shape from Shading, in Robot Vision, Berthold Klaus Paul Horn, The MIT Press, McGraw-Hill, pp. 243-277 (Chapter 11).

[18] A. Cassinelli, Time Delayed Cinema [PPT-28MB], invited talk at Microwave/Animatronica New Media Art Festival, Hong Kong 4-15 Nov. (2006)




Page last updated by A. Cassinelli, 30.01.2006