Viewpoint Dependent Imaging and VR

Motion parallax refers to the retinal image motion generated by movement of an observer’s viewpoint relative to stationary objects at different distances; the objects will be seen in depth and/or will appear to move, depending on fixation distance and on the velocities of the retinal image motion and of the observer’s moving viewpoint.


Motion Parallax

Since Wheatstone created the first binocular viewing system for stereoscopic imaging in 1838, an important depth cue has been missing from still image representations.  This missing cue, motion parallax, is nearly as important as binocular disparity, and in unmediated perception the two work in tandem to create a stronger sense of relative depth.  Motion parallax cues stem from the differential motion of objects at varying distances in a scene as perceived by an observer with a moving point of view.  Nearby objects shift a greater distance than farther objects and can also occlude the full view of the more distant objects.  2D movie cinematographers often make use of a moving camera to create a sense of greater presence via the depth cues provided by motion parallax.  In 2007 Johnny Lee devised a head tracking solution using the Nintendo Wii Remote that created motion parallax and perspective projection shift in a non-stereoscopic 2D image on a computer screen.
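The geometry behind this cue can be sketched in a few lines of code.  The following is a minimal illustration of my own (not drawn from any of the systems described here): a pinhole projection in which, for a given sideways shift of the viewpoint, the image shift of a point is inversely proportional to its distance.

    # A pinhole-projection sketch of motion parallax: for the same sideways
    # eye movement, nearer points shift farther across the image plane.
    def screen_x(world_x, depth, eye_x, focal_length=1.0):
        """Horizontal image coordinate of a point seen from an eye at eye_x."""
        return focal_length * (world_x - eye_x) / depth

    for depth in (1.0, 2.0, 10.0):
        shift = screen_x(0.0, depth, eye_x=0.1) - screen_x(0.0, depth, eye_x=0.0)
        print(f"object at {depth:4.1f} m -> image shift {shift:+.3f}")
    # Output shows shifts of -0.100, -0.050, and -0.010: the nearest object
    # sweeps across the image ten times faster than one ten times farther away.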

Walt Disney sponsored the development of a multi-plane camera system to create depth cues via the differential motion of layered 2D image planes.  This simulation of one aspect of motion parallax added a greater sense of depth to his studio’s 2D animated film productions.


Interactive Motion Parallax

When a stereoscopic camera is put into motion to add the depth cues made possible by motion parallax, an important aspect of the reproduced 3D image is still missing.  This is the interactive motion parallax cue made possible by a spectator’s active head movement.  In non-mediated reality the motion of an observer’s head allows for peering around objects as the point of view is shifted.

The limitation of a non-interactive stereoscopic point of view is most readily apparent when the camera is fixed and the spectator shifts their point of view from side to side.  Objects in the stereoscopic image appear to stretch as the mind tries to compensate for the lack of motion parallax that would occur in an actual 3D scene.

That problem can be resolved by dynamically generating a stereoscopic image which is linked to the motion of the spectator’s point of view.  This linkage can be accomplished by tracking an observer’s point of view and using that information to shift the point of view of live video cameras (via a robotic motion control system) or the point of view of a computer generated image (by shifting the synthetic cameras).  Scott Fisher exhaustively researched these techniques for his 1981 MIT Master’s Thesis, Viewpoint Dependent Imaging: an Interactive Stereoscopic Display.
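For the computer generated case, the heart of such a system is a simple per-frame loop: read the tracked head position, offset it by half the interocular distance for each eye, and render the scene twice.  The sketch below is my own shorthand for that loop, not Fisher’s implementation; the tracker and renderer objects are hypothetical placeholders.

    # Per-frame loop of a viewpoint dependent stereoscopic display.
    # `tracker` and `renderer` are hypothetical stand-ins for real devices.
    IPD = 0.064  # interocular distance in meters (a typical adult value)

    def render_frame(tracker, renderer):
        x, y, z = tracker.read_head_position()  # tracked viewpoint in display space
        renderer.draw(eye=(x - IPD / 2, y, z), target="left")   # left-eye view
        renderer.draw(eye=(x + IPD / 2, y, z), target="right")  # right-eye view
        # Because the camera positions follow the head, objects exhibit
        # interactive motion parallax: the spectator can peer around them.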

Credit for creating the first viewpoint dependent computer imaging system is given to computer graphics research pioneer Ivan Sutherland.  Sutherland’s viewer, which came to be called The Sword of Damocles, was mounted on the spectator’s head in a manner that provided data on the spectator’s changing point of view to a computer that then calculated and recalculated transformations to the stereoscopic image pair supplied to the displays.  The resultant coupling of viewpoint to displaypoint created a marked sense of the spectator’s immersion within the space of the 3D image.


Virtual Reality

Computer graphics pioneer Wayne Carlson has written a Critical History of Computer Graphics and Animation and posted it to his web site at Ohio State University.  Section 17 covers his comprehensive take on the history of Virtual Reality and includes a link to Ivan Sutherland’s seminal 1968 paper on his immersive viewing device, A head-mounted three dimensional display.

I first heard of Sutherland’s HMD during a presentation break at the 1969 Biofeedback Research Society Conference.  The excited researcher who described it to me spoke of its potential to allow the viewer to fly through space at molecular scale and to eventually be able to manipulate representations of molecular structures.  I gained a deeper awareness of the field of VR research with the publication of the October 1987 edition of Scientific American and a subsequent presentation by Scott Fisher and Brenda Laurel at an event at the American Film Institute.

These brilliant researchers went on to found the innovative VR company Telepresence Research in 1990, which, along with Jaron Lanier’s seminal 1984 company VPL Research, led the efforts to make VR systems more widely available.


My Initial Work in VR

There was an explosion of research into viewpoint dependent imaging systems during the late eighties and early nineties.  Jaron Lanier popularized the term Virtual Reality (VR) for this type of immersive computer graphics, and his innovative company VPL Research produced the first commercially available turnkey systems.  A spreading awareness of the potential of VR fired the imagination of the entire culture, and a wide range of approaches to the new medium began branching out in many directions.  The emerging field spread along a broad spectrum from medical research to entertainment applications.

I was invited to present a paper at The Screen: A Dialogue of Cultures conference held in Moscow in January of 1991.  I chose to present My Work in Absolute Animation and Some Ideas About Extending that Work into the New Medium of Virtual Reality.  My growing interest in the medium of VR led me to attend conferences such as the Stanford Research Institute’s Virtual Worlds Symposium held in June of 1991, where I had the chance to meet researchers breaking new ground in the field and learn of their work.  During a break between sessions at the SRI symposium one of these researchers, Warren Robinett, listened intently to my passionate ideas about gesture based absolute animation.

It later turned out that Warren had become involved as an advisor to the Art and Virtual Environments Project at the Banff Centre for the Arts.  When a call for artists to submit project proposals went out, I felt that my ideas for absolute animation were too far ahead of the then current state of the art to be effectively realized.  I came up with an idea for what seemed a more feasible project and consulted with my friend and colleague Stewart Dickson.  I wanted Stewart to work with me in realizing a project based upon his interest in the mathematics of minimal surfaces and their potential as sculptural objects.  This project proposal, A Topological Slide, was accepted, and Stewart and I were able to realize this first step into working with VR.

Once we had the first iteration of the project up and running we noticed a distinct problem with axis flipping when traversing the polar areas of the parametrized minimal surfaces being explored.  Writing in the June 1994 issue of Wired magazine about his experience riding an early prototype of the Topological Slide, John K. Bates describes crossing an unstable polar region and nearly tumbling into a “wipe out”.  Warren Robinett had come up to Banff to see how things were going, and when we demonstrated the problem we were experiencing he told us that it was most likely due to the well known phenomenon of gimbal lock in inertial navigation systems.

Warren suggested that we not base the transformational matrices on classic three dimensional Euler angles, but instead look into a paper his former roommate Ken Shoemake had authored, Animating Rotation with Quaternion Curves, in which he describes the application of four dimensional quaternions to the problem of accurately calculating rotations in Cartesian coordinate space.  As Warren had surmised, once the code for the slide was reworked to implement the quaternion solution the flipping problem ceased.  As I write this in 2014, quaternions have been widely adopted in 3D CG animation programs and real-time game engines.
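To make the distinction concrete, here is a small sketch of my own (not the Topological Slide code) of the approach Shoemake’s paper describes.  With Euler angles, pitching the viewpoint to 90 degrees collapses the yaw and roll axes onto one another, so orientation flips unpredictably at the poles; unit quaternions compose rotations with no such singular orientation.

    # Composing orientations with unit quaternions (w, x, y, z) avoids the
    # gimbal lock that Euler-angle matrices suffer at the poles.
    import math

    def quat_from_axis_angle(axis, angle):
        """Unit quaternion for a rotation of `angle` radians about a unit `axis`."""
        s = math.sin(angle / 2)
        return (math.cos(angle / 2), axis[0] * s, axis[1] * s, axis[2] * s)

    def quat_multiply(a, b):
        """Hamilton product: the rotation b followed by the rotation a."""
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (aw*bw - ax*bx - ay*by - az*bz,
                aw*bx + ax*bw + ay*bz - az*by,
                aw*by - ax*bz + ay*bw + az*bx,
                aw*bz + ax*by - ay*bx + az*bw)

    # Pitch the viewpoint straight up to the pole, then yaw by 10 degrees:
    # the composed quaternion remains a well-defined orientation, so the
    # viewpoint can cross the polar region without flipping.
    pitch = quat_from_axis_angle((1, 0, 0), math.pi / 2)
    yaw = quat_from_axis_angle((0, 1, 0), math.radians(10))
    print(quat_multiply(yaw, pitch))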


Expanding Perception

Warren Robinett has explored many aspects of viewpoint dependent imaging and the potential of VR systems to expand our perception of the world (in an unrelated achievement he is also credited with creating the first video game Easter egg).  In his essay, Electronic Expansion of Human Perception, written for the fall 1991 issue of the Whole Earth Review, he writes:

“The true potential of the Head Mounted Display, the piece of gear that has enabled Virtual Reality, is not that it allows you to enter into a fantasy world, but that it allows you new ways of perceiving the real world.”

Since childhood I had been curious about what it would be like to extend our senses into ranges normally not available.  For instance, what would radar beams sweeping the sky look like if we could shift their wavelengths into the visible portion of the electromagnetic spectrum?  As a first year student at CalArts in 1970 I developed plans for an installation that involved placing an ultrasonic transducer (ultrasound microphone) in a gallery space, heterodyning the ultrasound waves into the sonic range, and filling the space with those now audible sounds via a sonic transducer (loudspeaker).  I had researched the ultrasonic equipment I would need to rent for the installation; however, there was a chronic lack of gallery space at Villa Cabrini where I could install the piece, and so the project was not realized.
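Heterodyning itself is straightforward: multiplying the ultrasonic signal by a local oscillator produces components at the sum and difference frequencies, and a low-pass filter keeps only the audible difference tone.  The sketch below is my own illustration of the principle, not the 1970 installation design; the frequencies are arbitrary examples.

    # Heterodyning: a 42 kHz ultrasonic tone mixed with a 40 kHz oscillator
    # yields components at 2 kHz (audible) and 82 kHz (removed by filtering).
    import math

    SAMPLE_RATE = 192_000   # Hz, high enough to represent the ultrasound
    F_SIGNAL = 42_000       # Hz, the inaudible input tone
    F_OSCILLATOR = 40_000   # Hz, the local oscillator

    def mixed_sample(n):
        t = n / SAMPLE_RATE
        signal = math.sin(2 * math.pi * F_SIGNAL * t)
        oscillator = math.sin(2 * math.pi * F_OSCILLATOR * t)
        # sin(a)*sin(b) = 0.5*cos(a-b) - 0.5*cos(a+b): the difference term
        # is the 2 kHz tone a low-pass filter would pass through.
        return signal * oscillator

    one_second = [mixed_sample(n) for n in range(SAMPLE_RATE)]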

When I shared my ideas about sensory extension with Warren he said he had been thinking along the same lines and told me about his recent paper in which he presented his ideas on revealing the imperceptible.  Warren also shared with me his pioneering work on the Nanomanipulator project at the University of North Carolina.  This project expanded upon one of the many ideas he introduced in Electronic Expansion of Human Perception.  The Nanomanipulator employed the force feedback capabilities of the Argonne Remote Manipulator (ARM) linked to a Scanning Tunneling Microscope (STM).  The user could control the motion of the STM tip via the ARM in order to feel the surface contours of the scanned object as displayed in an HMD.  In a later development the feedback mechanism of the STM could be decoupled and the STM tip used to manipulate the molecular structure of the surface.
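The coupling at the heart of that system can be sketched as a simple haptic loop: the hand position drives the tip, and the scanned surface height pushes back.  The sketch below is my own hypothetical rendering of the idea, not the UNC code; arm, stm, and surface_height are placeholder interfaces.

    # A hypothetical haptic loop in the spirit of the Nanomanipulator:
    # the user's hand positions the STM tip, and the scanned surface
    # height at that point is returned as an upward force on the hand.
    STIFFNESS = 50.0  # restoring force per unit of surface penetration

    def haptic_update(arm, stm, surface_height):
        x, y, z = arm.read_hand_position()    # where the user's hand is
        stm.move_tip(x, y)                    # slave the tip to the hand
        penetration = max(0.0, surface_height(x, y) - z)
        arm.apply_force(0.0, 0.0, STIFFNESS * penetration)  # feel the contour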

Embodiment

In my 1991 paper, My Work in Absolute Animation and Some Ideas About Extending that Work into the New Medium of Virtual Reality, I wrote:

“I envision the development of a bodysuit fitted with a fine mosaic of tiny hydraulic sacs which both register and stimulate pressure and temperature on the skin to provide for an extremely enhanced sense of tactile presence. With such a suit it would be possible to feel the cool splash of a shimmering swirl of intricately choreographed droplets, the soft breath of a warm breeze, the cool flow of slipping beneath the surface of an enveloping sea, or the warm touch of a loving companion.

Another interesting development of the bodysuit would be to mount it in a large gimbal ring similar to those used for astronaut training. The ring would have the capability of simulating free movement through space by shifting attitude relative to gravity (pitch, yaw, and roll) and applying acceleration and deceleration forces. The addition of force feedback in the form of small hydraulic pistons attached at the joints [of the participants limbs] could sense exerted force and apply resistance allowing for a sense of physical interaction with the virtual world. One could climb virtual stairs, swim through a virtual medium, or leap from and land upon a virtual surface.

The metamorphosis from one body to another –part by part or in whole– could be accomplished, enabling the transformation from a human form to that of other creatures. One could be a dolphin swimming through the sea, leap into the air and become an eagle soaring up into the clouds, then glide to a landing on an open plain, and transform into a member of a gamboling herd of wild horses.”

In 2014 Max Rheiner and his team at Zurich University of the Arts successfully developed Birdly, a VR system that empowers a participant to embody the form of a flying bird.

Birdly was one of nine VR pieces in the New Frontiers program of the 2015 Sundance Film Festival.  The availability of the Oculus Rift (and other inexpensive but high quality wide FOV HMD viewing devices) has stimulated an explosion of interest in VR and its intersection with the art of film.

 

A few more examples of viewpoint dependent imaging via VR can be found on my Motion Capture for Artists course page on Immersive VR:

https://michaelscroggins.wordpress.com/fe417-motion-capture-for-artists/videos/immersive-vr/


…more to come