“Thousands of avid eyes bent over the peepholes of the stereoscope as though over the skylights of the infinite.” –Baudelaire 1869
Since the invention of the stereoscope, people have wanted to create binocular imagery without the encumbrance of pressing one’s face into a viewing device or wearing special glasses. Many solutions have been put forward, but to date none has provided an illusion of depth as convincing as that of natural binocular vision. The quest for comfortable, unencumbered 3D continues to drive innovation; the following technique, however, does not appear to meet either criterion, and is a great spoof of the notion that 3D glasses are a seriously restrictive encumbrance.
One of the most promising approaches to autostereoscopic presentation currently under development is based on the concept of integral imaging, first proposed in 1908 by Gabriel Lippmann. The integral imaging approach places an array of spherical microlenses (similar to a lenticular cylinder array) in front of the image, so that the portion of the image seen through each lens changes with viewing angle. Thus, rather than displaying a 3D image that only works in the horizontal direction, it reproduces a 4D light field, creating stereo images that exhibit full parallax in whichever direction the viewer moves. Research on versions of this approach for electronic image display is underway; however, much work remains before it can be commercialized for flat panel displays (Nvidia has demonstrated some promising work on near-eye light field displays for lightweight HMDs).
In 2011 Japan’s NHK demonstrated research on a low-resolution version of a spherical-lens-matrix integral imaging TV. This glasses-free approach to stereoscopic display was the result of an initiative designed to explore the next technical development to follow the successful marketing of ultra-high-resolution television.
The illustrations below, taken from Michael Halle’s paper Autostereoscopic Displays and Computer Graphics, provide an excellent comparison of three essential autostereoscopic display technologies.
The lenticular and parallax barrier approaches to both still and moving images have been successfully commercialized. Most people interested in motion stereoscopy are familiar with the lenticular screen of devices such as the Fuji FinePix W3 camera display or the parallax barrier screen of the Nintendo 3DS handheld gaming platform. The lenticular LCD screen of the W3 employs a vertical grid composed of alternating columns sliced from the left and right images of a stereo pair. A corresponding grid of cylindrical lenses is aligned on top of the image so that the viewer’s left eye sees the columns from the left image and the right eye sees the columns from the right image. The parallax barrier screen of the 3DS works in a similar way; however, instead of the left and right images being separated by cylindrical lenses, they are separated by a mask composed of vertical slits. The diagrams below provide two forms of visual explanation of the binocular stereoscopy process (an extended approach employs more than two images in a multiview array).
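The column interleaving described above can be sketched in a few lines. This is an illustrative sketch, not the W3’s or 3DS’s actual firmware, and it assumes the simplest possible layout: two equal-sized views and a one-pixel-column lens/slit pitch.

```python
import numpy as np

def interleave_stereo_columns(left, right):
    """Interleave a left/right image pair column by column, as a
    two-view lenticular or parallax-barrier screen would present it.

    left, right: H x W (or H x W x 3) arrays of equal shape.
    Even pixel columns carry the left view, odd columns the right view;
    which eye actually sees which columns depends on the physical
    alignment of the lenses or slits on a real screen.
    """
    if left.shape != right.shape:
        raise ValueError("left and right images must have the same shape")
    out = left.copy()
    out[:, 1::2] = right[:, 1::2]  # odd columns replaced by right-eye columns
    return out

# Tiny demonstration with 4x4 single-channel "images"
L = np.zeros((4, 4), dtype=np.uint8)      # left view: all 0
R = np.full((4, 4), 255, dtype=np.uint8)  # right view: all 255
print(interleave_stereo_columns(L, R)[0])  # alternates 0 and 255 across the row
```

The same slicing pattern, applied with more source images and a wider stride, yields the multiview interleaving mentioned in the parenthetical above.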
The image below is from Walter Funk’s comprehensive paper, History of autostereoscopic cinema, which covers the topic from the beginnings of autostereoscopy in the 1800s through the development of motion capability and its subsequent evolution. Russian autostereoscopic cinema has a long history spanning several decades, peaking with the creation of numerous public stereokino theaters. Here we see the 1941 installation and calibration of a wire radial-raster and reflective screen system. The raster lines converge with the screen’s plane and viewing plane.
Tom Peterka’s 2007 Ph.D. dissertation, Dynallax: Dynamic parallax barrier autostereoscopic display, expands upon the development of static parallax barrier displays through the use of an LCD-generated parallax barrier. The LCD barrier can be dynamically repositioned in real time using head tracking, shifting the view of the interleaved left and right image segments along with it, so that the spectator is always in the ideal position to perceive the stereoscopic image.
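The core geometry of such a tracked barrier can be sketched with similar triangles. The function and numbers below are an illustrative assumption, not code or figures from Peterka’s dissertation: if the barrier sits a small gap g in front of the pixel plane and the viewer is at distance D, the point where the eye-to-pixel sight line crosses the barrier plane moves by only g/D of any head displacement, so the slits need to translate by just that fraction.

```python
def barrier_shift(head_x_mm, viewing_dist_mm, barrier_gap_mm):
    """Lateral shift (mm) to apply to an LCD parallax barrier so a
    tracked viewer stays aligned with the interleaved image columns.

    Simple similar-triangles sketch: a ray from the eye at lateral
    position head_x to a fixed pixel crosses the barrier plane at
    head_x * (g / D) plus a constant, so the slit pattern translates
    by g/D of the head displacement.
    """
    return head_x_mm * barrier_gap_mm / viewing_dist_mm

# viewer 600 mm away, barrier 6 mm in front of the pixel plane:
# a 50 mm head movement calls for only a 0.5 mm barrier shift
print(barrier_shift(50.0, 600.0, 6.0))  # -> 0.5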
Zecotek Display Systems is working to create high-resolution autostereoscopic multiview display systems based on “time-sequencing” rather than the more commonly applied multiview “space-sharing” technique. Space-sharing incorporates advanced lenticular design elements such as slanting the grid of cylindrical lenses relative to the RGB pixel structure of a video display in order to provide a uniform and regular distribution of pixels across the image (thus improving horizontal resolution relative to vertical resolution).
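A slanted-lenticular layout of this kind is commonly expressed as a subpixel-to-view assignment, in the spirit of van Berkel’s well-known mapping. The sketch below is a simplified, illustrative version: the parameter names, default slant, and sign convention are assumptions, and real panels calibrate these values per device.

```python
def view_index(subpix_col, row, n_views, slant_subpix_per_row=1.0, offset=0.0):
    """Map one display subpixel to one of n_views perspective images
    on a slanted-lenticular multiview panel.

    subpix_col: subpixel column index (each RGB pixel = 3 subpixels)
    row:        pixel row index
    slant_subpix_per_row: how far (in subpixels) the lenticule axis
        shifts per row; the slant spreads view sampling across the RGB
        structure so the resolution loss is shared between the
        horizontal and vertical axes rather than borne horizontally.
    """
    return int((subpix_col + row * slant_subpix_per_row + offset) % n_views)

# with 9 views and a one-subpixel-per-row slant, each successive row
# advances the view assignment by one subpixel:
print([view_index(c, 0, 9) for c in range(5)])  # [0, 1, 2, 3, 4]
print([view_index(c, 1, 9) for c in range(5)])  # [1, 2, 3, 4, 5]
```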
“Space-sharing” multiview displays must increase the number of pixels in the display to expand the number of views shown. Zecotek’s “time-sequencing” 3D display technology instead requires an increase in image frame rate to expand the number of views shown. That approach is described in the following material from their website:
Unlike systems based on “space-sharing” (see this link for more details on space-shared 3D), which must share pixels and therefore their native resolution among views, Zecotek’s system uses a proprietary “time-sequencing” technology combined with a patented dynamic system of multiple lenses. This results in a display with more than 90 views at full native (base) resolution in each perspective. It is this large number of views, and the extremely narrow angle of each view, that gives viewers complete freedom of position within large viewing zones. Zecotek’s prototype now offers a ~50° continuous viewing angle.
Zecotek’s “time-sequencing” (see this link for more details) approach also means that the HD resolution does not need to be divided between views. Each view has exactly the same HD resolution as the base screen. This is because it is display time, as opposed to space (see Space-Sharing Auto-Stereoscopic Displays), that is shared. We therefore require a frame rate of approximately 2,000 Hz for about 40 views, which is readily available using existing and well-known DLP back-projection elements.
Back-projection 2D monitors and TVs have been available for many years and deliver high-resolution, high-quality images. Their only trade-off is that projection units have more physical depth in their form factor than flat panel displays. (This depth can also be significantly reduced to an almost flat-panel form factor with the use of special optics.)
Zecotek’s 3D multiple-view auto-stereoscopic display with its “time-sequencing” approach can provide the most natural 3D experience, as it allows for a freedom of head movement similar to that required for seeing objects in the real world.
The marketing limitation of DLP-based back-projection 3D TV for the consumer market is its form factor: DLPs are not perceived as flat panels (even though, with optical modifications, their form factors can approach flat-panel depths). While Zecotek’s technology is fully adaptable to a flat panel configuration, this will require flat displays with frame rates exceeding 2,000 Hz. Such panel speeds are not yet available (there has been no demand to date), but many industry players have them in development for other applications. With rapid advances in OLEDs (organic light-emitting diodes) yielding frame rates over 2,000 Hz, and as manufacturing costs of these panels fall, Zecotek’s patented technology will yield a flat panel configuration highly suitable for consumer markets well in advance of space-sharing systems that require greater pixel density, particularly since pixel density is directly related to production yield and, therefore, panel cost.
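To make the frame-rate arithmetic in the quoted material concrete: each of the N views must itself be refreshed at a flicker-free rate, so the panel’s total frame rate scales linearly with the view count. A minimal sketch, assuming a 50 Hz per-view refresh (an illustrative figure consistent with the ~2,000 Hz for ~40 views cited above, not a published Zecotek spec):

```python
def required_frame_rate(n_views, per_view_hz=50):
    """Total panel frame rate needed by a time-sequenced multiview
    display: n_views perspectives, each shown in its own time slot,
    each still refreshed at per_view_hz."""
    return n_views * per_view_hz

print(required_frame_rate(40))  # 40 views x 50 Hz -> 2000 Hz
print(required_frame_rate(90))  # 90 views would need 4500 Hz
```

This linear scaling is the time-domain counterpart of space-sharing’s pixel-count scaling: doubling the views doubles the required frame rate instead of halving the per-view resolution.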
Not surprisingly, autostereoscopic movies are an outgrowth of earlier work with still imagery. As with motion pictures, autostereoscopic still imaging began with the capturing and display of left and right binocular pairs and progressed from the two images of a stereoscopic pair to as many as sixty-four in-between images in a multiview display. A larger number of multiplexed images enhances the perception of motion parallax as the spectator moves their head and can allow them to peer around the objects represented in the image.
University of Western Australia Professor Paul Bourke has prepared a comprehensively illustrated slide show covering key aspects of multiplexed parallax barrier and lenticular imaging, such as those shown in the following diagram from the presentation.
In 1983 Ellen Sandor devised a form of multiview parallax barrier print she termed ‘PHSCologram,’ an acronym for Photography, Holography, Sculpture and Computer Graphics. Sandor and her colleagues have developed and received several patents for the process, the first of which, Computer-generated autostereography method and apparatus, was filed in 1989. Recent multiview work includes 64 separate viewpoint slices and nears the size limit of what can be done without creating wavefront interference diffraction artifacts.
Bonny Lhotka’s presentation, The Added Dimension – Thinking and Visualizing in 3D, recorded at the 2009 6Sight Conference provides a comprehensive view of her experience as an artist working with lenticular imaging.
In the summer of 2011 I was visiting my friend Rashid Ghassempouri in Paris. One day we were walking along the street of Le Viaduc des Arts on our way home and noticed a set of life-size B&W lenticular portraits in the window of one of the galleries built into the viaduct archways. The gallery was closed, but I was so fascinated by my first encounter with this form of lenticular photography that when I had a chance to walk by the place again I did so. This time I found the gallery open. The photographer, Henri Clément, was present and gave me a personal tour of his work. We spoke at length and he explained aspects of his technical and aesthetic process. As I was leaving he gave me a few small prints to bring home. I’m happy to be able to share these prints in the Explorations in Stereoscopic Imaging class each year and to speak of the additional power experienced with the one-to-one ratio of the life-size portraits. In Jonathan Tustain’s interview for 3D Focus TV you can get some sense of the techniques that Henri had explained to me. One interesting aspect not directly addressed in the interview is that while people can indeed hold relatively still for a second and a half, they in fact move slightly. This slight movement results in the effect of a “moving hold,” similar to that used in animation as a method of creating an enhanced sense of presence in an otherwise static pose.
And finally, we have these brilliant records that capture, in microcosm, the charm of China’s reform and opening-up era.
Since the 1960s the idea of the hologram has held an important place in the popular imagination as the epitome of 3D imaging technology. This has led to some misunderstandings as to what a hologram actually is, and to the false attribution of the term to any form of apparent 3D projection. A true hologram is a cameraless recording of wavefront interference patterns that makes use of wavefront reconstruction to display a 3D image. Recent developments in the technology used to create the illusion known as Pepper’s Ghost have been promoted as holographic (even though the floating image effect is based on a 2D video projection rather than the live 3D actors used in the original Pepper’s Ghost). The 2D illusion is very convincing from a distance, even though it lacks the actual parallax cues of a true 3D image. The following CNN interview with Uwe Maass provides a look into his inventive work developing the patented digital Pepper’s Ghost process marketed as Musion Eyeliner.
From the Wikipedia entry on Holography:
The Hungarian–British physicist Dennis Gabor (in Hungarian: Gábor Dénes) was awarded the Nobel Prize in Physics in 1971 “for his invention and development of the holographic method”. His work, done in the late 1940s, built on pioneering work in the field of X-ray microscopy by other scientists, including Mieczysław Wolfke in 1920 and W. L. Bragg in 1939. The discovery was an unexpected result of research into improving electron microscopes at the British Thomson-Houston (BTH) Company in Rugby, England, and the company filed a patent in December 1947 (patent GB685286). The technique as originally invented is still used in electron microscopy, where it is known as electron holography, but optical holography did not really advance until the development of the laser in 1960. The word holography comes from the Greek words ὅλος (hólos; “whole”) and γραφή (graphḗ; “writing” or “drawing”).
Gabor’s term for the product of his invention, hologram, was taken from the Greek words ὅλος (hólos; “whole”) and γράμμα (grámma; “message” or “letter”). While a hologram is recorded onto a photochemical film similar in some ways to that used in standard photography, a hologram is not a direct recording of the light intensities of a scene focused onto the film. Instead, it is a recording of the interference pattern formed when coherent light from a laser is split into two beams, with one beam (the reference) shining directly upon the film while the other (the illumination) is simultaneously reflected onto the film from the surface of the object being recorded. The overlapping wavefronts of the two beams generate an interference pattern through the additive and subtractive process that occurs as the energy of the wavefronts is reinforced or cancelled. This interference pattern is encoded into the film emulsion. The pattern varies with the differing arrival times (phase) of the wavefronts, which depend on the distances the two beams travel on their way to the film. Once the film is developed and illuminated, the phase of the original wavefronts is reconstructed through diffraction and an image of the original object can be viewed.
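The additive and subtractive wavefront process described above can be written compactly. With complex amplitudes $R$ (reference beam) and $O$ (object beam) at the film plane, the recorded intensity is:

```latex
I = |R + O|^{2} = |R|^{2} + |O|^{2} + R^{*}O + RO^{*}
```

The cross terms $R^{*}O$ and $RO^{*}$ are what encode the relative phase of the two wavefronts. When the developed film, whose transmittance is proportional to $I$, is illuminated by the reference beam alone, the transmitted field contains a term $|R|^{2}O$: a scaled replica of the original object wavefront, which diffraction delivers to the viewer as the reconstructed 3D image.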
…more to come.
Jason Geng’s paper, Three-dimensional display technologies, provides a comprehensive overview of a much wider range of approaches to stereoscopic viewing.