©2013 Old City Publishing, Inc. Published by license under the OCP Science imprint, a member of the Old City Publishing Group
Display and Imaging, Vol. 1, pp. 73–95 Reprints available directly from the publisher Photocopying permitted by license only
3D Display Technology

Nam Kim*, Anh-Hoang Phan, Munkh-Uchral Erdenebat, Ashraful Alam, Ki-Chul Kwon, Mei-Lan Piao and Jeong-Hyeon Lee

School of Information and Communication Engineering, Chungbuk National University, 361-763, Korea

Received: April 24, 2013. Accepted: June 19, 2013.
In the last few decades, three-dimensional (3D) display has become a very promising field, with commercial products now reaching the mass market. 3D displays have become more compact, more comfortable, and cheaper, driven mainly by rapid developments in materials and computer technology. This paper reviews the major trends in 3D-display technology, covering stereoscopic display, integral imaging display, head-mounted display, volumetric display, and holographic display.

Keywords: Stereoscopic display, Light field display, 360 degree viewing angle, Integral-floating display, Holographic optical element, Real-time holography
Cave-wall paintings tell us stories from the dawn of humankind. Those people wanted to tell future generations about their spiritual life. That desire continued with paintings, sculptures, and literary works. In the 1820s, people gained a new way to tell their stories: photography. In that decade, Nicéphore Niépce took the first picture with a device that became the camera. Through developments in chemistry and mechanics in the 1870s, people could not only take a picture of a scene but also record movement by capturing a series of scenes. The new art was used not only for recording stories but also for entertainment. All of this was the beginning of the display era, in which a scene can be replayed as beautifully as nature itself. There are several review articles on 3D display technology [1-2]; however, in this paper we review recent trends in this area along with some of our own results.
*Corresponding author:
[email protected]
Figure 1 Stereoscopic 3D display types: (a) anaglyph [3], (b) LC shutter [4], and (c) polarized 3D system [3]
1. Stereoscopic display

Stereoscopic display, one of the oldest 3D display systems, was first proposed by C. Wheatstone in 1838. This type of display is based on stereopsis: the observer's left and right eyes receive different perspectives, separated by a stereoscopic device that the observer wears, and the two images are fused into a single 3D image in the visual cortex of the brain [5-6]. Since the 1920s, stereoscopic 3D display has been demonstrated on television and in movies. Nowadays, many new televisions, movies, and video games utilize stereoscopic 3D systems, especially as a new way to enjoy computer games [7].

Stereoscopic displays mostly use anaglyph, liquid crystal (LC) shutter, or polarization glasses [8-14], as illustrated in Figure 1. Anaglyph systems use two color-filtered images and glasses that usually combine red-cyan, red-green, green-magenta, or magenta-cyan filters. This approach presents 3D stereoscopic images simply and at very low cost. Its main disadvantages are that much color information is lost in the color-filtering glasses and that, if the brightness of the two images differs, the picture may flicker or appear unnatural [8-9].

The LC shutter system, also known as an active-shutter system, is a stereoscopic 3D display technique that sends the left image to the left eye while the right eye's view is blocked by the display device and glasses, then presents the right image to the right eye while the left eye's view is blocked [10-11]. This alternation continues throughout the display. This type of stereoscopic display reduces crosstalk and enables a full-color spectrum for displayed images. Several manufacturers, such as Samsung Electronics, Sony, Panasonic, Philips, and XpanD 3D, have released their own active-shutter systems onto the world market since 2008. However, the viewing glasses are costly, and the system requires very high refresh rates and precise synchronization between the viewing glasses and the display device.

A polarized 3D system, which sends polarized images to the corresponding eyes through polarization glasses, was developed by a few manufacturers and
Figure 2 Schematic diagram for the polarized light stereoscopic display.
now has a large share of the world 3D market. Since the early 2000s, modern LC 3D displays based on the polarized 3D system have been designed for the cinema and for medical applications [12-14]. One example of this type is the polarized light stereoscopic display proposed in 2004, shown in Figure 2. Since 2010, LG Electronics and Vizio have produced home-viewer technologies using horizontally polarized stripes covering the display. This display uses two thin-film-transistor liquid crystal display (TFT-LCD) monitors with polarizing filters and a beam splitter, which combines the two pictures for viewing through polarization glasses. The monitor faces are arranged at 90 degrees to one another. Compared with other stereoscopic techniques, the polarized 3D system provides full-color, flicker-free images and is generally inexpensive, since it does not require synchronization between devices. However, it cannot provide full resolution, because the polarized system presents the right- and left-view images simultaneously. The most significant disadvantages of stereoscopic displays are that they cannot present parallax images, the viewing glasses are uncomfortable, and crosstalk often occurs. Many viewers also complain of headaches and eye strain. The development of glasses-free (autostereoscopic) 3D systems is therefore an important direction in 3D-display technology.
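The anaglyph composition described earlier in this section is simple enough to sketch in code. The following is an illustrative NumPy snippet (not from the paper; the function name is ours), assuming two 8-bit RGB stereo images of equal size: the red channel is taken from the left view and the cyan (green plus blue) channels from the right view, so red-cyan glasses route each perspective to the matching eye.

```python
import numpy as np

def make_anaglyph(left, right):
    """Compose a red-cyan anaglyph from a stereo pair.

    left, right: HxWx3 uint8 RGB images of identical shape.
    The red channel comes from the left view; the green and blue
    (cyan) channels come from the right view, so the color filters
    in red-cyan glasses deliver each perspective to the correct eye.
    """
    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]    # red   <- left view
    anaglyph[..., 1] = right[..., 1]   # green <- right view
    anaglyph[..., 2] = right[..., 2]   # blue  <- right view
    return anaglyph
```

The brightness mismatch mentioned above shows up directly in this scheme: if the left and right exposures differ, the red channel disagrees with the cyan channels and the fused image appears to flicker.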
2. Integral imaging display

Integral imaging (II), first invented by Lippmann in 1908 [15], is used as an exciting new tool for autostereoscopic displays because of its
Figure 3 Basic structure of II: (a) pickup and (b) display or reconstruction [16]
attractive advantages. Integral imaging provides 3D images without the need for any supplementary glasses or tracking devices. In recent years, II has attracted a great deal of attention from many researchers because of its full-color, full-parallax moving 3D images, which distinguish II from other conventional 3D display technologies. In some respects, II is an interesting alternative to stereoscopy and holography.

Integral imaging comprises two processes: pickup (or acquisition) and display (or reconstruction). In the pickup process, the light rays emanating from a 3D object trace through a micro-lens array consisting of a number of identical lenses called elemental lenses. Each lens samples the color information of the 3D object from its respective region. All light-ray information passing through the micro-lens array is captured by an image sensor such as a charge-coupled device (CCD) camera. The micro-lenses produce a corresponding number of two-dimensional (2D) images called elemental images (EIs), which are recorded on the image sensor. In the display process, the recorded 2D EIs are displayed on a 2D display panel, and each EI is projected onto the corresponding elemental lens. All the rays retrace through the micro-lens array and produce a 3D image. Figure 3 illustrates the basic structure of the II system.

Despite a number of attractive advantages, II has some serious drawbacks, such as a small viewing angle, insufficient image resolution, and a limited range of image depth. Many researchers have been investigating ways to overcome these limitations, but II still suffers from them [17-24]. Recent advances in the II technique demonstrated by our research group are presented in this paper.

Baasantseren et al. demonstrated a viewing-angle enhanced integral imaging display using two elemental image masks [25]. In this system, the light rays emitted from the EIs are guided by two masks into the corresponding lenses.
By directing the rays from a wide elemental image region larger than the elemental lens pitch, more light rays are collected at
Figure 4 Structure of viewing angle–enhanced II display using two elemental image masks [25].
Figure 5 Experimental results of the proposed method at nine different positions [25].
the integrated image point than in a conventional system. Thus, the rays' diverging angle at each integrated image point increases as well. Hence, the viewing angle of the integrated 3D image is enhanced by increasing the angle of the diverging rays. By implementing this scheme, the authors confirmed that the size of the elemental image region is not limited to the size of the elemental lens and that the location of an EI is not confined to the corresponding elemental lens; the angle between the two extreme rays among those collected to reconstruct a 3D image point therefore increases. For this purpose, a new method using two additional spatial light modulators (SLMs) was proposed (Figure 4). Figure 5 confirms that the proposed method successfully creates 3D integrated images with a wide viewing angle. Moreover, the viewing angle is enhanced not only along the horizontal direction but also along the vertical direction. Experimentally, the viewing angle of the proposed system reaches 38 degrees in both the horizontal and vertical directions, while the viewing angle of a conventional II display is half that of the proposed method.

Alam et al. proposed another new approach to enhance the resolution of an integral-imaging display using time-multiplexed multi-directional elemental image projection [26]. This method uses a special lens made up of nine pieces of a single Fresnel lens, which are collected from different parts of the same lens. This composite lens is placed in front of the lens array such that it sends nine sets of directional elemental images to the lens array. These elemental images are overlapped on the lens array and produce nine point light sources
Figure 6 Principle of multi-directional elemental image projection scheme.
Figure 7 System configuration.
per elemental lens at different positions in the focal plane of the lens array. Nine sets of elemental images are projected by a high-speed digital micromirror device (DMD) and are tilted by a two-dimensional scanning mirror system, maintaining the time-multiplexing sequence for the nine pieces of the composite lens. In this method, the concentration of point light sources (PLSs) in the focal plane of the lens array is nine times higher than in a conventional method, i.e., the distance between two adjacent point light sources is three times smaller than in a conventional II display. Hence, the resolution of the 3D images is enhanced according to the concentration of point light sources in the focal plane produced by each elemental lens. Figure 6 illustrates the principle of the multi-directional elemental image projection scheme for resolution enhancement of an II display. A conventional II display with a PLS configuration uses only one plane illumination, for which the angle to the optical axis of the elemental lens is zero. If the angle of the plane illumination is not zero, the light is collected at a different distance from the lens axis in the focal plane, as shown in Figure 6. Figure 7 shows the system configuration of the multi-directional elemental image projection scheme. The distance between two adjacent PLSs (D_PLS) is given by
D_PLS = f tan(θ)    (1)
where θ is the projection angle and f is the focal length of the lens array. According to Eq. (1), the lateral position of a PLS depends on the illumination angle θ. Therefore, it is possible for one elemental lens to create a number of PLSs if it is illuminated with multiple plane illuminations of different directions, as shown in Figure 6. In this figure, three illuminations with different angular directions are focused by the lens array at three different distances from the axis of each elemental lens in the focal plane of the lens array. Therefore, one elemental lens creates three PLSs. For uniform spatial resolution, the distance between the PLSs should be equal to D_PLS = P_L / 3, where P_L is the size of the elemental lens. In this condition, the projection angle is determined to be

θ = arctan(P_L / (3f))    (2)
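As a quick numeric check of Eqs. (1) and (2), the projection angle and the resulting PLS spacing can be computed as below. The lens pitch and focal length used here are illustrative values, not the paper's experimental parameters.

```python
import math

def projection_angle(lens_pitch, focal_length, n_directions=3):
    """Projection angle from Eq. (2): theta = arctan(P_L / (n * f)),
    giving n uniformly spaced PLSs per lens pitch along one axis."""
    return math.atan(lens_pitch / (n_directions * focal_length))

def pls_spacing(focal_length, theta):
    """PLS offset in the focal plane from Eq. (1): D_PLS = f * tan(theta)."""
    return focal_length * math.tan(theta)

# Example: 1 mm elemental lens pitch, 3 mm focal length
theta = projection_angle(1.0, 3.0)   # ~0.1107 rad (~6.3 degrees)
spacing = pls_spacing(3.0, theta)    # P_L / 3 ~ 0.333 mm
```

By construction, `pls_spacing(f, projection_angle(P_L, f))` returns P_L/3, i.e., three PLSs per lens pitch along each axis and nine per elemental lens in 2D, matching the nine-fold concentration described above.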
Therefore, in the proposed method, the directional elemental images should be projected onto the lens array with the projection angle given by Eq. (2). At the same time, each set should be collimated to produce PLSs with uniform spacing. To create the elemental image projection with a two-dimensional (2D) scanning mirror system, the nine elemental image sets are projected onto the nine individual pieces of the composite lens, which is composed of nine pieces of a Fresnel lens. The composite lens is designed to bend the light beam passing through it by a specified angle predefined during its design. These nine elemental image sets are then overlapped on the lens array from nine different directions. Because of these nine directional elemental image sets, each elemental lens of the lens array produces nine PLSs in the focal plane of the
Figure 8 Production of increasing number of point light sources due to multi-directional elemental image projection.
Figure 9 (a) Point light sources distribution produced by nine sets of multi-directional elemental image projection and (b) point light sources produced by a single set of elemental image projection.
lens array. As a result, the number of PLSs increases according to the number of multi-directional elemental image sets. Figure 8 illustrates the production of an increasing number of PLSs from the multi-directional elemental image projection. From Figure 9, it is clear that the concentration of PLSs per elemental lens is nine times higher than that of a conventional II system, due to the nine directional elemental image projections. In an II display with a PLS configuration, a PLS defines one pixel of the reconstructed 3D image [21]. Hence, the resolution of an II display can be enhanced by increasing the number of PLSs per elemental lens using a time-multiplexed multi-directional elemental image projection scheme.

For the pickup of integral imaging, the object is captured by an image sensor through a lens array, and the captured image is formed as an array of images of the object, called elemental images. However, some issues are commonly associated with the pickup process, such as crosstalk, pseudoscopic-orthoscopic conversion, and nonparallel pickup directions. A typical solution for nonparallel pickup directions was proposed by Okano et al. [27]. In this method, they inserted a large-aperture field lens after the aerial elemental image plane, locating the imaging lens at the focal length of the field lens. For pseudoscopic-orthoscopic image conversion, numerous solutions have also been proposed. Arai et al. proposed a method using a gradient-index lens array, which makes it possible to rotate each elemental image by 180 degrees [28]. They later extended their method to use an overlaid multiple-lens array to solve this issue [29]. Recently, a 3D video system was proposed by Arai et al. [30]. The capturing system is composed of four complementary
Figure 10 Concept of a simplified pickup method using a depth camera [31].
metal-oxide semiconductor sensors, a capturing lens, and a lens array. Although they demonstrated a super hi-vision integral imaging video system, the capturing structure is so complex that the pickup process has to be operated precisely in a special environment. Implementing the optical systems in all the works mentioned above is complicated, and they need to be reduced in size. A computer-generated elemental image (CGEI) can be a solution. CGEI is a way to generate elemental images numerically without constructing a complicated optical system. Hence, some drawbacks caused by using a lens array in the pickup process can be overcome (e.g., crosstalk, lens aberrations, and unnecessary beams). Although CGEI has many advantages, one of the most important issues is how to accurately extract the depth information of moving objects. In practice, due to the difficulty of depth extraction, virtual and computer-graphics images are usually used as the objects of CGEI. Recent progress in pickup systems makes it possible to solve these issues. Li et al. proposed a simplified II pickup method using a depth camera [31]. The 3D depth information of real objects is extracted by the depth camera, and elemental images are then generated based on a depth pixel-mapping algorithm. The proposed method solves several disadvantages of using a lens array in the pickup process (such as aberration, crosstalk, and pseudoscopic-orthoscopic conversion), and no post-processing is required. The concept of the proposed simplified II pickup method using a depth camera is shown in Figure 10.
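The idea of generating elemental images computationally can be illustrated with a simple pinhole model: each elemental lens is treated as a virtual pinhole, and the depth camera's colored point cloud is projected through every pinhole onto the elemental-image plane. The sketch below is our own simplified illustration under that assumption, not Li et al.'s exact depth pixel-mapping algorithm [31]; all names and parameters are hypothetical.

```python
import numpy as np

def generate_elemental_images(points, colors, lens_pitch, gap,
                              n_lenses, ei_res):
    """Illustrative CGEI: project a colored point cloud through a
    virtual pinhole array (one pinhole per elemental lens) onto an
    elemental-image plane located at distance `gap` behind the array.

    points:  (N, 3) array of (x, y, z) in lens-array coordinates,
             with z > 0 in front of the array.
    colors:  (N, 3) RGB values per point.
    Returns an (n_lenses*ei_res, n_lenses*ei_res, 3) EI mosaic.
    """
    mosaic = np.zeros((n_lenses * ei_res, n_lenses * ei_res, 3))
    for i in range(n_lenses):          # lens row
        for j in range(n_lenses):      # lens column
            # pinhole center of elemental lens (i, j)
            cx = (j - (n_lenses - 1) / 2) * lens_pitch
            cy = (i - (n_lenses - 1) / 2) * lens_pitch
            # central projection through the pinhole onto the EI plane
            u = cx - gap * (points[:, 0] - cx) / points[:, 2]
            v = cy - gap * (points[:, 1] - cy) / points[:, 2]
            # map to pixel indices inside this elemental image
            px = ((u - cx) / lens_pitch + 0.5) * ei_res
            py = ((v - cy) / lens_pitch + 0.5) * ei_res
            ok = (px >= 0) & (px < ei_res) & (py >= 0) & (py < ei_res)
            mosaic[i * ei_res + py[ok].astype(int),
                   j * ei_res + px[ok].astype(int)] = colors[ok]
    return mosaic
```

Because the projection is purely numerical, the crosstalk and aberrations of a physical lens array never arise, which is precisely the advantage of CGEI noted above. A practical implementation would also handle occlusion (nearest point wins per pixel), which this sketch omits.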
3. Head mounted display

A head-mounted display (HMD) is a device worn on the head, or as part of a helmet, with a small optical display in front of each eye. HMDs have been
Figure 11 System for checking the HOE [32].
used in a wide range of virtual reality and augmented reality applications [33-35]. High-performance HMDs should be easy to operate and allow simple installation on any platform. An HMD consists of two optical parts, referred to as couple-in and couple-out. The couple-in part magnifies the micro-display image, and the couple-out part projects the magnified image to the observer. Because a range of optical elements must be embedded in a limited space in order to display a virtual image, reducing the size of the entire apparatus is an important issue. Holographic optical elements (HOEs) can be a solution.

A reflection-type HOE with high diffraction efficiency for a waveguide-type HMD using a photopolymer has been proposed. A photopolymer is a hologram recording material with high diffraction efficiency and low cost. Furthermore, it does not require any chemical or wet processing after recording the hologram. Therefore, photopolymers have been widely used in several research fields, including optical elements [36, 37], holographic storage [38], and holographic display [39]. The characteristics of the photopolymer are reported elsewhere [40], but that study was limited to monochromatic light. The previous HOE work was extended to full color in this analysis. The asymmetric geometry of the reflection gratings in the photopolymer was also analyzed. The unique feature of this method is the use of full-color HOEs in place of the couple-in and couple-out optics of conventional HMD systems. A laminated-structure method for fabricating the full-color HOE is presented. After recording the HOE, the efficiencies were measured. The actual efficiency of the display is the intensity ratio between the incident and
Figure 12 Experimental results using monochromatic HOEs for the HMD system: (a) input image (white light); (b) output image of the HMD system with HOEs recorded with a red laser; (c) output image with HOEs recorded with a green laser; (d) output image with HOEs recorded with a blue laser.
diffracted beams. The efficiencies for the red, green, and blue colors were 55%, 54%, and 50%, respectively. Figure 11 shows the structure of the system, where two HOEs with a size of 2 cm × 2 cm are used as the couple-in and couple-out optics, and the glass substrate acts as a waveguide. The object was displayed on an OLED micro-display, and the output images were captured by a CCD camera. Figure 12 presents the experimental results using monochromatic HOEs for the HMD system. The results show that the HOEs performed their function very well in monochromatic color. The green color showed the best quality (Figure 12(c)). For red and blue, white noise appears but is acceptable. Figure 13 shows the experimental results for the full-color HOEs, which were fabricated using the proposed two-layer composite structure; the experimental process was the same as for the monochromatic HOEs. From Figure 13(b), the output image quality was as good as expected. This means that HOEs fabricated using photopolymers can replace the conventional optical elements of a full-color waveguide-type HMD system.
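The efficiency figure quoted above is simply the ratio of diffracted to incident beam intensity. A trivial sketch follows; the power readings are hypothetical values chosen to reproduce the reported percentages, not actual measurements from the experiment.

```python
def diffraction_efficiency(incident_mw, diffracted_mw):
    """Diffraction efficiency as the diffracted/incident intensity ratio."""
    return diffracted_mw / incident_mw

# Hypothetical power-meter readings (mW) matching the reported
# RGB efficiencies of 55%, 54%, and 50%
readings = {"red": (10.0, 5.5), "green": (10.0, 5.4), "blue": (10.0, 5.0)}
for color, (p_in, p_out) in readings.items():
    print(color, f"{diffraction_efficiency(p_in, p_out):.0%}")
```

Running this prints 55%, 54%, and 50% for red, green, and blue, respectively.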
Figure 13 Experimental results of full-color HMD system, (a) input image, and (b) output image. [39]
4. Volumetric light field display

The volumetric display, one of the main branches of 3D display, was first proposed by Luzy and Dupuis in 1912. Since then, volumetric display approaches have been developed to present 3D volumes or animated images in a given space. Volumetric 3D displays directly present 3D images to multiple users without viewing glasses and provide a larger viewing angle than other autostereoscopic displays. Depicted in this way, 3D images mimic real-world objects closely. Static and swept-surface types of volumetric 3D display are mostly used in military, medical, and academic applications [41-43]. The static type is the most prevalent among volumetric displays. In the "off" mode, the active elements (or voxels) that constitute the volume are transparent, but in the "on" mode, the active elements are luminous and volume patterns appear within the space of the display. Static volumetric displays usually use a laser or gas for illumination. Swept-surface volumetric displays usually illuminate a rapidly moving display surface, such as a spinning screen. The swept volumetric 3D display relies on the observer's viewing distance or direction to create a 3D object from a series of different perspectives.

A recently presented rotating-mirror-screen type of volumetric light field display, which projects different 2D perspectives in the corresponding viewing directions, can display a 3D image with a 360-degree horizontal viewing angle [44-45]. Here, the observer can see the displayed 3D image, a combination of tailored 2D perspectives, from any horizontal direction around the display system. However, the displayed 3D image has only horizontal parallax (a vertically tracked camera is required for vertical parallax), and the angular light-ray sampling rate is given by the angular step of the rotating mirror, which is limited by the frame rate of the projector. Erdenebat et al.
combined this type of display with an integral-floating technique (integral imaging + floating image) in order to present the natural,
full-parallax 360-degree display with a higher angular light-ray sampling rate than conventional II and conventional light field displays [16]. A vertical viewing-angle enhancement method was also proposed recently [46]. Here, 2D elemental images are projected by a DMD onto the lens array through a collimating lens; the lens array reconstructs the 3D images from the projected 2D elemental images, and double floating lenses, configured as a 4-f structure, relay them to the rotating convex mirror. The vertically curved mirror transfers the 3D images to the corresponding viewing directions while a motor rotates the mirror in synchronization with the DMD projector. The other functions of the double floating lenses are to enhance the depth range of the displayed image and to reduce the viewing angle of the initial 3D images to match the mirror's angular resolution [47]. This allows the system to tailor the 3D images in the horizontal direction so that they match the angular step of the rotating mirror. Due to the tailoring of the different 3D perspectives, monocular and binocular depth cues can be achieved, and the 3D image in the rotating-mirror volume is more natural. Note that the vertical viewing angle also increases, by controlling the focal length of the rotating vertically curved mirror, as shown in Figure 14. An example of the demonstrated system is shown in Figure 15. By using a hexagonal lens array [48], the resolution of the displayed image can be improved, as shown in Figure 16.
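The angular sampling constraint mentioned above follows directly from the projector frame rate and the mirror rotation speed: the mirror advances 360 × (revolutions per second) / (frames per second) degrees between consecutive projected perspectives. A sketch with hypothetical numbers (not the paper's experimental parameters):

```python
def angular_step_deg(projector_fps, mirror_rps):
    """Angular light-ray sampling step of a rotating-mirror display:
    degrees the mirror turns between two consecutive projected frames."""
    return 360.0 * mirror_rps / projector_fps

def perspectives_per_rotation(projector_fps, mirror_rps):
    """Number of distinct 2D perspectives shown in one full rotation."""
    return projector_fps / mirror_rps

# Hypothetical example: a 6,000 fps DMD with the mirror spinning at 10 rev/s
step = angular_step_deg(6000, 10)        # 0.6 degrees between perspectives
views = perspectives_per_rotation(6000, 10)  # 600 views per rotation
```

This makes the trade-off explicit: a faster projector (or slower mirror) gives finer angular sampling, which is why high-speed DMD projection is essential for this class of display.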
Figure 14 Enhanced 360-degree integral-floating display: (a) schematic configuration and (b) vertical viewing angle enhancement in the simulation [47].
Figure 15 (a) 3D point cloud object, (b) elemental images for the corresponding viewing object and (c) 3D image on the rotating mirror [16].
Figure 16 Resolution-enhanced image using a hexagonal lens array in the volume of the rotating mirror: (a) elemental images for the corresponding viewing object illustrated in Figure 15(a); and (b) the displayed image on the rotating mirror [48].
5. Holographic display

The hologram was first introduced by Gabor as a wavefront reconstruction method (the on-axis hologram) in 1948. The 1960s saw dramatic improvements in both the concept and the technology, improvements that vastly extended both applicability and practicality. The hologram can reconstruct an object without any conflict of depth cues, because the holographic display provides all four visual mechanisms: binocular disparity, motion parallax, accommodation, and convergence (natural depth perception). Historically, holograms have been created on photosensitive materials that record permanent interference patterns between different waves. But researchers are actively exploring holographic materials, such as photorefractive polymers, to generate high-resolution images that can also be updated in minutes, or even milliseconds [49]. The development of devices such as cameras and SLMs makes it easier to record and display dynamic 3D content as digital holograms [50]. However, for a large hologram, a wide-angle view is limited by computing power and the size of the display panel. The amount of information, the time needed to process it, and the response time of the devices all need to improve before holographic displays are ready for mass production. To overcome these problems, research has produced many strategies. Hahn et al. presented a wide-viewing-angle dynamic holographic projection using a curved array of SLMs [51]. Some authors use eye tracking and window holograms to reduce the amount of information and the required resolution of the display panel [52-55]. Algorithms for fast hologram generation are also being developed, including parallel calculation, GPU calculation, lookup tables, and wavefront recording planes (WRPs) [56-58]. In this section, hologram generation in spherical coordinates for an object captured from multiple directions by a depth camera is also presented.
A method for fast computer-generated hologram (CGH) calculation using multiple WRPs is also introduced.

5.1. 360-degree computer-generated spherical hologram

We demonstrate 360-degree spherical hologram generation of 3D objects. A depth camera is placed in multiple directions to capture point cloud sets for the corresponding sides of the object [59]. The obtained point cloud sets are registered to build a single 3D model in a common coordinate system. Furthermore, the visible part for each hologram point is identified using a hidden point removal (HPR) method in order to handle the occlusion effect correctly [60]. Some regions of the spherical hologram reconstruction images are shown in Figure 17. The results demonstrate that a partial region of the spherical hologram synthesized by the proposed method can successfully reconstruct the corresponding perspective of the real object.
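Once the registered point cloud has been pruned by hidden-point removal, a hologram region can in principle be computed by superposing spherical waves from the remaining visible points. The following is a generic point-source CGH sketch of that step; it is our own simplified illustration on a planar sample grid, not the paper's spherical-coordinate formulation, and all names are ours.

```python
import numpy as np

def point_cloud_hologram(points, amps, holo_xy, wavelength):
    """Illustrative point-source CGH: superpose spherical waves from
    each visible object point at every hologram sample position.

    points:  (N, 3) visible object points (occluded points already
             removed, e.g. by a hidden-point-removal pass)
    amps:    (N,) real amplitudes of the points
    holo_xy: (M, 2) hologram sample positions in the z = 0 plane
    Returns the (M,) complex field on the hologram.
    """
    k = 2 * np.pi / wavelength
    holo = np.zeros(len(holo_xy), dtype=complex)
    for (x, y, z), a in zip(points, amps):
        # distance from this object point to every hologram sample
        r = np.sqrt((holo_xy[:, 0] - x) ** 2 +
                    (holo_xy[:, 1] - y) ** 2 + z ** 2)
        holo += a * np.exp(1j * k * r) / r   # spherical wave a*e^{ikr}/r
    return holo
```

The cost of this direct summation, N points times M samples, is exactly what motivates the WRP acceleration described in the next subsection.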
Figure 17 Reconstruction of the partial regions of the spherical hologram: (a) amplitude, (b) phase, and (c) reconstruction image of the selected hologram region [59].
5.2. Fast hologram generation using multiple WRPs

In this strategy for reducing the calculation time, wavefront recording planes are placed between the object plane and the CGH plane. For an object with a long depth, multiple WRPs can reduce the calculation time compared with the single WRP introduced earlier [56-58]. In the single-WRP method, a virtual plane is placed near the object. The optical field from the object is first calculated on this virtual plane; the field on the WRP is then Fresnel-transformed to the CGH plane using the fast Fourier transform (FFT). This reduces the calculation time because each object point is calculated not over the whole CGH plane but only on the WRP plane. Since the WRP is close to the object, the active area of one object point is small (instead of covering the entire CGH plane). The calculation time can be reduced even further if every point is processed in parallel. In the multiple-WRP method, several virtual planes are placed close to or inside the object. Each object point is then even closer to its WRP, so the calculation time is reduced further. However, this method needs several FFTs to convert the optical fields from the WRPs to the CGH plane. The problem of matching the pixel pitch in the CGH plane is solved by assigning a virtual size to each WRP. The object used in this paper was captured from multiple directions by a depth camera and has 14K points, as shown in Figure 18. At this stage, the numerical experiment was performed on a CPU. The calculation time is shown in Figure 19: multiple WRPs reduce the time compared with a single WRP.
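The multiple-WRP pipeline described above can be sketched as follows. This is an illustrative NumPy implementation under simplifying assumptions: each point writes a full-plane spherical wave onto its WRP (real implementations restrict each point to a small support window, which is where the speed-up comes from), and a Fresnel transfer-function propagator carries each WRP field to the CGH plane. All function names are ours.

```python
import numpy as np

def wrp_to_cgh(wrp_field, pitch, distance, wavelength):
    """Propagate a WRP field to the CGH plane with the single-FFT
    Fresnel transfer-function method (constant phase factor omitted)."""
    n = wrp_field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel transfer function H(fx, fy) = exp(-i*pi*lambda*z*(fx^2 + fy^2))
    H = np.exp(-1j * np.pi * wavelength * distance * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(wrp_field) * H)

def cgh_multi_wrp(point_groups, wrp_dist, n, pitch, wavelength):
    """Multiple-WRP CGH on an n x n grid.

    point_groups[i]: iterable of (x, y, dz) points assigned to WRP i,
                     where dz is the small point-to-WRP gap.
    wrp_dist[i]:     distance from WRP i to the CGH plane.
    Each depth slice writes spherical waves onto its own nearby WRP;
    every WRP is then Fresnel-propagated to the CGH plane and summed.
    """
    k = 2 * np.pi / wavelength
    xs = (np.arange(n) - n / 2) * pitch
    X, Y = np.meshgrid(xs, xs)
    cgh = np.zeros((n, n), dtype=complex)
    for points, d in zip(point_groups, wrp_dist):
        wrp = np.zeros((n, n), dtype=complex)
        for x0, y0, dz in points:
            r = np.sqrt((X - x0) ** 2 + (Y - y0) ** 2 + dz ** 2)
            wrp += np.exp(1j * k * r) / r
        cgh += wrp_to_cgh(wrp, pitch, d, wavelength)
    return cgh
```

As the text notes, the single-WRP case is simply `len(point_groups) == 1`; adding WRPs shrinks each point's dz (and hence its active area on the WRP) at the cost of one extra FFT propagation per plane.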
Figure 18 The bear object captured from multiple directions by a depth camera.
Figure 19 Calculation time of single WRP vs. multiple WRPs.
Figure 20 Reconstruction results from the WRP methods.
The reconstructed object is shown in Figure 20. The results show that using multiple WRPs has no effect on the quality of the reconstructed object.
6. Conclusions

3D displays are beginning to show objects with their natural properties, providing full depth cues that satisfy both psychological and physiological perception. Beginning with fixed left and right 2D images in stereoscopic photographs, display technology moved to color, polarization, and shutter glasses in stereoscopic movies. From glasses-dependent displays, it moved to autostereoscopic displays without glasses. From psychological depth cues alone, it progressed to both psychological and physiological cues in holographic and volumetric displays. Display with natural properties remains the goal of future display technology; the properties of each display type are shown in Table 1. Improvements in materials and in computer-technology infrastructure present an opportunity for fully developed 3D display.
Acknowledgement This work was partly supported by the IT R&D program of MSIP/MOTIE/ KEIT [10039169, Development of Core Technologies for Digital Holographic
Table 1 Display type and its properties.

Stereoscopic Display
• Disadvantages: glasses; distortion (cardboard effect or puppet-theater effect); eye fatigue; limited views (two views or horizontal parallax only)
• Advantages: good compatibility with existing TVs and monitors; easy to implement; good resolution

Integral Imaging Display
• Disadvantages: low resolution; restricted viewing angles; restricted depth
• Advantages: real 3D image; cheap components; 3D viewing without glasses; full parallax; incoherent light source

Head Mounted Display
• Disadvantages: narrow viewing zone; eye fatigue; single view; system worn on the head
• Advantages: portable

Volumetric Light Field Display
• Disadvantages: binocular depth cue only; horizontal parallax only (tracking system required for vertical parallax); very high projection speed required
• Advantages: 360-degree viewing zone; high image quality

Holographic Display
• Disadvantages: coherent light required; computation power; material
• Advantages: high resolution; full parallax; true 3D
3-D Display and Printing System] and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (20120009225)
References
[1] Nicolas S. Holliman, Neil A. Dodgson, Gregg E. Favalora, and Lachlan Pockett, "Three-Dimensional Displays: A Review and Applications Analysis," IEEE Transactions on Broadcasting 57, 362-371 (2011).
[2] B. Lee, "Three-dimensional displays, past and present," Physics Today 66, 36-41 (2013).
[3] M. Brain, "How 3-D Glasses Work," http://www.howstuffworks.com/3-d-glasses2.htm
[4] V. A. Nolde, "Vor- und Nachteile der beiden 3D-Varianten" [Advantages and disadvantages of the two 3D variants], http://www.chip.de/artikel/3D-TVs-im-Technik-Duell-Polarisation-versus-Shutter-2_51183986.html
[5] C. Wheatstone, "Contributions to the Physiology of Vision. Part the First. On some remarkable, and hitherto unobserved, Phenomena of Binocular Vision," Philos. Trans. R. Soc. London 128, 371-394 (1838).
[6] W. D. Morgan and H. M. Lester, Stereo-Realist Manual, 1st ed. (Morgan & Lester Publishers, 1954).
[7] R. Zone, 3-D Filmmakers: Conversations with Creators of Stereoscopic Motion Pictures (The Scarecrow Filmmakers Series), 2nd ed. (Scarecrow Press, 2005).
[8] W. Rollmann, "Zwei neue stereoskopische Methoden" [Two new stereoscopic methods], Ann. Phys. 166, 186-187.
[9] R. Zone, Stereoscopic Cinema and the Origins of 3-D Film, 1838-1952 (The University Press of Kentucky, 2007).
[10] T. L. Turner and R. F. Hellbaum, "LC shutter glasses provide 3-D display for simulated flight," J. Inf. Disp. 2, 22-24 (1986).
[11] L. Edwards, "Active Shutter 3D Technology for HDTV," http://phys.org/news173082582.html
[12] J. B. Kaiser, Make Your Own Stereo Pictures, 1st ed. (The Macmillan Co., 1955).
[13] K.-C. Kwon, Y.-S. Choi, and N. Kim, "Automatic vergence and focus control of parallel stereoscopic camera by cepstral filter," J. Electronic Imaging 13, 376-383 (2004).
[14] K.-C. Kwon, Y.-T. Lim, N. Kim, K.-H. Yoo, J.-M. Hong, and G.-C. Park, "High Definition 3D Stereoscopic Microscope Display System for Biomedical Applications," EURASIP J. Image and Video Processing 2010 (2010).
[15] M. G. Lippmann, "Epreuves reversibles donnant la sensation du relief" [Reversible prints giving the sensation of relief], J. de Phys. 7, 821-825 (1908).
[16] M.-U. Erdenebat, G. Baasantseren, N. Kim, K.-C. Kwon, J. Byeon, K.-H. Yoo, and J.-H. Park, "Integral-floating Display with 360 Degree Horizontal Viewing Angle," J. Opt. Soc. Korea 16(4), 365-371 (2012).
[17] H. Hoshino, F. Okano, H. Isono, and I. Yuyama, "Analysis of resolution limitation of integral photography," J. Opt. Soc. Am. 15, 2059-2065 (1998).
[18] J.-H. Park, S.-W. Min, S. Jung, and B. Lee, "Analysis of viewing parameters for two display methods based on integral photography," Appl. Opt. 40, 5217-5232 (2001).
[19] H. Liao, T. Dohi, and M. Iwahara, "Improved viewing resolution of integral videography by use of rotated prism sheets," Opt. Express 15, 4814-4822 (2007).
[20] J.-H. Park, H.-R. Kim, Y. Kim, J. Kim, J. Hong, S.-D. Lee, and B. Lee, "Depth-enhanced three-dimensional-two-dimensional convertible display based on modified integral imaging," Opt. Lett. 29, 2734-2736 (2004).
[21] J.-H. Park, J. Kim, Y. Kim, and B. Lee, "Resolution-enhanced three-dimension/two-dimension convertible display based on integral imaging," Opt. Express 13, 1875-1884 (2005).
[22] M. Yamasaki, H. Sakai, T. Koike, and M. Oikawa, "Full-parallax autostereoscopic display with scalable lateral resolution using overlaid multiple projection," Journal of the SID 18, 494-500 (2010).
[23] S. Jung, J.-H. Park, H. Choi, and B. Lee, "Viewing-angle-enhanced integral three-dimensional imaging along all directions without mechanical movement," Opt. Express 11, 1346-1356 (2003).
[24] Y. Kim, J.-H. Park, S.-W. Min, S. Jung, H. Choi, and B. Lee, "Wide-viewing-angle integral three-dimensional imaging system by curving a screen and a lens array," Appl. Opt. 44(4), 546-552 (2005).
[25] G. Baasantseren, J.-H. Park, K.-C. Kwon, and N. Kim, "Viewing angle enhanced integral imaging display using two elemental image masks," Opt. Express 17(16), 14405-14417 (2009).
[26] M. A. Alam, G. Baasantseren, M.-U. Erdenebat, N. Kim, and J.-H. Park, "Resolution enhancement of integral-imaging three-dimensional display using directional elemental image projection," Journal of the SID 20(4), 421-427 (2012).
[27] F. Okano, H. Hoshino, J. Arai, and I. Yuyama, "Real-time pickup method for a three-dimensional image based on integral photography," Appl. Opt. 36, 1598-1603 (1997).
[28] J. Arai, F. Okano, H. Hoshino, and I. Yuyama, "Gradient-index lens-array method based on real-time integral photography for three-dimensional images," Appl. Opt. 37, 2034-2045 (1998).
[29] J. Arai, H. Kawai, M. Kawakita, and F. Okano, "Depth-control method for integral imaging," Opt. Lett. 33, 279-281 (2008).
[30] J. Arai, M. Kawakita, T. Yamashita, H. Sasaki, M. Miura, H. Hiura, M. Okui, and F. Okano, "Integral three-dimensional television with video system using pixel-offset method," Opt. Express 21(3), 3474-3485 (2013).
[31] G. Li, K.-C. Kwon, G.-H. Shin, J.-S. Jeong, K.-H. Yoo, and N. Kim, "Simplified Integral Imaging Pickup Method for Real Objects Using a Depth Camera," Journal of the Optical Society of Korea 16(4), 381-385 (2012).
[32] J.-A. Piao, G. Li, M.-L. Piao, and N. Kim, "Full Color Holographic Optical Element Fabrication for Waveguide-type Head Mounted Display Using Photopolymer," to be published in J. Opt. Soc. Korea (2013).
[33] J. E. Melzer and K. Moffitt, Head Mounted Displays: Designing for the User (McGraw Hill, New York, 1997).
[34] M. G. Tomilin, "Head-mounted displays," J. Opt. Technol. 66, 528-533 (1999).
[35] H. Hua, A. Girardot, C. Gao, and J. P. Rolland, "Engineering of head-mounted projective displays," Appl. Opt. 39, 3814-3824 (2000).
[36] M. L. Piao, N. Kim, and J. H. Park, "Phase contrast projection display using photopolymer," J. Opt. Soc. Korea 12, 319-325 (2008).
[37] K. Y. Lee, S. H. Jeung, B. M. Cho, and N. Kim, "Photopolymer-based surface-normal input/output volume holographic grating coupler for 1550-nm optical wavelength," J. Opt. Soc. Korea 16, 17-21 (2012).
[38] E. Fernandez, A. Marquez, S. Gallego, R. Fuentes, C. García, and I. Pascual, "Hybrid ternary modulation applied to multiplexing holograms in photopolymers for data page storage," J. Lightwave Technol. 28, 776-783 (2010).
[39] S. H. Stevenson, M. L. Armstrong, P. J. O'Connor, and D. F. Tipton, "Advances in photopolymer films for display holography," Proc. SPIE 2333, 60-70 (1995).
[40] N. Kim and E. S. Hwang, "Analysis of Optical Properties with Photopolymers for Holographic Application," J. Opt. Soc. Korea 10, 1-10 (2006).
[41] B. Blundell and A. Schwarz, Volumetric Three-Dimensional Display Systems (Wiley, 2000).
[42] R. R. Hainich and O. Bimber, Displays: Fundamentals and Applications, 1st ed. (A K Peters/CRC Press, 2011).
[43] T.-C. Poon, Digital Holography and Three-Dimensional Display: Principles and Applications (Springer, 2006).
[44] A. Jones, I. McDowall, H. Yamada, M. Bolas, and P. Debevec, "Rendering for an Interactive 360° Light Field Display," Proc. ACM SIGGRAPH 26, 1-10 (2007).
[45] A. Jones, M. Lang, X. Yu, J. Busch, I. McDowall, M. Bolas, and P. Debevec, "Achieving Eye Contact in a One-to-Many 3D Video Teleconferencing System," Proc. ACM SIGGRAPH 28, 64:1-64:8 (2009).
[46] M.-U. Erdenebat, G. Baasantseren, J.-H. Park, N. Kim, K.-C. Kwon, Y.-H. Jang, and K.-H. Yoo, "Full-parallax 360 degrees horizontal viewing integral imaging using anamorphic optics," Proc. SPIE 7863, 7863OU (2011).
[47] G. Baasantseren, J.-H. Park, M.-U. Erdenebat, S.-W. Seo, and N. Kim, "Integral floating-image display using two lenses with reduced distortion and enhanced depth," J. Soc. Inf. Disp. 18, 519-526 (2010).
[48] M.-U. Erdenebat, J.-H. Park, K.-C. Kwon, K.-H. Yoo, and N. Kim, "Resolution Enhanced Light Field Integral-Floating 360 Degree Display Using Hexagonal Lens Array," presented at IMID 2012, Daegu, Korea, 449-450 (2012).
[49] P.-A. Blanche et al., "Holographic three-dimensional telepresence using large-area photorefractive polymer," Nature 468, 80-83 (2010).
[50] L. Onural, F. Yaras, and H. Kang, "Digital holographic three-dimensional video displays," Proc. IEEE 99(4), 576-589 (2011).
[51] J. Hahn, H. Kim, Y. Lim, G. Park, and B. Lee, "Wide viewing angle dynamic holographic stereogram with a curved array of spatial light modulators," Opt. Express 16, 12372-12386 (2008).
[52] S. Reichelt, H. Sahm, N. Leister, and A. Schwerdtner, "Capabilities of diffractive optical elements for real-time holographic displays," Proc. SPIE 6912, 69120-69130 (2008).
[53] S. Reichelt, R. Haussler, G. Futterer, and N. Leister, "Depth cues in human visual perception and their realization in 3D displays," Proc. SPIE 7690 (2010).
[54] N. Leister, A. Schwerdtner, G. Fütterer, S. Buschbeck, J.-C. Olaya, and S. Flon, "Full-color interactive holographic projection system for large 3D scene reconstruction," Proc. SPIE 6911 (2008).
[55] E. Zschau, R. Missbach, A. Schwerdtner, and H. Stolle, "Generation, encoding and presentation of content on holographic displays in real time," Proc. SPIE 7690, 76900E (2010).
[56] T. Shimobaba, N. Masuda, and T. Ito, "Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane," Opt. Lett. 34, 3133-3135 (2009).
[57] J. Weng et al., "Generation of real-time large computer generated hologram using wavefront recording method," Opt. Express 20, 4018-4023 (2012).
[58] P. Tsang, W.-K. Cheung, T.-C. Poon, and C. Zhou, "Holographic video at 40 frames per second for 4-million object points," Opt. Express 19, 15205-15211 (2011).
[59] G. Li, A.-H. Phan, N. Kim, and J.-H. Park, "Synthesis of computer-generated spherical hologram of real object with 360° field of view using a depth camera," Appl. Opt. 52, 3567-3575 (2013).
[60] S. Katz, A. Tal, and R. Basri, "Direct Visibility of Point Sets," ACM Trans. Graph. 26, 2-11 (2007).