Topical Editor: Prof. Dr. Georgios A. Triantafyllidis. 3D Res. 3, 01(2012)5. DOI 10.1007/3DRes.01(2012)5
3DR REVIEW
Application-adapted Mobile 3D Video Coding and Streaming - A Survey
Yanwei Liu • Song Ci • Hui Tang • Yun Ye
Received: 28 July 2011 / Revised: 05 October 2011 / Accepted: 20 October 2011
© 3D Research Center, Kwangwoon University and Springer 2012
Abstract  3D video technologies have gradually matured to the point of moving onto mobile platforms. In mobile environments, the specific characteristics of wireless networks and mobile devices present great challenges for 3D video coding and streaming, so application-adapted mobile 3D video coding and streaming technologies are urgently needed. Based on the mobile 3D video application framework, this paper reviews the state-of-the-art technologies for mobile 3D video coding and streaming. Specifically, the mobile 3D video formats and the corresponding coding methods are first reviewed, and then the streaming adaptation technologies, including 3D video transcoding, 3D video rate control and cross-layer optimized 3D video streaming, are surveyed.

Keywords  Mobile 3D video, Coding, Streaming
1. Introduction

The advances in digital video processing have made the 3D movie in the cinema a truly exciting experience1. This kind of 3D experience has attracted a great deal of research on deploying 3D video for home applications. With 3D content creation, distribution, and visualization technologies coming to maturity, 3D video is gradually entering the home. Glasses-based 3DTV is the typical application form of 3D video home entertainment. To further improve the 3D perception quality, glasses-free auto-stereoscopic displays have been developed as an alternative for the next generation of home 3DTV.
1 Yanwei Liu (✉) • 2 Song Ci • 1 Hui Tang • 3 Yun Ye
1 Institute of Acoustics, Chinese Academy of Sciences
2 Institute of Acoustics, Chinese Academy of Sciences, and University of Nebraska-Lincoln
3 University of Nebraska-Lincoln
email: [email protected]
Due to physical constraints, home 3DTV services allow people to perceive 3D effects only in a fixed room. In recent years, great advances have been achieved in mobile 3D displays, so that 3D video applications can be extended to handheld devices and provide the 3D visual experience anywhere and anytime2. With mobile 3D display devices, people can enjoy exciting 3D video programs in a moving car, while walking, or at the office. Mobile 3D video is a new application form of general 3D video deployed over wireless communication, and its applications face the challenges of unstable communication links and diverse end users. Therefore, in the development of a mobile 3D video system, the specific application characteristics should be taken into account. In the past several years, several EU-funded projects have focused on developing end-to-end mobile 3D video prototype systems3, 4. The Mobile3DTV project3 aims at developing a digital mobile 3D video system transmitted over digital video broadcasting-handheld (DVB-H); for mobile 3D video coding and streaming, the project puts emphasis on stereoscopic video coding and error-resilient broadcasting. The 3D phone project4 aims to enable 3D multimedia applications on mobile phones. These projects have made considerable progress in mobile 3D video technologies5. As their results show, a complete end-to-end mobile 3D video system is composed of 3D capturing, 3D coding, 3D streaming, and mobile 3D display. The technologies for 3D capturing and mobile 3D display lie in the areas of optics and semiconductors, and involve many optical processing technologies for disparity rendering in the hardware device. By comparison, mobile 3D video coding and streaming are the key technologies, tied to the application environment, that affect the end-to-end 3D perception quality. Due to space limitations, this paper only surveys the state-of-the-art technologies for mobile 3D video
coding and streaming from the application-adapted viewpoint.
Fig. 1 Mobile 3D video streaming framework
The mobile 3D video streaming framework is shown in Fig. 1. The captured stereoscopic video, or the generated video-plus-depth based stereoscopic 3D video, is first encoded for distribution. The original 3D video may be in high-definition (HD) formats distributed over broadcasting or wire-line networks. There are therefore two possible transmission routes that can deliver the 3D video to the mobile end user. In the first, the captured HD 3D video is processed to adapt it to the mobile 3D application and then transmitted directly over the wireless network. For this type of application, mobile 3D video coding optimization and the corresponding streaming adaptation technologies, such as 3D video rate control and cross-layer streaming optimization, can improve the end-to-end 3D visual quality. In the second route, the HD 3D contents are first transmitted through the Internet and are then delivered over the wireless network to reach the mobile end user. Here, a transcoding gateway is needed to transcode the HD 3D video into a mobile 3D video stream adapted to both the wireless transmission and the end user's 3D display. Thus, 3D video transcoding that takes the mobile application characteristics into account is very important.

The rest of the paper is organized as follows. Section 2 reviews the mobile 3D video formats and their corresponding coding methods. In Section 3, we survey mobile 3D video streaming technologies, including 3D video transcoding, 3D video rate control and cross-layer optimization for 3D video streaming. Finally, concluding remarks are provided in Section 4.
2. Mobile 3D Video Formats and Coding

Since 3D video lies at the boundaries of computer vision, graphics and signal processing, there are many formats with which to realize it. However, due to the limited processing capabilities of mobile 3D devices (with stereoscopic or auto-stereoscopic displays), only simple stereoscopic 3D video formats are currently practical for mobile applications. Stereoscopic video contains two captured views which form the stereo vision mimicking the
human binocular (two-eye) vision system. This format can be directly coded with H.264/AVC simulcast coding or the H.264/MVC (multiview video coding) stereo high profile6. Based on binocular vision theory, asymmetric coding of stereoscopic video, in which one view is encoded with lower quality than the other, can also save coding bit-rate7, 8. However, if the quality of one view is significantly lower than that of the other, asymmetric coding may result in noticeable degradation of the perceived stereoscopic quality. Stereoscopic video can provide 3D perception, but it cannot provide parallax-adjustable 3D effects. In comparison, as an alternative to stereoscopic video, the video-plus-depth representation can provide parallax-shiftable 3D perception within a limited range. The per-pixel depth carries the disparity information that assists the synthesis of another, virtual view. Since the view synthesis quality highly depends on the accuracy of the geometry information provided by the depth, the accuracy of the depth is very important. Currently, depth-based formats can provide acceptable visual perception even though the depth is not very accurate. The depth map can be directly encoded with H.264/AVC9, and ISO/IEC 23002-3 (also referred to as MPEG-C Part 3) has specified the representation of auxiliary video and supplemental information; in particular, it enables signaling for depth map streams to support 3D video applications. In 3D video applications, the depth map is a kind of range data that supports virtual view synthesis and is not directly displayed, so it is not appropriate to compress it with a conventional rate-distortion optimized method. Because the final goal of transmitting the depth map is to obtain high-quality virtual view synthesis, the view synthesis quality is the suitable metric for evaluating depth compression. Considering the virtual view quality, the video and depth can be encoded at different bit-rates; video and depth compression with different bit-rate overheads leads to different synthesis qualities of the virtual view10. By quantitatively analyzing the effect of depth and video compression on the virtual view quality, Liu et al.11 proposed a joint video/depth rate allocation method based on a view synthesis distortion model to optimize the 3D video coding efficiency. At the current stage, depth capture and depth estimation are not yet satisfactory, so some errors exist in the obtained depth map, and the compression of the depth map introduces further depth errors. In view synthesis, occlusions and illumination differences between views can also introduce virtual view distortion. Therefore, depth compression is only one of several influences on the virtual view quality. If the original depth is accurate enough, the depth map compression can be regulated to control the virtual view quality12. Since view distortion is suitable for evaluating depth compression, depth map compression and video compression can be integrated into one unified coding scheme. A method that jointly optimizes the virtual view distortion and the video/depth rates has been used to integrate video compression with depth map compression13, and the 3D video rate-distortion model11 also implicitly optimizes the virtual view quality. Future 3D video compression schemes may further exploit the redundancies between video and depth, as well as the temporal redundancy.
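For illustration, a minimal Python sketch of such a joint video/depth rate allocation is given below. It is not the algorithm of11; it assumes a hypothetical linear view-synthesis distortion model and rate/distortion figures obtained from trial encodings, and simply searches for the QP pair that fits a total rate budget.

```python
# Hypothetical sketch of joint video/depth rate allocation driven by a
# view-synthesis distortion model (simplified; not the exact method of ref. 11).
from itertools import product

def synthesis_distortion(d_video, d_depth, w_v=0.7, w_d=0.3):
    """Assumed linear model: virtual-view distortion as a weighted sum of the
    decoded video distortion and the geometry error caused by depth coding."""
    return w_v * d_video + w_d * d_depth

def allocate_qp(rd_video, rd_depth, rate_budget_kbps):
    """rd_video/rd_depth map QP -> (rate in kbit/s, distortion), e.g. measured
    by trial encodings. Return the QP pair minimizing the modeled virtual-view
    distortion under the total rate budget."""
    best = None
    for qp_v, qp_d in product(rd_video, rd_depth):
        r_v, d_v = rd_video[qp_v]
        r_d, d_d = rd_depth[qp_d]
        if r_v + r_d > rate_budget_kbps:
            continue
        d_syn = synthesis_distortion(d_v, d_d)
        if best is None or d_syn < best[0]:
            best = (d_syn, qp_v, qp_d)
    return best  # (estimated distortion, video QP, depth QP) or None

# Example with made-up rate/distortion measurements:
rd_video = {26: (900, 12.0), 30: (620, 18.5), 34: (430, 27.0)}
rd_depth = {30: (210, 9.0), 34: (140, 14.0), 38: (95, 21.0)}
print(allocate_qp(rd_video, rd_depth, rate_budget_kbps=800))
```

In practice the distortion model would be calibrated per sequence, and the search can be replaced by a model-based closed-form allocation, but the structure of the decision is the same.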
To spur the practical application of 3D video on handheld devices, reducing the encoding complexity and improving the encoding speed should be considered in current and future research activities. With the improvement of smart mobile devices' computing capabilities, the 3D video encoding algorithm can be designed to exploit advanced mobile computing hardware for high encoding speed. In addition, the complexity of the encoding algorithm can be reduced to meet the power limitation of a specific device. Complexity-aware mobile 3D video coding is proposed in14, which exploits the run-time trade-off between complexity and video quality. Similar work, such as stereoscopic encoder optimization for mobile applications, is performed in15: the trade-off between rate, distortion and complexity is optimized in the H.264/MVC stereo high profile, so that the encoder complexity can be controlled to match the power state of the mobile device and guarantee a longer 3D service time.
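A minimal sketch of the idea of complexity-aware encoder control follows. It does not reproduce the schemes of14, 15; the presets, their costs and the battery-driven budget scaling are hypothetical, and the sketch simply picks the best-quality configuration that fits the current complexity budget.

```python
# Hypothetical sketch of complexity-aware encoder configuration: choose the
# highest-quality preset whose estimated cost fits the current complexity
# budget (scaled down as the battery level drops). Numbers are illustrative.

# preset -> (cost in mega-cycles per frame, expected quality score)
PRESETS = {
    "full_search_2ref": {"cost": 120.0, "quality": 38.2},
    "fast_search_2ref": {"cost": 70.0,  "quality": 37.6},
    "fast_search_1ref": {"cost": 45.0,  "quality": 36.9},
    "intra_heavy":      {"cost": 25.0,  "quality": 35.4},
}

def pick_preset(budget_mcycles, battery_level):
    """Scale the budget with the battery level, then choose the best-quality
    preset that still fits; fall back to the cheapest preset otherwise."""
    effective = budget_mcycles * max(0.3, battery_level)
    feasible = {n: p for n, p in PRESETS.items() if p["cost"] <= effective}
    if not feasible:
        return "intra_heavy"
    return max(feasible, key=lambda n: feasible[n]["quality"])

print(pick_preset(budget_mcycles=100.0, battery_level=0.5))  # -> 'fast_search_1ref'
```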
3. Mobile 3D Video Streaming

The current mobile 3D video coding methods can greatly reduce the transmission bit-rate and have also provided superior broadcasting performance for mobile applications16, 17. However, due to diverse users and unstable communication links, they cannot yet fully adapt to wireless transmission environments at the current stage. Thus, mobile 3D streaming should integrate the source coding characteristics with the transmission behavior and then provide an adaptive streaming strategy. Taking into account the specific characteristics of the heterogeneous network, 3D video transcoding, 3D video rate control and cross-layer optimization for mobile 3D video streaming are reviewed in this section.
3.1. Mobile 3D Video Transcoding

Home 3D video applications mostly use high-definition (HD) formats delivered by terrestrial broadcasting, cable, satellite and IPTV18. Though HD 3D video provides vivid visual effects, it requires much more transmission bandwidth. When 3D video is distributed over a wireless network, rate-reduction and down-sampling transcoding must be utilized to make the 3D video stream adapt to the wireless channel and the mobile display. To provide processing compatibility with non-3D video decoding devices, Liu and Chen19 first proposed a 3D video transcoding scheme for the virtual view. By utilizing the inter-view motion correlation, the proposed transcoder re-encodes the virtual view with motion refinement, and then generates a single-view stream that can be decoded with H.264/AVC. In mobile applications, when the channel resources are not sufficient to sustain the 3D video service, the gateway can use such virtual view transcoding to generate a 2D video stream with an appropriate viewpoint for the end user.
Fig. 2 Error-resilient video plus depth based mobile 3D video transcoding
For 3D video streaming transiting from the wire-line network to the wireless network, rate-adaptive and error-resilient transcoding is necessary. For video-plus-depth based 3D video streaming, transcoding with error-resilient video/depth rate allocation can be used to improve the end-to-end stereoscopic 3D video quality over an unstable channel with limited bandwidth. Based on feedback of channel and device information, the transcoder can re-encode the down-scaled video and depth to meet the new wireless transmission rate requirement. Therefore, 3D video transcoding that combines the encoding server and the transcoder can be designed, as shown in Fig. 2. The video encoding server and the transcoder are originally independent components of the video streaming system. Here, the encoding is integrated with the transcoding to transfer a part of the transcoding computation to the encoding
server. In particular, the heavy computation for video/depth rate re-allocation, originally performed at the transcoder, can be moved to the high-performance encoding server. The encoding server compresses the HD 3D contents and concurrently generates the Rate-QP-PLR (packet loss rate) table for the transcoding. To adapt to the mobile display, the video and depth need to be appropriately down-sampled. The server re-encodes the down-sampled video and depth using multiple encodings to perform the video/depth rate allocation and generates the Rate-QP-PLR
table. The Rate-QP-PLR table contains the specific video QP and depth QP values under different levels of rate and PLR. By looking up the Rate-QP-PLR table, the transcoder transcodes the compressed video and depth streams with the appropriate QP pair according to the PLR and channel information fed back from the actual transmission channel. Though the encoding server does not know the actual channel behavior in advance, the random packet loss previously simulated at the encoding server can reflect, in a statistical sense, the actual influence of transmission errors on the video quality.
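The following sketch illustrates how a transcoder might consult such a Rate-QP-PLR table at run time; the table entries, rate levels and PLR bins are purely illustrative.

```python
# Sketch of the Rate-QP-PLR lookup described above (entries are illustrative).
# Key: (rate level in kbit/s, packet loss rate bin) -> (video QP, depth QP)
RATE_QP_PLR = {
    (800, 0.00): (26, 30),
    (800, 0.05): (28, 30),   # heavier quantization leaves room for protection
    (500, 0.00): (30, 34),
    (500, 0.05): (32, 34),
    (300, 0.00): (34, 38),
    (300, 0.05): (36, 38),
}

def lookup_qp(target_rate_kbps, measured_plr):
    """Snap the channel feedback (rate, PLR) to the nearest table entry and
    return the video/depth QP pair the transcoder should use."""
    key = min(RATE_QP_PLR,
              key=lambda k: (abs(k[0] - target_rate_kbps) / 1000.0
                             + abs(k[1] - measured_plr)))
    return RATE_QP_PLR[key]

print(lookup_qp(target_rate_kbps=550, measured_plr=0.04))  # -> (32, 34)
```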
Fig. 3 VBR rate control for mobile 3D video streaming. Rt1 and Rt2 are the rate constraints at two consecutive time instants, and RThreshold is the threshold on the allowed change in channel bandwidth
3.2. Mobile 3D Video Rate Control

Wireless channel conditions are often non-stationary due to receiver mobility, so the available channel bandwidth changes over time. To meet the wireless channel bandwidth limitation, the video source coding rate must be controlled. In particular, variable bit-rate (VBR) control is necessary to adapt to channel bandwidth fluctuations. Many rate control schemes have been proposed for wireless video communication20, and constant bit-rate control algorithms have also been proposed for 3D video applications21, 22, 23. For the video-plus-depth representation, variable bit-rate control is appropriate for mobile 3D video streaming24. A rate control algorithm using three-pass encoding is shown in Fig. 3. With the rate-distortion (RD) information and the reconstructed video and depth collected in the first- and second-pass offline encodings, the encoder can establish a virtual view quality model (VVQM) to estimate the virtual view quality under different rate combinations of video and depth. In the third-pass encoding, the encoder
utilizes the VVQM to find the optimal video/depth target rate allocation, and then independently controls the video and depth rates with a rate-quantization (RQ) model. In the course of rate control, the video/depth rate allocation is adjusted in real time to meet the varying channel bandwidth constraint. This kind of VBR rate control can compensate for the bit-rate fluctuation of the channel to a certain degree.

For wireless video transmission, congestion often results in packet loss. Congestion control regulates the sending rate to avoid a congestion collapse of the network. TCP-friendly Rate Control (TFRC) is an equation-based congestion control technique originally designed for the Internet, and it has been extended to wireless networks. To provide optimal video streaming performance, the source coding rate control in the application layer can be integrated with TFRC congestion control25, 26 in the transport layer. For mobile 3D video streaming, TFRC can also provide the target rate information for 3D video coding, avoiding congestion-induced packet loss while maximizing the 3D video perception quality. This can be investigated in future research.
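As a sketch of how TFRC feedback could drive 3D rate control, the snippet below computes the TCP-friendly sending rate from the standard TFRC throughput equation (RFC 5348, with the common simplification t_RTO = 4·RTT) and then splits it between video and depth. The fixed split ratio is an assumption standing in for a VVQM-driven allocation.

```python
import math

def tfrc_rate_bps(seg_size_bytes, rtt_s, loss_rate, b=1):
    """TCP-friendly rate (bit/s) from the TFRC throughput equation (RFC 5348),
    using the usual simplification t_RTO = 4 * RTT."""
    if loss_rate <= 0:
        return float("inf")   # no loss events observed: equation does not bind
    t_rto = 4.0 * rtt_s
    denom = (rtt_s * math.sqrt(2.0 * b * loss_rate / 3.0)
             + t_rto * 3.0 * math.sqrt(3.0 * b * loss_rate / 8.0)
             * loss_rate * (1.0 + 32.0 * loss_rate ** 2))
    return 8.0 * seg_size_bytes / denom

def split_video_depth(total_bps, depth_share=0.2):
    """Hypothetical fixed split; in practice the share would come from a
    virtual-view quality model such as the VVQM described above."""
    return (1.0 - depth_share) * total_bps, depth_share * total_bps

target = tfrc_rate_bps(seg_size_bytes=1200, rtt_s=0.08, loss_rate=0.01)
video_bps, depth_bps = split_video_depth(target)
print(round(target / 1000), round(video_bps / 1000), round(depth_bps / 1000))  # kbit/s
```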
3.3. Cross-layer Optimized Mobile 3D Video Transmission

Due to their limited adaptation to dynamic wireless link conditions and limited interaction between layers, traditional layer-separated protocols and solutions fail to provide QoS (Quality of Service) for mobile video streaming. A cross-layer solution jointly tunes the parameters of different layers in the protocol stack to optimize the resource allocation and provide the maximal service quality.
Fig. 4 Cross-layer optimization scheme for wireless video streaming
In wireless communication, transmission is often accompanied by high bit error rates during fading periods. Separately applying source-coding error resilience and channel protection cannot optimally control transmission errors. Joint source and channel coding is an effective method that can optimally control transmission errors27; it adaptively controls the error-resilience strength at the source coding side according to the error information fed back from the transmission channel. To further strengthen the error control efficiency, cross-layer design techniques that jointly adapt the coding and transmission techniques across all network layers have demonstrated considerable performance gains for multimedia applications over wireless networks. Cross-layer optimized video streaming not only controls the data packet size in the application layer to adapt to the channel conditions, but also adaptively sets the channel coding rate and modulation mode in the physical layer to achieve the maximal end-to-end video quality28. For example, a video streaming optimization scheme across the application layer and physical layer is shown in Fig. 4. The source coding (such as H.264/AVC) QP in the application layer can be optimized simultaneously with the adaptive modulation and coding (AMC) mode in the physical layer by minimizing the expected end-to-end video distortion. Generally, the quantization controlled by the QP contributes to the video distortion, and the packet size controlled by the QP also results in different transmission delays under the same data bit-rate. The bit error rate and data bit-rate determined by the AMC scheme in the physical layer contribute to the video distortion via packet loss. Given the packet delay constraint for real-time streaming, all these factors can be optimally controlled by minimizing the expected video distortion; specific details can be found in29.
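A minimal sketch of such a joint application/physical-layer search is given below; the rate-QP and distortion-QP models, the AMC mode table and the loss-distortion weighting are illustrative assumptions, not the models of28, 29.

```python
# Hypothetical sketch of the cross-layer search described above: jointly pick the
# source QP and the AMC mode that minimize the expected end-to-end distortion
# under a delay constraint. All models and numbers are illustrative.
from itertools import product

QPS = [26, 30, 34]
# AMC mode -> (data rate in kbit/s, packet error rate under the current SNR)
AMC_MODES = {"QPSK_1/2": (3000, 0.001), "16QAM_1/2": (6000, 0.01), "64QAM_3/4": (13500, 0.08)}

def source_rate_kbps(qp):          # assumed rate-QP model
    return 4000.0 * (0.85 ** (qp - 26))

def source_distortion(qp):         # assumed distortion-QP model (MSE-like)
    return 10.0 + 1.8 * (qp - 26)

def expected_distortion(qp, per, d_loss=120.0):
    """Source distortion plus loss-induced distortion weighted by the PER."""
    return source_distortion(qp) + per * d_loss

def best_config(delay_budget_ms, packet_bits=12000):
    best = None
    for qp, mode in product(QPS, AMC_MODES):
        rate_kbps, per = AMC_MODES[mode]
        if source_rate_kbps(qp) > rate_kbps:     # source must fit the link rate
            continue
        delay_ms = packet_bits / rate_kbps       # per-packet transmission delay
        if delay_ms > delay_budget_ms:
            continue
        d = expected_distortion(qp, per)
        if best is None or d < best[0]:
            best = (d, qp, mode)
    return best  # (expected distortion, QP, AMC mode) or None

print(best_config(delay_budget_ms=5.0))
```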
For mobile 3D video streaming, cross-layer optimization with prioritized 3D video distribution over IEEE 802.11e was first proposed in30. According to the different error sensitivities of video and depth with respect to the final perceptual quality, the streaming server allocates a higher priority to the coded color video stream than to the depth map stream in order to improve the perceived quality of 2D/3D video. This approach integrates the 3D video characteristics in the application layer with the enhanced distributed channel access (EDCA) technology in the media access control (MAC) layer of 802.11e to achieve the optimal end-to-end 3D perceptual quality. Cross-layer optimized 3D video transmission that further incorporates video transcoding has also been investigated31, where object-based priority encoding is designed in the application layer to match the traffic classes with different priority levels over a wireless local area network (WLAN). Besides priority optimization across the application layer and the MAC layer, mobile 3D video transmission can jointly consider the specific source coding characteristics and the nature of the other protocol-stack layers to improve the end user's quality of experience (QoE) from a global optimization perspective. For example, multiple description 3D video coding32 has been used to improve the 3D QoE at the receiver by cognitively configuring the source coding strategy according to the state of the transmission channel and the characteristics of the coded signal. Many other techniques can also be incorporated into cross-layer solutions, such as scalable video coding and multiview 3D video coding in the application layer, and MIMO (multiple-input and multiple-output) in the physical layer. All of these can be jointly configured across layers to improve the 3D video transmission performance.
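As an illustration of such prioritization, the sketch below marks the color video and depth streams with different DSCP values so that a WMM/802.11e MAC can map them to different EDCA access categories. The specific DSCP choices and the socket-based marking (on platforms exposing IP_TOS) are assumptions for illustration, not the mechanism used in30.

```python
# Illustrative mapping of 3D stream components to IEEE 802.11e EDCA access
# categories via DSCP marking. EDCA priorities: AC_VO > AC_VI > AC_BE > AC_BK.
import socket

STREAM_PRIORITY = {
    "color_video": {"ac": "AC_VI", "dscp": 34},  # AF41: video access category
    "depth_map":   {"ac": "AC_BE", "dscp": 0},   # best effort: losses hurt less
}

def open_stream_socket(component):
    """Create a UDP socket whose IP TOS/DSCP field is set so that the WLAN MAC
    (via the standard DSCP-to-WMM mapping) queues this component's packets in
    the intended access category. Requires a platform that exposes IP_TOS."""
    prio = STREAM_PRIORITY[component]
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, prio["dscp"] << 2)
    return sock

video_sock = open_stream_socket("color_video")
depth_sock = open_stream_socket("depth_map")
```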
4. Concluding Remarks

A survey of the state-of-the-art technologies for mobile 3D video coding and streaming has been presented. The emphasis of the survey is on the adaptation technologies for coding and streaming in mobile 3D video applications; the technologies of 3D video transcoding, 3D video rate control and cross-layer optimized 3D video streaming have been summarized. Solutions for 3D video coding and streaming are now available. However, due to the particular characteristics of the mobile environment, great challenges remain in achieving high-quality mobile 3D video. For deploying a practical 3D video service on the mobile platform, the challenges mainly involve two aspects. One is that current mobile devices cannot perform complicated 3D video coding and rendering while sustaining a sufficiently long service time on limited battery power. Thus, mobile 3D video coding should consider reducing the encoding complexity, and 3D video encoding optimization technologies for mobile applications will be an urgent topic of future research. The other obstacle for mobile 3D video coding and streaming is the gap between the transmission conditions and the received quality. As is well known, 3D video demands more transmission bandwidth than 2D video; moreover, the wireless channel is unstable, fluctuating and prone to packet loss. Current wireless networks cannot provide sufficient bandwidth for multiple users, so the receiver often obtains only poor 3D perception quality. Therefore, under limited transmission resources, providing a channel-adaptive streaming scheme that maximizes the
end-to-end 3D video quality will be a long-term research effort. Along with mobile 3D video coding and streaming, 3D QoE evaluation is a central component of mobile 3D video applications. The current 2D video QoE metrics are not well suited to 3D video services, and 3D QoE metrics need to be defined. 3D QoE is a multi-dimensional measure that reflects the overall experience of the end user accessing and using the provided service. As key parts of the 3D service, 3D video coding and streaming also contribute to the 3D QoE. Therefore, mapping coding and streaming parameters to QoE, and then seeking QoE-driven or QoE-oriented 3D video coding and streaming solutions, is a very promising research direction.
Acknowledgment  The work was supported in part by the Important National Science & Technology Specific Project under contract 2012ZX03003006-004, NSFC under grant Nos. 60972083 and 61102077, and the Initial Fund of President Award of Chinese Academy of Sciences under contract Y129081621.
References
1. B. Mendiburu (2008), 3D Movie Making: Stereoscopic Digital Cinema from Script to Screen, New York: Elsevier. ISBN 978-0-240-81137-6
2. A. Gotchev et al. (2009), Mobile 3DTV Technology Demonstrator Based On OMAP3430, 16th International Conference on Digital Signal Processing (DSP 2009)
3. http://sp.cs.tut.fi/mobile3dtv/
4. http://the3dphone.eu/
5. A. Gotchev, G. B. Akar, T. Capin, D. Strohmeier, A. Boev (2011), Three-Dimensional Media for Mobile Devices, Proceedings of the IEEE, 99(4): 708-741
6. Mobile3DTV Final Public Summary, Technical report (2011), Mobile 3DTV Content Delivery Optimization over DVB-H System. Available online at: http://sp.cs.tut.fi/mobile3dtv/results/summaries/
7. C. Fehn, P. Kauff, S. Cho, H. Kwon, N. Hur, J. Kim (2007), Asymmetric coding of stereoscopic video for transmission over T-DMB, Proc. 3DTV-CON 2007
8. H. Brust, A. Smolic, K. Müller, G. Tech, T. Wiegand (2009), Mixed Resolution Coding of Stereoscopic Video for Mobile Devices, Proc. 3DTV-CON 2009
9. P. Merkle, Y. Wang, K. Müller, A. Smolic, T. Wiegand (2009), Video plus Depth Compression for Mobile 3D Services, Proc. of IEEE 3DTV Conference 2009
10. P. Merkle, Y. Morvan, A. Smolic, D. Farin, K. Müller, P. H. N. de With, T. Wiegand (2009), The Effect of Depth Compression on Multiview Rendering Quality, Signal Processing: Image Communication, 24(1): 73-88
11. Y. Liu, Q. Huang, S. Ma, D. Zhao, W. Gao (2009), Joint video/depth rate allocation for 3D video coding based on view synthesis distortion model, Signal Processing: Image Communication, 24(8): 666-681
12. Y. Liu, S. Ci, H. Tang (2010), View Synthesis Error Analysis for Selecting the Optimal QP of Depth Map Coding in 3D Video Application, 28th Picture Coding Symposium (PCS 2010)
13. W.S. Kim, A. Ortega, P. Lai, D. Tian, C. Gomila (2009), Depth map distortion analysis for view rendering and depth coding, Proc. of IEEE Int. Conf. Image Processing
14. M. Shafique, B. Zatt, S. Bampi, J. Henkel (2010), Power-Aware Complexity-Scalable Multiview Video Coding for Mobile Devices, 28th Picture Coding Symposium (PCS 2010)
15. P. Merkle, J. B. Singla, K. Müller, T. Wiegand (2011), Stereo Video Encoder Optimization for Mobile Applications, Proc. 3DTV-CON 2011
16. M.O. Bici, D. Bugdayci, G.B. Akar, A.P. Gotchev (2010), Mobile 3D video broadcast, Proc. of ICIP 2010, pp. 2397-2400
17. N. Hur, H. Lee, G. Lee, S. Lee, A. Gotchev, S. Park (2011), 3DTV Broadcasting and Distribution Systems, IEEE Trans. on Broadcasting, 57(2): 395-406
18. W. Zou (2009), An Overview for Developing End-to-End Standards for 3-D TV in the Home, Information Display, 25(7): 14-19
19. S. Liu, C. W. Chen (2010), 3D Video Transcoding for Virtual Views, ACM Multimedia, pp. 795-798
20. C.-Y. Hsu, A. Ortega, M. Khansari (1999), Rate control for robust video transmission over burst-error wireless channels, IEEE J. Select. Areas Commun., 17(5): 756-773
21. J. E. Lim, J. Kim, K.-N. Ngan, K. Sohn (2003), Advanced rate control technologies for 3D-HDTV, IEEE Trans. Consum. Electron., 49(4): 1498-1507
22. B. Kamolrat, W. A. C. Fernando, M. Mrak (2008), Rate controlling for color and depth 3D video coding, Proc. SPIE: Appl. Digital Image Process. XXXI
23. Y. Liu, Q. Huang, S. Ma, D. Zhao, W. Gao, S. Ci, H. Tang (2011), A Novel Rate Control Technique for Multiview Video plus Depth based 3D Video Coding, IEEE Transactions on Broadcasting, 57(2): 562-571
24. Y. Liu, G. Peng, Y. Hu, S. Ci, H. Tang (2010), A Multi-pass VBR Rate Control Method for Video plus Depth based Mobile 3D Video Coding, 2010 Pacific-Rim Conference on Multimedia
25. P. Zhu, W. Zeng, C. Li (2007), Joint Design of Source Rate Control and QoS-Aware Congestion Control for Video Streaming Over the Internet, IEEE Transactions on Multimedia, 9(2): 366-376
26. E. Tan, J. Chen, S. Ardon, E. Lochin (2008), Video TFRC, IEEE International Conference on Communications (ICC 2008)
27. B. Kamolrat, W.A.C. Fernando, M. Mrak, A. Kondoz (2008), Joint source and channel coding for 3D video with depth image based rendering, IEEE Trans. on Consumer Electron., 54(2): 887-894
28. H. Luo, S. Ci, D. Wu, J. Wu, H. Tang (2010), Quality-Driven Cross-Layer Optimized Video Delivery over LTE, IEEE Communications Magazine, 48(4): 102-109
29. G. Peng, Y. Liu, Y. Hu, S. Ci, H. Tang (2011), End-to-End Distortion Optimized Error Control for Real-time Wireless Video Streaming, 2011 IEEE International Workshop on Multimedia Signal Processing
30. C. Hewage, S. Nasir, S. Worrall, M.G. Martini (2010), Prioritized 3D video distribution over IEEE 802.11e, Proc. of Future Network & Mobile Summit 2010
31. S. Nasir, C.T.E.R. Hewage, Z. Ahmad, M. Mrak, S. Worrall, A. Kondoz (2010), Quality-driven coding and prioritization of 3D video over wireless networks, in: High-Quality Visual Experience, M. Mrak, M. Grgic, M. Kunt (Eds.), Springer
32. S. Milani, G. Calvagno (2010), A Cognitive Approach for Effective Coding and Transmission of 3D Video, Proc. of ACM Multimedia 2010, pp. 581-590