Multimed Tools Appl (2012) 57:373–392 DOI 10.1007/s11042-011-0753-x

Evaluating two implementations of the component responsible for decoding video and audio in the Brazilian digital TV middleware

Tiago Henrique Trojahn · Juliano Lucas Gonçalves · Júlio Carlos Balzano Mattos · Luciano Volcan Agostini · Leomar Soares Da Rosa Jr

Published online: 16 February 2011 © Springer Science+Business Media, LLC 2011

Abstract  The Ginga Code Development Network (GingaCDN) project was created to implement a reference version of Ginga, the middleware of the Brazilian Digital Television System (SBTVD), supporting the declarative GingaNCL and the procedural GingaJ environments in the same middleware. To achieve this, a common core, named Ginga Common Core (GingaCC), is being implemented. One of the main components of the GingaCC is the one responsible for decoding audio and video streams, called Media Processing. In this work, two Media Processing implementations, one using the libVLC library and one using the Xine library, are investigated. Performance tests and results of both Media Processing implementations running on two different desktop architectures are discussed.

Keywords  Media processing · Middleware · libVLC library · Xine library

1 Introduction

In mid-2003, the Brazilian Digital Television System (SBTVD) was created by government decree 4901 [2] to encourage the construction of a national Digital TV standard. In 2006, a study [19] published by the Brazilian Society for Television Engineering (SET) and the Brazilian Association of Radio and Television Broadcasters (ABERT) concluded that the Japanese standard, the Integrated Services Digital Broadcasting-Terrestrial (ISDB-T), most closely met national needs. Thus, ISDB-T was officially adopted by the SBTVD in government decree 5820 [3].

Also in 2006, the Federal University of Paraíba (UFPB) developed a middleware, named FlexTV [13], that follows the specifications of the European Globally Executable MHP (GEM) [4]. From FlexTV, the OpenGinga [15] middleware was conceived. This middleware can run programs based on JavaDTV [11], a Java package developed by Sun Microsystems, based on JavaTV but free of royalty problems. That effort was recently named GingaJ [6].

In 2001, Ferreira [5] proposed the creation of a low-cost declarative middleware. Moreno [16] implemented the Maestro middleware to support the Nested Context Language (NCL). NCL is a declarative language developed by PUC-Rio, focused on media synchronism and, as highlighted by Ferreira, known for its low computational cost. In 2007, Maestro was renamed GingaNCL and was officially adopted as the declarative environment of the SBTVD [20]. In late 2009, GingaNCL was recommended by the International Telecommunication Union (ITU) for use in Internet Protocol Television (IPTV) systems [10].

This separate development resulted in a disconnect between the declarative and the procedural environments. Application developers have to choose one of the environments and cannot use the features of the other paradigm, restricting their software to a single middleware implementation. Finally, in February 2009, the GingaCDN Project was created, financed by the RNP Brazilian funding agency. The aim of GingaCDN is to create a distributed and collaborative software development network for Digital TV, contributing to the creation of a single, modular and integrated reference implementation of Ginga, unifying the GingaNCL and GingaJ environments.

T. H. Trojahn · J. L. Gonçalves (*) · J. C. B. Mattos · L. V. Agostini · L. S. Da Rosa Jr
Technological Development Center, Federal University of Pelotas, Pelotas, Brazil
e-mail: [email protected]
T. H. Trojahn e-mail: [email protected]
J. C. B. Mattos e-mail: [email protected]
L. V. Agostini e-mail: [email protected]
L. S. Da Rosa Jr e-mail: [email protected]
The middleware is being developed using a component-based approach, introducing a common core, named Ginga Common Core (GingaCC), to provide support for the declarative GingaNCL and the procedural GingaJ sub-systems. The software development process is carried out by 13 Brazilian universities, coordinated by UFPB and PUC-Rio, where each university is responsible for the design of a pre-determined number of components. This effort includes the creation of a tool to build custom versions of Ginga, where a developer can combine different components, with different features, into a custom Ginga middleware. All GingaCC components are being developed in the C++ language, using a component model named FlexCM [7] to facilitate the integration of all components.

The focus of this work is the Media Processing component, one of the main components of the GingaCC, whose development is under our responsibility. To explore different solutions for it, we have developed two components with the same behavior. The first solution is based on the libVLC library [14], while the second one is based on the Xine library [22]. This work presents a comparison between both solutions.

The remainder of this paper is organized as follows: the Media Processing component is described in Section 2. The libraries used in the implementation, libVLC and Xine, as well as the FlexCM component model, are described in subsections 2.1, 2.2 and 2.3, respectively. Section 3 presents the experimental results and Section 4 presents the conclusions and future works.

2 The Media Processing

The GingaCC is responsible for providing support to GingaNCL and GingaJ applications through a set of fundamental functionalities. One of these basic functionalities is video and audio stream decoding. In the GingaCC, the component responsible for this function is named Media Processing. The current version of the Media Processing has a set of methods that perform the basic functionalities, like video and audio decoding, and some others directly related to them. A high-level description of these functionalities follows:

- Video stream decoding, including, but not limited to, H.264/Advanced Video Coding (AVC), the format used by the SBTVD standard;
- Audio stream decoding, using the Advanced Audio Coding (AAC) defined in MPEG-4 Part 3 and used by the SBTVD standard;
- Basic video and audio stream flow control methods (play, pause and stop);
- Loading, selecting and showing multiple subtitle formats, like SubRip (SRT) and Advanced SubStation Alpha (ASS);
- Providing a set of video and audio stream information, like total length, current reproduction time, resolution, aspect ratio and frame rate in Frames Per Second (FPS);
- Support for taking screenshots and saving them to a pre-determined path, with a small preview shown on the screen;
- Support for streaming video and audio using the Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), User Datagram Protocol (UDP) and Real-time Transport Protocol (RTP).

All Media Processing functions are based on the Java Media Framework (JMF) version 1.0 [12]. The JMF is an Application Programming Interface (API) which specifies a set of methods and use cases to synchronize, control, process and present audio, video, subtitles and other time-based data structures. In the GingaCC, the Media Processing interacts directly with two other components, the Demux and the Graphics, described as follows:

- Demux: the component responsible for separating the various streams present in the Transport Stream and providing them to the correct component for processing. The Media Processing receives audio, video and subtitle streams from the Demux.
- Graphics: the component responsible for showing the video and subtitles on the display. It also has a module to handle the decoded audio stream. The Graphics component receives the decoded streams from the Media Processing and processes them with built-in video and audio output drivers.

Figure 1 illustrates the connections between the Media Processing and the two components directly connected to it, the Demux and the Graphics.

Fig. 1 Interfaces and connections between the Media Processing, Demux and Graphics components


In order to explore the Media Processing component efficiency, we have developed two implementations of this component. The first one is implemented with the libVLC library and offers all the functionalities described above. For testing purposes, the second implementation uses the Xine backend library and an optimized X.Org module to show the resulting decoded video, and only provides the methods to open, decode and play input video and audio streams. Both implementations use the Advanced Linux Sound Architecture (ALSA) [21] to reproduce the audio stream.

2.1 libVLC library

Version 1.1.0 of the libVLC library was used in the implementation of the Media Processing. LibVLC is a multimedia library developed by VideoLAN under the GNU General Public License (GPL) version 2. This library was chosen to implement the Media Processing component due to its large list of features, like:

- Compatibility with various media formats, including the H.264/AVC video standard and the AAC audio standard of the SBTVD;
- Portability to a wide range of operating systems, like Microsoft Windows, GNU/Linux, Mac OS, BeOS and FreeBSD;
- Support for several video outputs, like DirectX, OpenGL, X11, XVideo, SDL and Frame Buffer;
- The library itself is written in the C language, offering high performance for media processing. Bindings for other languages, like Java, are also available;
- An extensive API, covering demuxing, video display and exception handling.

At the highest abstraction level, there are two complementary classes, libVLC_Media_Player and libVLC_Media. The main functions of these classes are explained as follows:

- libVLC_Media_Player: provides methods to control reproduction and to return information about the stream flow. There are also some other functions, like attaching subtitles to the stream;
- libVLC_Media: provides low-level methods that allow controlling the media, like descriptor duplication, computing the length of the media in use and retrieving various meta-information. It can be divided into libVLC_video and libVLC_audio, the lowest-level classes which control video and audio, respectively.
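As a concrete illustration, a minimal playback sketch against the libVLC 1.1 C API (on which these classes are built) might look as follows. The media file name is a placeholder, and error handling is reduced to a null check; a real Media Processing component would receive its streams from the Demux instead:

```cpp
#include <vlc/vlc.h>
#include <chrono>
#include <thread>

// Minimal libVLC playback sketch: create an engine instance, wrap a media
// descriptor (libvlc_media corresponds to the libVLC_Media class above) and
// drive it with a media player (libvlc_media_player).
int main() {
    libvlc_instance_t* vlc = libvlc_new(0, nullptr);
    if (!vlc) return 1;

    libvlc_media_t* media = libvlc_media_new_path(vlc, "benchmark.mp4");
    libvlc_media_player_t* player = libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);  // the player keeps its own reference

    libvlc_media_player_play(player);                     // flow control:
    std::this_thread::sleep_for(std::chrono::seconds(3)); // let it decode
    libvlc_media_player_stop(player);                     // play/stop

    libvlc_media_player_release(player);
    libvlc_release(vlc);
    return 0;
}
```

This mirrors the split described above: the media descriptor carries the stream and its meta-information, while the player object carries the flow control.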

A high-level description of the Media Processing, using the libVLC_Media_Player and libVLC_Media classes, is shown in Fig. 2.

Fig. 2 High-level description of the Media Processing implemented with libVLC library


2.2 Xine library

The Xine library, also known as xine-lib, is a backend library which provides audio/video demuxing and decoding. It was developed by the Xine project under the GNU General Public License (GPL) version 2 as a multi-threaded media player, originally designed to provide easy graphical DVD reproduction. Xine is a powerful library designed to be simple and architecture independent, leaving the front-end details to other modules. The main features of xine-lib are listed below:

- Native support for a large set of video and audio formats, like the H.264/AVC video codec and the AAC audio codec, standards of the SBTVD;
- Portability to all Unix-like operating systems and also to Microsoft Windows;
- Support for several video drivers, like XVideo, XShm, OpenGL, X11, SDL, Frame Buffer and pgx64;
- The library core is developed in the C language; applications in other languages can use xine-lib through its dynamic library.
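For comparison with the libVLC sketch, the corresponding xine-lib call sequence is shown below. The MRL is a placeholder; a "none" visual type is requested for the video driver since the paper's implementation supplies its own X.Org front-end module:

```cpp
#include <xine.h>

// Minimal xine-lib sketch: create and initialize the engine, open audio and
// video output ports (ALSA for audio, matching the paper's setup), then open
// and play a stream. Error handling is reduced to the xine_open() check.
int main() {
    xine_t* engine = xine_new();
    xine_init(engine);

    xine_audio_port_t* ao =
        xine_open_audio_driver(engine, "alsa", nullptr);
    xine_video_port_t* vo =
        xine_open_video_driver(engine, nullptr, XINE_VISUAL_TYPE_NONE, nullptr);

    xine_stream_t* stream = xine_stream_new(engine, ao, vo);
    if (xine_open(stream, "benchmark.mp4"))
        xine_play(stream, 0, 0);  // start position 0, start time 0 ms

    xine_close(stream);
    xine_dispose(stream);
    xine_close_audio_driver(engine, ao);
    xine_close_video_driver(engine, vo);
    xine_exit(engine);
    return 0;
}
```

Note how the front-end (video output) is chosen explicitly by the application, which is exactly the "leaving the front-end details to other modules" design the text describes.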

The Media Processing described in this work uses a basic, optimized X.Org module created for testing purposes. Figure 3 illustrates some of its methods, the input stream and the X.Org module. The version of the Xine library used in the implementation of the Media Processing was 1.1.16.3, and the X.Org version used was 7.5.

2.3 FlexCM component model

The GingaCDN components are being developed using the FlexCM component model [7]. Each FlexCM component must specify the interfaces it requires and, also, the interfaces it provides to other components. FlexCM is responsible for connecting the components at execution time. For each component implementation, one should also specify two files, Architecture and Registry. The first one describes the data for execution, like the path to the dynamic library of each component and a unique identification for the component. The Registry file specifies which connections are used by the component, using the unique identification numbers defined in the component implementation. This methodology aids distributed development and also guarantees an easy integration process. The FlexCM version used in the Media Processing implementations was 0.2.

Fig. 3 High-level description of the Media Processing implemented with Xine library


3 Experimental results

3.1 Methodology

The experiments were performed on two different computer architectures: a desktop named "Computer A" and a notebook named "Computer B". Computer A has an Intel Core 2 Duo E6320 1.86 GHz processor with 2 Gigabytes (GB) of RAM, running the Ubuntu 9.10 operating system. Computer B uses an Intel Core 2 Solo ULV SU3500 1.40 GHz processor with 3 GB of RAM, also running Ubuntu 9.10.

The experimental data were obtained through analysis of the processes, using the Procps tools to capture the data. Each benchmark video was played for 3 min, repeated three times. The memory cost and the processor usage of the Media Processing were captured every second. The performance tests running only the video stream are presented in Section 3.2, while Section 3.3 discusses the performance tests running both video and audio streams. A total of 17,280 samples were analyzed, providing a consistent performance analysis of the Media Processing in real use. The Media Processing components were compiled using the GNU gcc compiler without any optimization. The libVLC and Xine libraries were also compiled with default settings to provide an architecture-independent evaluation.

The benchmark set consists of four progressive videos in three different resolutions: 848×480 (480p), known as Standard Definition (SD), and 1280×720 (720p) and 1920×1080 (1080p), known as High Definition (HD). The STS116 video was obtained from [17], the Taxi3 French video, named Taxi, was obtained from [23], and the Saguaro National Park, named Park, as well as the Space Alone, named Space, were available in [1]. The video details are presented in Table 1 (480p videos), Table 2 (720p videos), and Table 3 (1080p videos). All videos have a 16:9 aspect ratio, MP4 container and 30 FPS. The videos were encoded from the 1080p video sources using the x264 library version 1376 [24].
The H.264 High profile at AVC level 5.1 was used, with a constant bit rate for each resolution. The frame rate of all videos was converted to 30 FPS. Some of the important coding details are listed below; they are all defaults of the High Profile of the x264 encoder at AVC level 5.1:

- Context-Adaptive Binary Arithmetic Coding (CABAC) enabled;
- Deblocking enabled, with strength and threshold set to 0;
- Motion Estimation (ME) with the hexagonal algorithm and range 16, using chroma ME and RDO on all I/P frames for sub-pixel refinement;
- Macroblocks with adaptive Discrete Cosine Transform (DCT) using I4×4, I8×8, P8×8 and B8×8;
- Trellis disabled;
- Three B-frames with bias equal to 0, fast search for adaptive B-frames, and B-pyramid disabled;
- Three reference frames, with Adaptive I-Frame Decision enabled;
- 4:2:0 colorimetry.

Table 1 Details for the 480p video set

Video Name   Size (MB)   Duration (mm:ss)   Video Bitrate (Kbps)
Park         103         5:20               2500
Space        60          3:06               2500
STS116       67          3:31               2500
Taxi         52.2        2:42               2500

Table 2 Details for the 720p video set

Video Name   Size (MB)   Duration (mm:ss)   Video Bitrate (Kbps)
Park         198         5:20               5000
Space        115         3:06               5000
STS116       129         3:31               5000
Taxi         100         2:42               5000

Table 3 Details for the 1080p video set

Video Name   Size (MB)   Duration (mm:ss)   Video Bitrate (Kbps)
Park         390         5:20               10000
Space        226         3:06               10000
STS116       253         3:31               10000
Taxi         196         2:42               10000

The audio stream was encoded using the Nero AAC Codec [18] version 1.3.3.0, with the Low Complexity (LC) profile version 4, which is used in the SBTVD standards for set-top box implementations. The bit rate was fixed at 192 Kbps, and two audio channels (stereo) with a 44.1 kHz sampling rate were used.

3.2 Video reproduction analysis

These experiments show the memory cost and processor usage of the libVLC and Xine implementations of the Media Processing when running only the video stream of the benchmark set.

Fig. 4 Results for processor usage when using Computer A


3.2.1 Computer A

The first experiment shows the efficiency, in terms of CPU usage and memory cost, of the Media Processing on a mid-end desktop. The processor, an Intel Core 2 Duo E6320, consumes around 65.5 W [8]. This can be a reasonable consumption for a set-top box, since it is not a portable or battery-powered device. However, this consumption may be unacceptable for most embedded systems, like cell phones and other mobile devices.

The processor usage, in percentage, achieved by both the Xine and libVLC implementations of the Media Processing, running only the video stream of the benchmarks at different resolutions on Computer A, is shown in Fig. 4. The memory cost, in Megabytes, of the libVLC and Xine implementations running the video stream is presented in Fig. 5.

The processor usage variation between the libVLC and Xine implementations of the Media Processing was 0.30% for 480p videos, 1.14% for 720p videos, and 1.81% for 1080p videos. The memory cost variation between the two implementations was 67.12% for 480p videos, 48.42% for 720p videos, and 34.26% for 1080p videos.

3.2.2 Computer B

The main goal of performing experiments on Computer B was to evaluate the efficiency of the two implemented components on a personal computer with a low-power processor. The Intel Core 2 Solo ULV SU3500 is based on a single-core architecture running at 1.4 GHz, with a nominal power consumption of around 5.5 W [9]. This low consumption is a fundamental characteristic for embedded systems and mobile devices.

Fig. 5 Results for memory cost when using Computer A


Fig. 6 Results for processor usage when using Computer B

The processor usage, in percentage, achieved by both the Xine and libVLC implementations of the Media Processing, running only the video stream of the benchmarks at different resolutions on Computer B, is shown in Fig. 6. The memory cost, in Megabytes, of the libVLC and Xine implementations running the video stream is presented in Fig. 7.

Fig. 7 Results for memory cost when using Computer B


The processor usage variation between the libVLC and Xine implementations of the Media Processing was 1.19% for 480p videos, 1.53% for 720p videos, and 2.48% for 1080p videos. The memory cost variation was 28.98% for 480p videos, 20.86% for 720p videos, and 18.48% for 1080p videos.

3.3 Video and audio reproduction analysis

These experiments show the memory cost and processor usage of the libVLC and Xine implementations of the Media Processing when running both the video and audio streams of the benchmark set.

3.3.1 Computer A

The processor usage, in percentage, achieved by both the Xine and libVLC implementations of the Media Processing, running the benchmarks at different resolutions on Computer A, is shown in Fig. 8. The memory cost, in Megabytes, of the libVLC and Xine implementations running the video and audio streams is presented in Fig. 9.

The processor usage variation between the libVLC and Xine implementations was 6.95% for 480p videos, 9.84% for 720p videos, and 0.57% for 1080p videos. The memory cost variation was 68.21% for 480p videos, 47.60% for 720p videos, and 34.64% for 1080p videos.

Fig. 8 Results for processor usage when using Computer A


Fig. 9 Results for memory cost when using Computer A

3.3.2 Computer B

The processor usage, in percentage, for both the Xine and libVLC implementations of the Media Processing, running the audio and video streams of the benchmarks at different resolutions on Computer B, is shown in Fig. 10. The memory cost, in Megabytes, of the

Fig. 10 Results for processor usage when using Computer B


Fig. 11 Results for memory cost when using Computer B

libVLC and Xine implementations running both streams is presented in Fig. 11.

The processor usage variation between the libVLC and Xine implementations was 8.99% for 480p videos, 6.04% for 720p videos, and 3.43% for 1080p videos. The memory cost variation between the libVLC and Xine implementations of

Fig. 12 Average processor usage of the Media Processing implementations when reproducing the video stream in Computer A


Fig. 13 Average memory cost of the Media Processing implementations when reproducing the video stream in Computer A

the Media Processing was 29.91% for 480p videos, 21.11% for 720p videos, and 23.02% for 1080p videos.

3.4 General results

3.4.1 Video stream reproduction

The following results consider the average values over all video resolutions, obtained from both implementations when analyzing the video stream. Figure 12 presents the average processor usage, in percentage, for the libVLC and Xine implementations obtained using Computer A while reproducing the video

Fig. 14 Average processor usage of the Media Processing implementations when reproducing the video stream in Computer B


Fig. 15 Average memory cost of the Media Processing implementations when reproducing the video stream in Computer B

stream. In Computer A, the variation of the average processor usage was 1.34%. The average memory cost of the libVLC and Xine implementations obtained using Computer A is shown, in Megabytes, in Fig. 13; the variation of the average memory cost was 45.49%.

Figure 14 presents the average processor usage, in percentage, for the libVLC and Xine implementations obtained using Computer B while reproducing the video stream. In Computer B, the variation of the average processor usage was 1.40%. The average memory cost of the libVLC and Xine implementations obtained using Computer B is shown, in Megabytes, in Fig. 15; the variation of the average memory cost was 21.36%.

Fig. 16 Average processor usage of the Media Processing implementations when reproducing both the video and audio streams in Computer A


Fig. 17 Average memory cost of the Media Processing implementations when reproducing both the video and audio streams in Computer A

3.4.2 Video and audio stream reproduction

The following results consider the average values over all video resolutions, obtained from both implementations, when considering both video and audio streams. Figure 16 presents the average processor usage, in percentage, for the libVLC and Xine implementations obtained using Computer A while reproducing both the video and audio streams. In Computer A, the variation of the average processor usage was 3.88%. The average memory cost of the libVLC and Xine implementations obtained using Computer A is shown, in Megabytes, in Fig. 17; the variation of the average memory cost was 45.68%.

Fig. 18 Average processor usage of the Media Processing implementations when reproducing both the video and audio streams in Computer B


Figure 18 presents the average processor usage, in percentage, for the libVLC and Xine implementations obtained using Computer B while reproducing both the audio and video streams. In Computer B, the variation of the average processor usage was 5.48%. The average memory cost of the libVLC and Xine implementations obtained using Computer B is shown, in Megabytes, in Fig. 19; the variation of the average memory cost was 23.87%.

4 Conclusions

This work presented two implementations of the Media Processing component for the GingaCC of the GingaCDN, using the libVLC and Xine libraries. Experiments were performed to evaluate the behavior, in terms of processor usage and memory cost, of the two components running on two different architectures.

The libVLC implementation of the Media Processing presented an average processor usage of 64.14%, considering both computer architectures, when running video-only as well as audio and video streams. The average memory cost of the Media Processing, in this case, was 72.27 MB. The Xine implementation, in turn, presented 66.08% average processor usage and 54.25 MB average memory cost in the same environment.

The experimental results show a better performance for the Media Processing implemented with the libVLC library when considering CPU usage; the processor usage variation between both implementations was 3.02%. However, in terms of memory cost, the implementation using the Xine library presented superior performance, with a 33.21% variation between the libVLC and Xine implementations.

The results present a performance evaluation of the libVLC and Xine implementations of the Media Processing component using default settings and without

Fig. 19 Average memory cost of the Media Processing implementations when reproducing both the video and audio streams in Computer B


any architecture-dependent optimizations, like SSE, SSE2, FPU, MMX, MMX EXT, and 3DNow!.

Although both implementations of the component are similar, the larger memory consumption observed in the Media Processing implementation using the libVLC library is due to the fact that the library itself includes native X11, Frame Buffer and other video output modules. There are also modules for receiving media over the Internet through various protocols, and modules to process and render subtitles in various formats, like SSA. With the Xine library, these components must be implemented by the programmer, reducing the size of the library and, consequently, the memory consumption. The Media Processing implementation using Xine showed a slight increase in processor consumption due to the method used to render the video display: it was necessary to adapt the X.Org module to accept the native output of the Xine library, causing a bottleneck and increasing the processor usage.

Regarding future work, we intend to perform tests using a wider set of videos with other H.264 profiles, using some of the optimizations available to the Media Processing in both libraries, like the SSE instruction set. We also plan to integrate the Media Processing with the other components of the GingaCDN project, resulting in the integrated and modular reference implementation of the Ginga middleware. In addition, a Media Processing implementation using the Compute Unified Device Architecture (CUDA), developed by Nvidia, is planned. This implementation will feature a completely optimized, self-made decoding procedure, able to handle high profile H.264/AVC videos even at 1080p, 1600p or 2048p resolutions.

References

1. Adobe Flash HD Gallery (2008) http://www.adobe.com/products/hdvideo/hdgallery. Accessed 15 August 2009
2. Decree 4901 (2003) http://www.planalto.gov.br/ccivil_03/decreto/2003/d4901.htm. Accessed 20 September 2009
3. Decree 5820 (2006) http://www.planalto.gov.br/ccivil_03/_Ato2004-2006/2006/Decreto/D5820.htm. Accessed 20 July 2009
4. Digital Video Broadcasting Project Office (2009) GEM Specification List. http://www.mhp.org/specs/a139_GEM_122_r4.pdf. Accessed 20 July 2009
5. Ferreira C (2001) Maestro: a middleware to support distributed applications based on software components. Thesis, Federal University of São Paulo
6. Filho G, Leite L, Batista C (2007) GingaJ: the procedural middleware for the Brazilian Digital TV System. J Braz Comput Soc 2:47–56
7. Filho S et al (2007) FlexCM—a component model for adaptive embedded systems. IEEE International Computer Software and Applications Conference 119–126
8. Intel Core 2 Duo Processor E6320 (2010) http://ark.intel.com/Product.aspx?id=29754. Accessed 23 September 2009
9. Intel Core 2 Solo Processor ULV SU3500 (2010) http://ark.intel.com/Product.aspx?id=37133. Accessed 11 September 2009
10. ITU-T AAP Recommendation H.761 (2008) http://www.itu.int/itu-t/aap/AAPRecDetails.aspx?AAPSeqNo=1894. Accessed 22 July 2009
11. Java DTV API 1.0 (2010) http://java.sun.com/javame/technology/javatv. Accessed 20 August 2009
12. JMF 1.0 Programmers guide (2010) http://java.sun.com/javase/technologies/desktop/media/jmf/1.0/guide. Accessed 17 July 2009
13. Leite L et al (2005) FlexTV, a proposal for a middleware architecture for the Brazilian Digital TV System. J Comput Eng Digit Syst 2:30–49
14. LibVLC (2010) http://wiki.videolan.org/Libvlc. Accessed 23 July 2009
15. Middleware GingaJ (2009) http://gingacdn.lavid.ufpb.br/projects/ginga-j. Accessed 12 September 2009
16. Moreno M (2006) A declarative middleware for Interactive Digital TV Systems. Thesis, Pontifical Catholic University of Rio de Janeiro
17. NASA High Definition Video (2010) http://www.nasa.gov/multimedia/hd. Accessed 23 June 2009
18. Nero AAC Codec (2010) http://www.nero.com/eng/technologies-aac-codec.html. Accessed 23 April 2009
19. SET (2000) A comparative study of Digital TV standards. http://www.set.com.br/artigos/testing.pdf. Accessed 20 July 2009
20. Soares L, Rodrigues R, Moreno M (2007) GingaNCL: the declarative environment of the Brazilian Digital TV System. J Braz Comput Soc 1:37–46
21. The ALSA Project (2008) http://www.alsa-project.org. Accessed 12 July 2009
22. The Xine project (2010) http://www.xine-project.org. Accessed 23 July 2009
23. WMV HD Content Showcase (2010) http://www.microsoft.com/windows/windowsmedia/musicandvideo/hdvideo/contentshowcase.aspx. Accessed 23 May 2009
24. x264 (2010) http://www.videolan.org/developers/x264.html. Accessed 06 July 2009

Tiago Henrique Trojahn is a Computer Science undergraduate student at the Federal University of Pelotas. He works in the Architectures and Integrated Circuits Research Group (GACI), and his research interests include Digital TV, middleware, video decoding and applications for Digital TV.


Juliano Lucas Gonçalves received the B.S. degree in Informatics from the State University of Ponta Grossa in 2002 and the M.S. degree from the Federal University of Paraná in 2005. He is currently a research assistant in the Informatics Department at the Federal University of Pelotas, Pelotas, Brazil. He works in the Architectures and Integrated Circuits Research Group (GACI), and his research interests include Digital TV, middleware, video decoding and applications for Digital TV.

Júlio Carlos Balzano de Mattos received the B.S. degree in Computer Science from the Federal University of Pelotas, Pelotas, Brazil, in 1997, and the M.S. and Ph.D. degrees from the Federal University of Rio Grande do Sul, Porto Alegre, Brazil, in 2000 and 2007, respectively. He is currently an Associate Professor in the Informatics Department at the Federal University of Pelotas. His research interests center on embedded systems, with an emphasis on embedded software and embedded architectures. He is a member of the IEEE and the IEEE Circuits and Systems Society, as well as of the Brazilian Computer Society (SBC) and the Brazilian Microelectronics Society (SBMicro).


Luciano Volcan Agostini received the B.S. degree in Computer Science from the Federal University of Pelotas, Pelotas, Brazil, in 1998, and the M.S. and Ph.D. degrees from the Federal University of Rio Grande do Sul, Porto Alegre, Brazil, in 2002 and 2007, respectively. He has been a Professor in the Informatics Department at the Federal University of Pelotas since 2002, where he leads the Architectures and Integrated Circuits Research Group (GACI). He has more than 100 papers published in journals and conference proceedings. His research interests include video coding algorithms, FPGA-based design and microelectronics. Dr. Agostini is a Gold Member of the IEEE and a member of the IEEE Circuits and Systems Society, the Brazilian Computer Society (SBC) and the Brazilian Microelectronics Society (SBMicro). He has participated in several organizing and program committees of international events in the area of circuits and systems design.

Leomar Soares da Rosa Junior received the B.S. degree in Computer Science from the Federal University of Pelotas, Pelotas, Brazil, in 2001, and the M.S. and Ph.D. degrees from the Federal University of Rio Grande do Sul, Porto Alegre, Brazil, in 2003 and 2008, respectively. He is currently an Associate Professor in the Informatics Department at the Federal University of Pelotas. His research interests center on logic synthesis, CAD tools, Digital TV, middleware and applications for Digital TV. He is a member of the ACM, the Brazilian Computer Society (SBC) and the Brazilian Microelectronics Society (SBMicro).
