(IJCNS) International Journal of Computer and Network Security, Vol. 2, No. 2, February 2010


Effect of error concealment methods on UMTS radio network capacity and coverage planning

Dr. Bhumin H. Pathak (1), Dr. Geoff Childs (2) and Dr. Maaruf Ali (3)
(1) Airvana Inc., Chelmsford, USA, [email protected]
(2) School of Technology, Oxford Brookes University, Oxford, UK, [email protected]
(3) School of Technology, Oxford Brookes University, Oxford, UK, [email protected]

Abstract: The radio spectrum is a precious resource and careful utilization of it requires optimization of all the processes involved in data delivery. It is understood that achieving maximum coverage and capacity with the required QoS is of the utmost importance for network operators to maximize revenue generation. Whilst current methods of video compression accelerate transmission by reducing the number of bits to be transmitted over the network, they have the unfortunate tradeoff of increasing signal sensitivity to transmission errors. In this paper we present an approach to transmit MPEG-4 coded video in a way that optimizes the radio resources while maintaining the required received video quality. This approach thereby provides increased capacity for network operators. Two different methods used for this purpose are selective retransmission of erroneous parts in frames and dynamic changes in the reference frames for predictive coding. Network operators can still provide the required video quality with implementation of these two methods despite the low signal-to-interference ratios and can therefore increase the number of users in the network. We show here with the help of simulation results the performance enhancements these methods can provide, for various channel propagation environments. Comparison is performed using a standard PSNR metric and also using the VQMp metric for subjective evaluation.

Keywords: MPEG-4, UMTS, NEWPRED, UEP, network coverage and optimization.

1. Introduction

Future wireless mobile communication is expected to provide a wide range of services, including real-time video with acceptable quality. Unlike wired networks, wireless networks struggle to provide high bandwidth for such services, which therefore require highly compressed video transmission. With high compression comes high sensitivity to transmission errors: highly compressed video bitstreams such as MPEG-4 can lose a considerable amount of information through the introduction of just a few errors. In this context it is important to have a video codec with a repertoire of efficient error-resilience tools. Transmission errors in a mobile environment can vary from single bit errors to large burst errors or even intermittent loss of the connection. The widely varying nature of the wireless channel limits the use of classical forward error correction (FEC) methods, which may require a large amount of redundant data to overcome bursty errors. As an alternative to FEC in this case, an optimized automatic repeat request (ARQ) scheme with selective repeat and dynamic frame referencing provides better performance.

Correcting or compensating for errors in a compressed video stream is complicated by the fact that the bit-energy concentration in the compressed video is not uniformly distributed, especially with motion-compensated interframe predictive codecs such as MPEG-4. The encoded bitstream has a high degree of hierarchical structure and dependencies which impact on error correction techniques. The problem is compounded by the fact that it is not feasible to apply a generalized error-resilience scheme to the video stream, as this may impact on standardized parameters in the different layers of the UMTS protocol stack. For example, when overhead or redundancy is added to the existing video standard for FEC implementation, the modified video stream can become incompatible with the standard, and the resulting data format may not be decodable by all standard video decoders. It is therefore important that any overhead or redundancy be added in a way which does not make the modified video bitstream incompatible with the standard. With reference to the above constraints, it is vital to design an error-protection mechanism that exploits the hierarchical structure of the interframe video compression algorithm, is compatible with the standardized wireless communication system, and preserves the integrity of the standard bitstream. In this paper we present one such scheme, which uses unequal error protection (UEP) in conjunction with ARQ to exploit the different modes of the radio link control (RLC) layer of the UMTS architecture. This scheme is coupled with a dynamic frame referencing (NEWPRED) scheme which exploits frame dependencies and interframe intervals. The techniques discussed in this paper are relevant and applicable to a wide variety of interframe video coding schemes. As an illustrative example, the video compression standard MPEG-4 is used throughout.

The rest of the paper is organized as follows. Section 2 introduces the general principles of the predictive interframe MPEG-4 codec and the frame hierarchies and dependencies involved. Section 3 discusses the general UMTS network and protocol architecture. Section 4 classifies transmission errors into different classes and introduces the UEP (Section 4.1) and NEWPRED (Section 4.2) protection schemes. Section 5 describes the error detection method. Section 6 describes the video clips and video quality measurement techniques used. Section 7 covers upper layer protocol overheads and PDCP header compression. Section 8 introduces the radio propagation environments used in the simulations. Section 9 presents the simulation results and discusses them from the network operators' point of view.

2. Motion-compensated predictive coding

2.1 General Principles
Image and video data compression refers to a process in which the amount of data used to represent images and video is reduced to meet a bit-rate requirement, while the quality of the reconstructed image or video satisfies the requirements of a given application. This must be achieved while keeping the computational complexity affordable for the application and end devices. Statistical analysis of video signals indicates that there is a strong correlation both between successive picture frames and within the picture elements themselves. Theoretically, decorrelation of these signals can lead to bandwidth compression without significantly affecting image or video resolution. Moreover, the insensitivity of the human visual system to the loss of certain spatio-temporal visual information can be exploited for further bitrate reduction. Hence, subjectively lossy compression techniques can be used to reduce video bitrates while maintaining an acceptable video quality. Figure 1 shows the block diagram of a generic interframe video codec.

Figure 1. Generic interframe video codec

In interframe predictive coding, the difference between the pixels in the current frame and their prediction values from the previous frame is coded and transmitted. At the receiving end, after decoding the error signal of each pixel, it is added to the corresponding prediction value to reconstruct the picture. The better the predictor, the smaller the error signal and hence the lower the transmission bit rate. If the video scene is static, a good prediction for the current pixel is the same pixel in the previous frame. However, when there is motion, assuming the movement in the picture is only a shift of object position, a pixel in the previous frame displaced by a motion vector is used instead. Assigning a motion vector to each pixel is very costly; instead, a group of pixels is motion compensated together, so that the motion vector overhead per pixel is very small. In a standard codec a block of 16 × 16 pixels, known as a macroblock (MB), is motion estimated and compensated. It should be noted that motion estimation is only carried out on the luminance part of the picture; a scaled version of the same motion vector is used for compensation of the chrominance blocks, depending on the picture format. Every MB is either interframe or intraframe coded, with the decision depending on the coding technique. Every MB is divided into 8 × 8 luminance and chrominance pixel blocks, and each block is then transformed via the Discrete Cosine Transform (DCT). There are four luminance blocks in each MB, but the number of chrominance blocks depends on the color resolution. After quantization, variable length coding (entropy coding) is applied before the actual channel transmission.
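As a concrete illustration of the transform stage described above, the sketch below applies a textbook 2-D DCT-II and uniform quantization to a single 8 × 8 block. It is a minimal illustration rather than codec code: real MPEG-4 encoders use fast transforms and per-coefficient quantization matrices, and the step size `q=16` here is an arbitrary choice.

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an N x N block (O(N^4); real codecs use fast transforms)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            cu = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
            cv = math.sqrt(1.0 / n) if v == 0 else math.sqrt(2.0 / n)
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = cu * cv * s
    return out

def quantize(coeffs, q=16):
    """Uniform quantization (illustrative; MPEG-4 uses per-coefficient matrices)."""
    return [[round(c / q) for c in row] for row in coeffs]

# A flat 8x8 block: all energy collapses into the DC coefficient.
flat = [[128] * 8 for _ in range(8)]
coeffs = dct2(flat)
q = quantize(coeffs)
# DC term is 8 * 128 = 1024; every AC term is (near) zero, which is why
# transform coding compresses smooth image areas so effectively.
```

After quantization the mostly-zero coefficient block is what makes the subsequent entropy coding stage so effective.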

2.2 Frame types
An entire video frame sequence is divided into Groups of Pictures (GOPs) to assist random access into the frame sequence and to add better error resilience. The first coded frame in the group is an I-frame, followed by an arrangement of P- and B-frames. The GOP length is normally defined as the distance between two consecutive I-frames, as shown in Figure 2. An Intra-coded (I) frame is coded using information only from itself. A Predictive-coded (P) frame is coded using motion compensated prediction from a past reference frame. A Bidirectionally predictive-coded (B) frame is coded using motion and texture compensated prediction from past and future reference frames.

Figure 2. Frame structures in a generic interframe video codec

It is important to note that the frame interval between two consecutive I-frames, and between two consecutive P-frames, has a significant effect on the received video quality as well as on the transmission bitrate. It is in fact a tradeoff between error-resilience capability and the required operational bitrate. A major disadvantage of this coding scheme is that transmission errors occurring in a frame which is used as a reference frame for other P- or B-frames cause errors to propagate into the following video sequence. This propagation continues until an intra refresh is applied. The presented work exploits this hierarchy of frame types. It should be clear from the above discussion that the I-frame, which acts as a reference frame for the entire GOP, needs the strongest protection from transmission errors, with weaker protection sufficing for P-frames. Since no other frame depends on B-frames, errors occurring in B-frames affect just a single frame and do not propagate into the video sequence.
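The dependency argument above can be made concrete with a small sketch. The GOP pattern and the forward-only propagation model below are simplifying assumptions (real B-frames also reference a future anchor), but they capture why errors in I- or P-frames are far more damaging than errors in B-frames.

```python
def corrupted_frames(gop, error_idx):
    """Frames rendered corrupted by an error in frame `error_idx` of one GOP.

    Simplified dependency model: each P-frame references the previous
    anchor (I or P), so an error in an anchor frame taints every later
    frame in the GOP, while an error in a B-frame taints only itself.
    """
    bad = {error_idx}
    if gop[error_idx] in "IP":                # anchors propagate forward
        bad.update(range(error_idx, len(gop)))
    return sorted(bad)

gop = "IBBPBBPBBP"                            # one illustrative GOP pattern
print(corrupted_frames(gop, 3))  # error in the first P taints frames 3..9
print(corrupted_frames(gop, 1))  # error in a B-frame taints only itself
```

Running this for every position in the GOP reproduces the Class-A/B/C ordering used later in the paper: I-frame errors are the most damaging, P-frame errors intermediate, and B-frame errors the least.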

3. UMTS network and protocol architecture

3.1 Brief introduction
UMTS is a very complex communication system with a wide range of protocols operating at the different layers of its network architecture. UMTS has been designed to support a wide range of applications with different Quality of Service (QoS) profiles. It is essential to understand the overall architecture of the system before considering the details of transmission quality for a given application. UMTS can be broadly divided into two major functional groupings: the Access Stratum (AS) and the Non-Access Stratum (NAS). The AS is the functional grouping of protocols specific to the access technique, while the NAS addresses different aspects of the different types of network connection. The Radio Access Bearer (RAB) is a service provided by the AS to the NAS in order to transfer data between the user equipment (UE) and the core network (CN). It uses the radio interface protocols at the Uu interface, which are layered into three major parts as shown in Figure 3.

The radio interface protocols are needed to set up, reconfigure and release the RAB. The radio interface is divided into three protocol layers:
• Physical layer (L1)
• Data link layer (L2)
• Network layer (L3)
The data link layer is split further into the Medium Access Control (MAC), Radio Link Control (RLC), Packet Data Convergence Protocol (PDCP) and Broadcast/Multicast Control (BMC) sub-layers. Layer 3 and the RLC are divided into Control (C) and User (U) planes, as shown in Figure 3.

Figure 3. Radio interface protocol structure for UMTS

The service access points (SAPs) between the MAC and the physical layer are provided by the transport channels (TrCHs), while the SAPs between the RLC and MAC sub-layers are provided by the logical channels (LcCHs). We will explain the RLC in further detail, as the services provided by the different RLC modes are exploited in order to provide different levels of UEP for video transmission.

3.2 RLC modes
The services provided by the RLC sub-layer include Transparent Mode (TM) data transfer, Unacknowledged Mode (UM) data transfer, Acknowledged Mode (AM) data transfer, maintenance of the QoS defined by the upper layers, and notification of unrecoverable errors. TM data transfer transmits upper layer PDUs without adding any protocol information, possibly including segmentation/reassembly functionality; it ignores any errors in received PDUs and simply passes them on to the upper layer for further processing. UM data transfer transmits upper layer PDUs without guaranteeing delivery to the peer entity; PDUs received with errors are discarded without any notice to the upper layer or the peer entity. AM data transfer transmits upper layer PDUs with guaranteed delivery to the peer entity, with error-free delivery ensured by means of retransmission; both in-sequence and out-of-sequence delivery are supported. As mentioned before, in the presented scheme unequal error protection is applied to the different types of video frame transmitted: the I-frame, which carries the most crucial data, is always transmitted using the AM data transfer service to ensure error-free delivery.
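A minimal sketch of the frame-to-mode mapping just described: I-frames ride on AM (retransmitted until delivered or a retry cap is reached), while P- and B-frames ride on TM (no retransmission). The loss probability, retry limit, random seed and GOP pattern are illustrative assumptions, not values from the paper.

```python
import random

RLC_MODE = {"I": "AM", "P": "TM", "B": "TM"}   # the paper's UEP mapping

def transmit(frame_type, loss_prob, rng, max_retx=8):
    """Return True if the frame arrives intact under its RLC mode.

    AM (acknowledged mode) retransmits erroneous PDUs up to `max_retx`
    times; TM (transparent mode) delivers whatever arrives, errors
    included. Whole frames stand in for PDUs here for simplicity.
    """
    attempts = 1 + (max_retx if RLC_MODE[frame_type] == "AM" else 0)
    for _ in range(attempts):
        if rng.random() > loss_prob:
            return True                        # received error-free
    return False                               # TM: corrupted; AM: gave up

rng = random.Random(42)
frames = ("I" + "P" * 9) * 50                  # 50 GOPs of length 10
ok = {"I": 0, "P": 0}
for f in frames:
    ok[f] += transmit(f, loss_prob=0.2, rng=rng)
# Virtually all I-frames survive thanks to AM retransmission, while
# roughly 20% of the TM-carried P-frames are lost.
print(ok)
```

This is exactly the asymmetry the UEP scheme relies on: radio resources are spent on retransmitting only the frames whose loss would corrupt the entire GOP.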

4. Error classification

From the discussion of the hierarchical structure of interframe predictive coding in MPEG-4 it is clear that different parts of the bitstream have different levels of importance. We have classified errors into three main classes based on their impact on the coded hierarchy:
• Most critical errors – Class-A errors
• Less critical errors – Class-B errors
• Least critical errors – Class-C errors
Class-A errors are those which can significantly degrade the received video quality. This class includes errors introduced in the video sequence header, I-frame headers and I-frame data. Class-B includes errors in P-frame headers and P-frame data, while Class-C includes errors in B-frame headers and B-frame data. In the following sections we discuss two different methods which deal with Class-A and Class-B errors separately.

4.1 Unequal Error Protection (UEP) for Class-A errors
As mentioned before, the I-frame is relatively more important than the P- and B-frames. Hence I-frame data needs to be transmitted with higher protection using the proposed UEP scheme. This is achieved by using a different mode of transmission at the RLC layer of UMTS: the I-frames are transmitted using AM while the other frames are transmitted using TM. If any error is introduced during radio transmission, the affected PDU of the I-frame is retransmitted using AM. This guarantees error-free delivery of the I-frames.

4.2 NEWPRED for Class-B errors
The MPEG-4 ISO/IEC 14496 (Part 2) standard provides error robustness and resilience capabilities to allow access to image or video information over a wide range of storage and transmission media. The error resilience tools developed for this part of ISO/IEC 14496 can be divided into three major categories: synchronization, data recovery and error concealment. The NEWPRED feature falls into the category of error concealment procedures. Recovery from temporal error propagation is an indispensable component of any error-robust video communication system. Errors introduced during transmission can lead to frame mismatch between the encoder and the decoder, which can persist until the next intra refresh occurs. Where an upstream data channel exists from the decoder to the encoder, NEWPRED or demand intra refresh can be used. NEWPRED is a technique in which the reference frame for interframe coding is replaced adaptively according to upstream messaging from the decoder. NEWPRED uses upstream messages to indicate which segments were erroneously decoded. On receipt of such a message the encoder subsequently uses only the correctly decoded part of the prediction in the inter-frame coding scheme. This prevents temporal error propagation without the insertion of intra-coded macroblocks (MBs) and improves the video quality in noisy multipath environments. The following paragraphs explain the concept in more detail.

Figure 4. Error propagation due to interframe decoding dependencies

When a raw video sequence is encoded with MPEG-4, each raw video frame is categorized according to the way in which predictive encoding references are used. An Intra-coded (I) frame is coded using information only from itself. A Predictive-coded (P) frame is coded using motion compensated prediction from a past reference frame, while a Bidirectionally predictive-coded (B) frame is coded using motion and texture compensated prediction from past and future reference frames. A disadvantage of this coding scheme is that transmission errors occurring in a frame which is used as a reference for other P- or B-frames propagate into the video sequence. This propagation continues until an intra refresh is applied. In the example shown in Figure 4, an error occurs in frame P3, which acts as a reference frame for P4, subsequent P-frames and B-frames (B5, B6, etc.), until the next intra-refresh frame (I2) occurs. Where the transmission error has damaged crucial parts of the bitstream, such as a frame header, the decoder may be unable to decode the frame, which it then drops. If this dropped frame is a P-frame, none of the frames subsequently coded with reference to it can be decoded, so in effect all subsequent frames until the next intra refresh are dropped. This situation can seriously degrade the received video quality, often leading to long sequences of static or frozen frames. If, through the use of an upstream message, the encoder is made aware of errors in a particular P-frame (P3), the encoder can change the reference frame for the next P-frame (P4) to the previous one which was received correctly (P2). P-frames and B-frames after P4 then refer to the correctly decoded P4 rather than the faulty P3. The technique therefore reduces both error propagation and the frame loss arising from dropped P-frames, and can significantly improve the received video quality. To implement the NEWPRED feature, both the encoder and decoder need buffer memories for the reference frames; the required buffer memory depends on the encoder's reference frame selection strategy and on the transmission delay between encoder and decoder. In this paper results for two different schemes are reported.
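The effect of NEWPRED on error propagation can be sketched with a toy single-error model. The `feedback_delay` parameter and the assumption that exactly one P-frame is hit are illustrative simplifications; they are not taken from the paper's two reported schemes.

```python
def frames_lost(num_p, err_idx, newpred, feedback_delay=1):
    """Count corrupted frames in a predicted run P0..P(n-1) after an
    error in P[err_idx].

    Without NEWPRED the error propagates to every following P-frame
    until the next intra refresh (end of the run here). With NEWPRED,
    once the encoder receives the decoder's upstream report
    (`feedback_delay` frames later) it re-anchors prediction on the last
    correctly decoded frame, stopping the propagation.
    """
    if not newpred:
        return num_p - err_idx                 # err_idx..end all corrupted
    return min(1 + feedback_delay, num_p - err_idx)

# Error in P3 of a 10-frame predicted run (cf. Figure 4):
print(frames_lost(10, 3, newpred=False))  # 7 frames corrupted
print(frames_lost(10, 3, newpred=True))   # 2 frames corrupted
```

The gap between the two numbers grows with the distance from the error to the next intra refresh, which is why NEWPRED pays off most for long GOPs.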

5. Error detection

To implement NEWPRED it is important to identify errors at the frame level at the decoding end. Mapping errors identified by the lower layers to the application layer with the precision of a single video frame or video packet is often a complicated process which consumes considerable processing resources and introduces severe processing delays. Insertion of CRC bits into the standard MPEG-4 bitstream at frame level provides a simpler solution to this problem. Inserting extra bits which are not defined as part of the standard encoded video sequence would normally produce an incompatible bitstream which standard decoders would not be able to decode. However, this is not the case if the bits are inserted at a particular place in the standard MPEG-4 bitstream. While decoding, the decoder is aware of the total number of macroblocks (MBs) in each frame. It starts searching for a new video frame header after decoding these macroblocks, and ignores everything between the last macroblock of the frame and the next frame header as padding. If the generated CRC bits are inserted at this point (after the last macroblock and before the next header), as shown in Figure 5, the compatibility of the bitstream with standard MPEG-4 is preserved. Such insertion of CRC bits does not affect the normal operation of any standard MPEG-4 decoder. Also, because the inserted CRC consists of only 16 bits, generated using the polynomial G16 defined for the MAC layer of the UMTS architecture, it cannot emulate any start code sequence.

Figure 5. CRC insertion

This method adds an extra 16 bits of overhead to each frame, but the performance improvement in video quality with the NEWPRED implementation coupled with CRC error detection justifies this overhead. The discussion in the previous sections described the proposed error-protection methods for the different parts of the video stream. The following sections describe the simulation scenarios used to test the implementation of these methods. The entire end-to-end transmission and reception system is developed using different simulation tools, ranging from simple C programs to SPW 4.2 by CoWare and OPNET 10.5A.
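The frame-level CRC described above can be sketched as follows. The 16-bit polynomial D^16 + D^12 + D^5 + 1 (0x1021) matches the gCRC16 polynomial defined in the UMTS specifications, but the register initialization and byte ordering below are illustrative choices rather than the paper's exact attachment procedure.

```python
def crc16_ccitt(data: bytes, poly=0x1021, init=0x0000) -> int:
    """Bitwise CRC-16 using the polynomial D^16 + D^12 + D^5 + 1 (0x1021).

    MSB-first, zero initial register (the CRC-16/XMODEM variant); the
    exact 3GPP attachment procedure differs in such details.
    """
    reg = init
    for byte in data:
        reg ^= byte << 8
        for _ in range(8):
            reg = ((reg << 1) ^ poly) if reg & 0x8000 else (reg << 1)
            reg &= 0xFFFF
    return reg

def append_frame_crc(frame: bytes) -> bytes:
    """Append the 16-bit CRC after the frame's last macroblock, where a
    standard decoder treats it as stuffing before the next start code."""
    return frame + crc16_ccitt(frame).to_bytes(2, "big")

# Hypothetical frame: a VOP start-code prefix followed by payload bytes.
frame = b"\x00\x00\x01\xb6" + b"macroblock data"
protected = append_frame_crc(frame)
assert len(protected) == len(frame) + 2
# Receiver recomputes the CRC over the payload and compares:
payload, rx_crc = protected[:-2], int.from_bytes(protected[-2:], "big")
assert crc16_ccitt(payload) == rx_crc
```

A decoder-side check is thus a single CRC recomputation per frame, which is what makes this far cheaper than mapping lower-layer error reports up to individual video frames.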

6. Video clips and video quality measurement techniques used
For evaluation purposes three standard video test sequences were used: Mother-Daughter, Highway and Foreman. Each clip is 650 frames long at QCIF (176 × 144) resolution and is encoded with the standard MPEG-4 codec at 10 fps. The objective video quality is measured by the PSNR (Peak Signal-to-Noise Ratio) as defined by ANSI T1.801.03-1996. A more sophisticated model, developed and tested by the ITS (Institute for Telecommunication Sciences), is the Video Quality Metric (VQM). VQM has been extensively tested on subjective data sets and has proven correlation with subjective assessment. One of the models developed as part of this utility is the Peak-Signal-to-Noise-Ratio VQM (VQMP). This model is optimized for use on low bit-rate channels and is used in this paper for near-subjective analysis of the received video sequences. VQMP is given by:

VQMP = 1 / (1 + e^(0.1701 × (PSNR − 25.6675)))

The higher the PSNR value, the better the objective quality, whilst the lower the VQMP value, the better the subjective quality of the video sequence.
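The mapping from PSNR to VQMP is easy to sanity-check numerically. The sketch below implements the formula above together with the standard 8-bit PSNR definition; the example MSE value is an arbitrary illustration.

```python
import math

def psnr(mse, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit video, given the MSE
    between the received and the original frame."""
    return 10.0 * math.log10(peak * peak / mse)

def vqm_p(psnr_db):
    """PSNR-based VQM model used in the paper (lower = better subjectively)."""
    return 1.0 / (1.0 + math.exp(0.1701 * (psnr_db - 25.6675)))

# The logistic maps PSNR onto (0, 1): at PSNR = 25.6675 dB, VQMp = 0.5;
# higher PSNR drives VQMp toward 0 (better perceived quality).
print(round(vqm_p(25.6675), 3))   # 0.5
print(round(vqm_p(35.0), 3))
print(round(vqm_p(20.0), 3))
```

The saturating shape is the point of the model: very high or very low PSNR values map to nearly identical subjective scores, matching the way viewers rate quality.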


7. Upper layer protocol overheads and PDCP header compression
Each lower layer defined in the UMTS protocol stack provides services to the upper layer at defined Service Access Points (SAPs). These protocols add a header to the video frame payload to exchange information with their peer entities. Depending upon the protocol configuration and the size of the video frames, these headers can be attached to each video frame, or multiple video frames can be carried as a single payload as defined by RFC 3016. As successive headers contain a large amount of redundant data, header compression is applied in the form of the Packet Data Convergence Protocol (PDCP). With PDCP compression, higher layer protocol headers such as the RTP, UDP and IP headers are compressed into one single PDCP header. In the presented simulation, header attachment, compression and removal were implemented in the C programming language. Figure 6 shows a typical structure of the video frame payload and header before it is submitted to the RLC layer for further processing.
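The savings from collapsing the RTP/UDP/IP headers into a single PDCP header can be sketched with simple arithmetic. The 40-byte uncompressed figure follows from the standard fixed header sizes; the 2-byte compressed header and the 500-byte payload are assumed illustrative values, since real PDCP compression varies per packet and context state.

```python
# Illustrative per-packet overhead with and without PDCP header compression.
# Fixed header sizes: IPv4 = 20 B, UDP = 8 B, RTP = 12 B (fixed part).
IP, UDP, RTP = 20, 8, 12
UNCOMPRESSED = IP + UDP + RTP           # 40 bytes of headers per packet
COMPRESSED = 2                          # assumed steady-state PDCP header

def overhead_ratio(payload_bytes, header_bytes):
    """Fraction of the transmitted packet spent on headers."""
    return header_bytes / (payload_bytes + header_bytes)

payload = 500                           # e.g. one small video frame
print(round(overhead_ratio(payload, UNCOMPRESSED), 3))  # 0.074
print(round(overhead_ratio(payload, COMPRESSED), 3))    # 0.004
```

On a 64 kbps downlink channel this order-of-magnitude reduction in header overhead translates directly into bits available for video payload.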

Figure 6. PDCP header compression

Once PDCP compression has been applied, the packet is submitted to the RLC layer for further processing. The RLC layer can be configured in any one of the transparent, unacknowledged or acknowledged modes of transmission. The RLC then submits the PDU to the lower layers, where the MAC layer and physical layer procedures are applied as appropriate. In the presented simulation, each video frame coming from the application layer is mapped to the RTP layer. Each frame is mapped into a separate RTP packet after addition of the above-mentioned RTP, UDP and IP headers with PDCP compression and, if required, RLC headers. For the TM RLC service, an RLC SDU cannot be larger than the RLC PDU size. For this reason, if the video frame size is larger than the RLC payload size for TM, the frame is fragmented across different RTP packets, and the other protocol headers are then added to each of these packets separately. Each RLC SDU is then either the same size as the RLC PDU or smaller. For the AM RLC service, an entire frame can be treated as one RLC SDU regardless of its size, so protocol headers are added once per video frame; this RLC SDU is then fragmented into different RLC PDUs at the RLC layer.

Once these PDUs are mapped onto the different transport channels, they are transmitted over the WCDMA air interface. The physical layer of WCDMA is simulated using the SPW tool by CoWare, which models an environment generating error patterns for the various channel propagation conditions defined by the 3GPP standards. A 64 kbps downlink data channel and a 2.5 kbps control channel were used for this UMTS simulation. These two channels were multiplexed and transmitted over the WCDMA air interface. The transmission time interval, transport block size, transport block set size, CRC attachment, channel coding, rate matching and interleaving parameters were configured for both channels in compliance with the 3GPP TS 34.108 specification, which presents the typical parameter sets for reference RABs (Radio Access Bearers) and SRBs (Signalling Radio Bearers) and the relevant combinations of them. The different channel propagation conditions used in the simulation were static, multi-path fading, moving and birth-death propagation. These channel conditions are described in some detail in the following section.

8. Propagation conditions

Four different standardized propagation conditions (static, multi-path fading, moving and birth-death) were used to generate different error patterns. The typical parameter sets for conformance testing given in 3GPP TS 25.101 were used for the radio interface configuration. The parameters common to all environments are listed in Table 1, while parameters specific to an environment are given in the respective sections.

Table 1. Common set of parameters
Interference: −60 dB
Received signal / Noise (SNR): −3.0 dB
AWGN noise: 4 × 10^-9 watts
Eb/No (overall): 6.01 dB
BER (Bit Error Rate): 0.001
Data Rate (Downlink): 64 kbps

8.1 Static propagation condition
As defined in 3GPP TS 25.101 V6.3.0, the propagation condition for a static performance measurement is an Additive White Gaussian Noise (AWGN) environment. No fading or multipath exists in this propagation model. Table 2 lists the received values of BLER and FER for this propagation condition.

Table 2. Parameters for static propagation conditions
BLER (Block Error Rate): 0.034
FER (Frame Error Rate): 0.0923

8.2 Multi-path fading propagation conditions
Multi-path fading normally follows a Rayleigh fading pattern. In this simulation Case 2 of TS 25.101 is used with frequency band 1 (2112.5 MHz). The number of paths is set to 3, with relative delays of 0, 976 and 20000 ns and a mean power of 0 dB for all three paths. The delay model used in this case is fixed, and the vehicle speed is configured to be 3 km/h. The received values of BLER and FER are given in Table 3.

Table 3. Parameters for multi-path fading propagation conditions
BLER (Block Error Rate): 0.007
FER (Frame Error Rate): 0.0225

8.3 Moving propagation conditions
The dynamic propagation condition for this environment, for the test of baseband performance, is a non-fading channel model with two taps, as described in 3GPP TS 25.101. One tap, Path-0, is static, while the other, Path-1, is moving. Both taps have equal strengths and phases, but an unequal time difference exists between them. The received values of BLER and FER are given in Table 4.

Table 4. Parameters for moving propagation conditions
BLER (Block Error Rate): 0.031
FER (Frame Error Rate): 0.088

8.4 Birth-Death propagation conditions
These conditions are similar to the moving propagation condition except that in this case both taps are moving. The positions of the paths appear randomly and are selected with equal probability. Table 5 lists the received values of BLER and FER for this propagation condition.

Table 5. Parameters for birth-death propagation conditions
BLER (Block Error Rate): 0.037
FER (Frame Error Rate): 0.0851

The generated error patterns are applied to the data transmitted from the RLC layer. The different RLC modes are simulated using the C programming language.

9. Simulation Results
As mentioned before, VQMP is used as the quality measurement metric in this simulation. Table 6 presents the VQMP values obtained during the simulations. The following conventions are used in Table 6.
Video clip names:
- Mother and Daughter – MD
- Highway – HW
- Foreman – FM
VQMP without UEP and without NEWPRED – Results A
VQMP with UEP and without NEWPRED – Results B
VQMP without UEP and with NEWPRED – Results C
VQMP with UEP and with NEWPRED – Results D
Note: the lower the VQMP value, the better the subjective image quality.

Table 6. Simulation results
Video Clip | Results A | Results B | Results C | Results D
Static Environment
MD | 0.70 | 0.46 | 0.45 | 0.25
HW | 0.64 | 0.36 | 0.32 | 0.19
FM | 0.85 | 0.83 | 0.74 | 0.68
Multi-path Environment
MD | 0.27 | 0.18 | 0.18 | 0.16
HW | 0.37 | 0.21 | 0.18 | 0.13
FM | 0.62 | 0.53 | 0.47 | 0.34
Moving Environment
MD | 0.63 | 0.45 | 0.44 | 0.23
HW | 0.55 | 0.35 | 0.31 | 0.26
FM | 0.83 | 0.70 | 0.67 | 0.52
Birth-Death Environment
MD | 0.58 | 0.32 | 0.35 | 0.27
HW | 0.49 | 0.40 | 0.31 | 0.22
FM | 0.85 | 0.72 | 0.68 | 0.45

The following observations can be made on the above results. The received VQMp values for all video sequences are improved by implementation of the UEP and NEWPRED methods. In most cases the VQMp values show greater improvement with the NEWPRED method than with the UEP method. Errors in the wireless environment occur in bursts and are random in nature; due to the time-varying nature of the wireless channel it is not possible to predict the exact location of an error. The NEWPRED method protects the P-frames, while UEP is aimed at protecting the I-frames. As P-frames are much more frequent than I-frames in the encoded video sequence, they are more susceptible to these bursty, randomly distributed errors. This explains why the NEWPRED implementation gives better performance than the UEP implementation. Combined implementation of the UEP and NEWPRED methods always outperforms either method alone and gives the best received video quality. The same observation holds in most cases when performance is assessed using the VQMp score. Objective assessment of such scores is highly dependent on bit-by-bit comparison, which does not take the number of frozen frames into consideration; in most cases frozen frames correlate better with the original frames than frames carrying propagation errors do.

10. Network capacity and coverage improvements

As shown by the simulation results above, implementation of UEP and NEWPRED enhances the received video quality. This implies that the same video quality, or QoS, can be attained at a lower Eb/No if the two proposed methods are utilized. A reduced Eb/No value has a direct impact on radio network coverage and capacity planning for UMTS networks [15], [16]. A number of parameters determine the resultant received video quality: the nature of the video clip, the encoding parameters, the channel propagation environment and many other variables all affect the overall quality. For these reasons it is not easy to quantify exactly how much reduction in the Eb/No value would be achieved using the techniques described above. For the purpose of this simulation a reduction of 1 dB is assumed, and the network capacity and coverage enhancements are quantified assuming a 1 dB reduction in the required Eb/No value for the video service.

The basic radio network used for this simulation is shown in the following figure. 10 Node-B sites, each with three sectors, are simulated. 2000 UEs are randomly distributed over an area of 6 × 6 km². The UMTS pedestrian environment for the micro cell is selected as the propagation model, and the mobiles are assumed to be moving at a speed of 3 km/h. A downlink bit-rate of 64 kbps is selected; the other link-layer parameters and results are imported from the simulations discussed above.

In the first run, an Eb/No value of 4.6 dB is used and the total throughput per cell is shown in Figure 7. In the second run, the Eb/No value is decreased by 1 dB to 3.6 dB and the total throughput per cell is shown in Figure 8.

Figure 7. Throughput in downlink per cell at Eb/No = 4.6 dB
Figure 8. Throughput in downlink per cell at Eb/No = 3.6 dB

As can be seen by comparing Figure 7 and Figure 8, decreasing the Eb/No value by 1 dB results in a significant increase in the total throughput per cell in the downlink direction.

11. Conclusions

As can be seen from the simulation results, implementation of UEP and NEWPRED results in significant improvements in the received video quality. The improvements achieved using these methods provide extra margin for network operators to increase capacity. With these error concealment methods the same video quality can be achieved at a lower Eb/No, which in turn gives network operators the flexibility to increase the number of users. The implementation requires some processing overhead on both the encoder and decoder sides but, considering the increasing processing power of mobile stations, this should not present a major obstacle, and the error concealment methods described should provide considerable enhancements.

References:
[1] International Standard ISO/IEC 14496-2: Information Technology - Coding of audio-visual objects - Part 2: Visual, International Organization for Standardization, 2001.
[2] 3GPP, Technical Specification Group Radio Access Network; Radio Link Control (RLC) protocol specification; 3GPP TS 25.322, V4.12.0.
[3] Pereira, F., Ebrahimi, T., The MPEG-4 Book, Prentice Hall PTR (July 20, 2002), ISBN-10: 0130616214.


[4] 3GPP, Technical Specification Group Radio Access Network; Radio interface protocol architecture; 3GPP TS 25.301 (2002-09), Ver 5.2.0.


[5] 3GPP, Technical Specification Group Services and System Aspects; Quality of Service (QoS) concept and architecture; 3GPP TS 23.107, V4.0.0.
[6] 3GPP, Technical Specification Group Radio Access Network; MAC protocol specification; 3GPP TS 25.321, V4.0.0.
[7] 3GPP, Technical Specification Group Radio Access Network; PDCP protocol specification; 3GPP TS 25.323 (2003-12), Ver 6.0.0.
[8] 3GPP, Technical Specification Group Radio Access Network; Broadcast/Multicast Control (BMC); 3GPP TS 25.324, V4.0.0.
[9] Worrall, S., Sadka, A., Sweeney, P., Kondoz, A., Backward compatible user defined data insertion into MPEG-4 bitstream, IEE Electronics Letters.
[10] ATIS Technical Report T1.TR.74-2001: Objective Video Quality Measurement using a Peak-Signal-to-Noise Ratio (PSNR) Full Reference Technique, October 2001, Alliance for Telecommunications Industry Solutions.
[11] Wolf, S., Pinson, M., Video Quality Measurement Techniques, June 2002, ITS, NTIA Report 02-392.
[12] RFC 3016, RTP Payload Format for MPEG-4 Audio/Visual Streams, November 2000.
[13] 3GPP, Technical Specification Group Terminals; Common test environment for UE conformance testing; 3GPP TS 34.108 (2003-12), Ver 4.9.0.
[14] 3GPP, Technical Specification Group Radio Access Network; UE radio transmission and reception (FDD); 3GPP TS 25.101 (2003-12), Ver 6.3.0.
[15] Laiho, J., Wacker, A., Novosad, T., Radio Network Planning and Optimization for UMTS, Wiley, 2nd edition (December 13, 2001).
[16] Holma, H., Toskala, A., WCDMA for UMTS: Radio Access for Third Generation Mobile Communications, Wiley, 3rd edition (June 21, 2000).

Dr. Bhumin Pathak received his M.Sc. and Ph.D. degrees from Oxford Brookes University, Oxford, UK. He has been working as a Systems Engineer at Airvana Inc. since March 2007. Dr. Geoff Childs is a Principal Lecturer at the School of Technology at Oxford Brookes University, Oxford, UK. Dr. Maaruf Ali is a Senior Lecturer at the School of Technology at Oxford Brookes University, Oxford, UK.
