Dynamic QoS Management for Multimedia Real-Time Transmission in Industrial Environments

L. Almeida, R. Marau, P. Pedreiras
DETI/IEETA-LSE, Universidade de Aveiro, 3810-193 Aveiro, Portugal
{lda,marau,pedreiras}@det.ua.pt
J. Silvestre
ITI/DISCA-EPSA, Technical University of Valencia (UPV), Fdiz. y Carbonell 2, Alcoy 03801, Spain
[email protected]

Abstract

The use of multimedia within industrial applications has become commonplace, targeting improved process monitoring and machine vision. In both cases, the multimedia information is commonly distributed and subject to time constraints that must be met across networks while avoiding intolerable interference with typical control flows. This can be achieved with Constant Bit-Rate (CBR) channels, typically supported by real-time protocols. However, the compressors used to reduce the amount of information to transfer generate Variable Bit-Rate (VBR) patterns. Adapting a VBR source to a CBR channel requires specific care in order to avoid wasting channel bandwidth or dropping video frames. This paper focuses on MJPEG transmission over Ethernet and proposes a bidimensional adaptation using the JPEG quantification factor q on the source side and the frame acquisition/transmission period T on the source and network sides, respectively, using the FTT-SE protocol and its support for dynamic QoS management. The paper also shows several experiments with pre-recorded video streams that illustrate the advantages of the proposed approach.
1. Introduction

Recent advances in processor and network performance have consistently enlarged the applicability range of multimedia-based applications, which now include industrial uses like machine vision, automated inspection, object tracking, and vehicle guidance [10, 23]. One typical property of multimedia information, with impact on both transmission and storage, is its bandwidth variability, caused by the use of compressors that generate a Variable Bit-Rate (VBR) information stream. However, guaranteeing the desired levels of Quality-of-Service (QoS) is difficult for this kind of pattern because of frequent changes in the level of resources required by each individual stream, such as variations in network or CPU bandwidth, thus leading to
interference between different streams or even between each stream and other activities carried out in the system. This difficulty has become particularly challenging with the recent emergence of industrial distributed multimedia applications, which typically impose reliability and timeliness requirements that cannot be fulfilled by standard network protocols [28] due to their lack of temporal isolation, leading to unbounded mutual interference between streams. To achieve adequate QoS guarantees, real-time networks must be used, which offer Constant Bit-Rate (CBR) communication channels with reserved bandwidth, e.g., ATM, PROFINET-IRT, ControlNet, Interbus, or Flexible Time-Triggered (FTT) networks [6]. However, matching a VBR source to a CBR channel is not trivial and may lead to wasted bandwidth or rejection of video frames. This problem is even more severe when several streams are transmitted over the same network.

In this paper we propose taking advantage of the dynamic QoS management features of the FTT over Switched Ethernet protocol (FTT-SE) to carry out this adaptation with MJPEG video streams. In particular, we propose managing in an integrated way both the compression parameters and the frame acquisition period, which together determine the encoded bit rate, which in turn must fit strictly within the network bandwidth allocated to each stream. In addition, the channel bandwidth, by means of the frame transmission period, is adapted on-line according to the current needs of the whole set of channels supported by the system. These needs take into consideration the setup and tear-down of channels as well as structural changes within each stream. This approach facilitates the compression procedure and attempts to maximize the QoS level provided by each channel considering the current total bandwidth requirements, thus providing an efficient way to share the network bandwidth among different streams.

The paper is organized as follows. After the related work in Section 2, the following section presents the system architecture considered in this paper. Section 4 addresses the QoS management aspects in multimedia, covering the so-called R(q) model, content scaling,
Table 1. Main properties of encoders [16]

Property              MJPEG   MJPEG2000       MPEG4 (*)
Motion Compensation   No      No              Yes
VBR                   Yes     Yes             Yes
CBR Support           No      Yes             Yes
Latency               Low     Low to medium   Medium to high
Blocking Artifacts    Yes     No              Yes
Relative Cost         1x      3x              2x

(*) This column refers to MPEG-4 part 2; MPEG-4 part 10 has higher latency, at a higher cost, although with better performance.
and QoS assessment. An experimental section follows, presenting a set of experiments carried out to assess the performance of the methodology presented herein. Finally, the conclusions close the paper.
2. Related work

Different compression standards (see Table 1), such as JPEG (with baseline, progressive, hierarchical and lossless profiles) [2], JPEG2000 [21], MPEG-2 [1], H.263 [3] and MPEG-4 (part 2 [4] and part 10 [5], also known as H.264 or MPEG-4 AVC), allow systems to cope with the requirements posed by the different classes of applications. These standards can be classified into two main types: still image compressors, which exploit the spatial redundancy within an image, and video compressors, which also exploit the temporal redundancy that exists between sequential images. The general compression process is shown in fig. 1. The most important factors in this process [14] are the coding bit rate R and the obtained quality DT (distortion), both of which depend on the quantification factor q used and on acquisition parameters like the acquisition period T or the resolution r, among others.

Multimedia transmission over the Internet has been the subject of intense investigation in recent years [8, 29]. Typical solutions are based on the TCP/IP protocol stack complemented by other protocols, e.g., RTP/RTCP (Real Time Protocol/Real Time Control Protocol [12]), RTSP (Real Time Streaming Protocol [13]) or SIP (Session Initiation Protocol [19]), which measure key network parameters like bandwidth usage, packet losses and round-trip delays, and control the load submitted to the network. The main drawback of these technologies for industrial communication is the latency they introduce. For example, video smoothing algorithms [24] use memory buffers between the producer and the consumer to smooth out bit rate variations. The estimation of the required bandwidth and amount of buffering can be done offline, for stored video, or based on a number of images buffered before transmission, for live (non-interactive) video streams. The quality of the results is highly dependent on the admissible delay.

The use of motion JPEG (MJPEG) with JPEG baseline is generally found in digital cameras and monitoring systems (e.g., Lumera, Cast, Axis [7], IndigoVision [15], Mobotix [22]). Video compressors can potentially reduce the storage and bandwidth requirements with respect to MJPEG by ratios as high as 10 to 1. However, in monitoring systems the images are frequently
captured at low rates and sometimes multiplexed, which reduces or even eliminates the temporal redundancy and thus seriously compromises the efficiency of these compression algorithms. Concerning JPEG2000, it was expected in 2001 that this compressor would replace JPEG within 2 or 3 years. However, it has some drawbacks that have made this transition more difficult. Among other aspects, it is not backward compatible and is more complex, implying a higher memory footprint and nearly tripling the time required for compression and decompression. Also, the perceived quality increase is only noticeable at extreme compression levels, and thus at moderate to low compression levels the higher processor demand does not pay off. The licensing of some codecs, such as JPEG2000 or H.264, also limits the adoption of this technology.

A possible approach for adapting an MJPEG stream to a CBR channel consists in searching for the q value that generates the largest bit rate R(q) that fits within the channel capacity. However, such algorithms are iterative, which increases the latency and, when facing structural changes [9], may require 3 or more iterations to reach the final q value. Another class of techniques used to adjust MJPEG stream transmission to CBR channels, called content scaling, manages parameters such as the resolution or frame rate (discarding frames) to bound the bandwidth consumption [33]. All these methods have a noticeable negative impact on latency, jitter or distortion, parameters that strongly affect the performance of many industrial applications.

In previous work [25] the authors proposed a feedback adaptation based on the R(q) model to adjust multimedia streaming to a constant bandwidth assignment with low latency. The results showed an increase in quality, a reduction in the number of dropped frames, and a more efficient use of the assigned network bandwidth. However, the efficiency of the R(q) model falls significantly for higher q values, and it cannot adapt R efficiently when the channel assignment changes dynamically. These difficulties were the motivation for this work, which extends [25] with an integrated management of the quantization factor q and the frame acquisition/transmission period T that copes with a dynamic adaptation of the communication channels, e.g., as enforced by network QoS management policies.
Figure 1. Generic encoding process
3 System Architecture

The scenario addressed in this paper is a generic industrial monitoring application (see fig. 2), where sources (P: Producer) send images through a local area network to one or more destinations of the same streams (C: Consumer). Besides the video streams, the network may also support other sources of traffic, potentially with stringent real-time requirements, e.g., related to real-time control. The system includes a QoS manager, which receives the requests from the different system nodes and performs the bandwidth allocation to the several streams according to some specific algorithm. The output of the QoS manager is the bandwidth allocation w_i assigned to each stream, each node then being responsible for satisfying this resource allocation, that is, R_i ≤ w_i. The algorithm to assign w may depend on different factors like the instantaneous network load, the priorities assigned to each stream, the acceptable bandwidth and quality bounds assigned to each stream, etc. This type of communication can be effectively supported by the FTT-SE protocol [20], which enables dynamic QoS management with real-time guarantees [17].

Each multimedia producer i is associated with a stream composed of a succession of frames. The jth frame has size f_i^j and is compressed with quantification level q_i^j. To support QoS management in a controlled and predictable way, the application specifies a range for the acquisition period through the lower and upper limits T_i^l and T_i^u (T_i^l ≤ T_i ≤ T_i^u), and likewise for the quantification factor (q_i^l ≤ q_i ≤ q_i^u). Each stream has a priority Pr_i, which can be used by the QoS manager to distribute the bandwidth asymmetrically among the different streams when the bandwidth is not enough to satisfy all lower QoS requests. Finally, each stream is also associated with a resolution r_i, a deadline D_i and a buffer B_i that allows storage of one image. Therefore, each producer P_i is associated with a multimedia stream characterized by M_i = {r_i, q_i^l, q_i^u, T_i^l, T_i^u, Pr_i, D_i, B_i}.

Each node includes a QoS sublayer that is in charge of implementing the algorithm to adapt the source stream to the assigned bandwidth w_i, e.g., discarding frames, changing q_i, adapting T_i, or both, whenever the channel state changes significantly. This sublayer has to satisfy the latency and bandwidth restrictions. The bandwidth restriction is

$$R_i = \frac{f_i^j}{T_i} \le w_i, \quad \forall i, \forall j \qquad (1)$$

where f_i^j depends on r_i and on the q_i^j used in the jth frame. Also, the latency L for image transmission must fulfill
Figure 2. System architecture
$$L < D_i \qquad (2)$$

This latency L depends on several parameters of the different actors involved. In the producer we have the image acquisition time T_acq, the time spent in compression T_cod, and the time spent in the verification and adjustment of the QoS parameters, T_qos. Concerning the network, we have the time spent in communication, i.e., the transmission time T_tr, which also includes fragmentation, defragmentation and queuing. On the consumer side we have the time used to decompress the image, T_uco, and the visualization time at the consumer application level, T_vop. Thus, an upper bound on the latency can be calculated as:

$$L = T_{acq} + T_{cod} + T_{qos} + T_{tr} + T_{uco} + T_{vop} \qquad (3)$$
It is important to point out that the compression and decompression times are the strongest factors in the latency. This fact constrains the choice of the encoder, and particular care must be taken when several streams are targeted at a single consumer, which increases T_uco significantly.
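To make the restrictions above concrete, the following minimal sketch encodes the stream descriptor M_i and the checks of eqs. (1)-(3) in Python. It is only an illustration under assumed units (bytes, seconds, bits per second); the field and function names are ours and not part of FTT-SE.

    from dataclasses import dataclass

    @dataclass
    class Stream:
        r: tuple        # resolution r_i (width, height) in pixels
        q_min: int      # q_i^l, lower quantification factor limit
        q_max: int      # q_i^u, upper quantification factor limit
        T_min: float    # T_i^l, shortest acquisition period (s)
        T_max: float    # T_i^u, longest acquisition period (s)
        Pr: float       # priority Pr_i used by the QoS manager
        D: float        # end-to-end deadline D_i (s)
        B: int          # buffer B_i, number of images it can store (here: 1)

    def bandwidth_ok(f_bytes: int, T: float, w_bps: float) -> bool:
        """Restriction (1): R_i = f_i^j / T_i must not exceed the assigned w_i."""
        return (8.0 * f_bytes) / T <= w_bps

    def latency_ok(T_acq, T_cod, T_qos, T_tr, T_uco, T_vop, D) -> bool:
        """Restrictions (2)-(3): the summed latency upper bound must stay below D_i."""
        return (T_acq + T_cod + T_qos + T_tr + T_uco + T_vop) < D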
4 QoS Management
4.1 R(q) Model

The R(q) model permits obtaining the q value needed to generate a frame with a target coding bit rate R^T. Knowing T, it becomes possible to adapt q to the w assigned at each moment so as to satisfy bandwidth restriction (1). One of the models with better results [11] defines R(q) as:

$$R(q) = \alpha + \frac{\beta}{q^{\lambda}} \qquad (4)$$

where α and β are parameters of a curve whose curvature is regulated by λ. (The model in eq. 4 was developed for MPEG, where q varies from 1, lowest compression, to 31, highest compression; instead, as typical with JPEG, we consider q varying from 1, highest compression, to 100, lowest compression.) Each frame has its own model
(α_i^j, β_i^j, λ_i^j). There are algorithms to obtain R(q) [18] that are more accurate than eq. 4, but their calculation requires successive compressions with various q values (from 5 to 8 compressions), increasing the latency and the computing overhead. In a monitoring application with fixed cameras it can be assumed that the acquired images are strongly similar to one another. Therefore, for a stream i it can be assumed that α_i^j = α_i and λ_i^j = λ_i for all j, reducing the per-frame model to (α_i, β_i^j, λ_i). Eq. 4 now establishes a relationship between β_i^j and q_i^j for a given bit rate R(q), which we use to perform the adaptation of the quantification factor q. For a given producer with an assigned bandwidth w_i we define a target coding bit rate R_i^T = w_i − 2δ, with δ a variation margin, and a target size window [w_i − 3δ, w_i − δ]. At a given point in time we have a frame k whose R_i^k falls within the target size window (condition 5):

$$w_i - 3\delta \le R_i^k \le w_i - \delta \qquad (5)$$

Then, the next frame (k + 1) is computed. If condition (5) is still met, then q_i^{k+1} = q_i^k, which is used to generate frame k + 2. Otherwise, we adapt the quantification factor q_i^{k+1} based on the variation of R(q) between frames k and k + 1, i.e., ΔR^{k,k+1}. Fig. 3 shows this process. Notice that if R_i^{k+1} > w_i, i.e., the coding bit rate is larger than the channel bandwidth assignment, the frame is discarded. The new factor q_i^{k+1} is calculated using the following sequence of operations, firstly computing β^{k+1}:

$$\beta^{k+1} = \Delta R^{k,k+1} q^{\lambda} + \beta^{k} \qquad (6)$$

and then, considering the coding bit rate R_i^{k+1} to be obtained, calculating the q^{k+1} that will be used for the next frame:

$$q_i^{k+1} = \left( \frac{R_i^{k+1} - \alpha}{\beta^{k+1}} \right)^{-1/\lambda} \qquad (7)$$

The difference between the estimated q_i^{k+1} and the q_i that would generate a perfect match with the assigned channel bandwidth w_i is called Δq_e.

Figure 3. R adaptation

4.2 Content Scaling

Multimedia quality has a multidimensional nature, and it is commonly agreed that the key quality dimensions should be identified for the application in use [32]. To adapt the transmission of multimedia content to the available resources there are two main approaches: transmoding, which converts the content from one modality to another [27] (for example, from video to text), and content scaling, which changes image parameters in the temporal and/or quality domains [30]. In this paper we consider only the latter.

For content scaling, despite the good results in adapting video transmission to a CBR channel [25], the model and the hypothesis described in section 4.1 introduce an error. While this error is typically acceptable in the flat part of the R(q) curve, it becomes significant in the exponential part. This effect is shown in fig. 4, where the differences between the predicted and the effective frame sizes for Δq_e = 1 are plotted over the range of q values. For example, for q ≤ 65 and Δq_e = 1 we have an error in frame size below 1000 bytes, which represents nearly 200 kbps for T = 40 ms. However, when q > 90 the error can reach 1 Mbps for the same Δq_e and T.

Figure 4. Error in frame size when Δq_e = 1

To attenuate this problem we propose extending the previous work and managing, in an integrated way, both the quality q and the frame period T when carrying out the content scaling process. For each frame we use the actual value of T_i and the R(q) model to select a q_i inside [q_i^l, q_i^u]. If this is not possible, the frame is dropped and a new value of T_i inside [T_i^l, T_i^u] is selected to repeat the q_i search. If even with the lowest QoS the stream i cannot reach an R_i that satisfies (5), then the system negotiates a new w_i with the network QoS manager and recalculates the q_i and T_i values accordingly.

Fig. 5 sketches, for a given stream, the value of R(q) for q ∈ [60, 90] and T = {200, 100, 80, 60, 40} ms. As can easily be recognized, there are several possible configurations, i.e., (q, T) pairs, that permit achieving a given R. For example, if T_i = 80 ms and the QoS manager has assigned w_i = 600 kbps, the quality parameter can be q ∈ [60, 72]. Another possible configuration would be a frame period of T_i = 120 ms with a higher quality parameter, in this case q ∈ [82, 86]. In the latter case the higher quality value would result in higher q errors, increasing the probability of dropping frames. On the other hand, if, due to a network reconfiguration, the QoS arbitrator changes w_i to 200 kbps, maintaining the same T_i would result in very low q values, possibly out of the allowed range [q_i^l, q_i^u]. In this case the system could increase the frame period to T_i = 200 ms, resulting in q ∈ [55, 66], thus reducing the frame rate but keeping the image within the user-specified quality levels. Another possible scenario occurs when the QoS arbitrator raises w_i, for example to 1.2 Mbps. This could also result in values of q out of the specified range, in this case by excess. To avoid the problems mentioned before with the R(q) model, the system can, in this case, decrease the frame period to T_i = 40 ms, maintaining q ∈ [65, 72] and avoiding higher q values, thus providing an adequate image quality and a higher frame rate. Fig. 6 shows the relationship between T, f and R, considering the maximum f that allows fulfilling condition (1).
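A compact sketch of the per-frame adaptation of section 4.1 follows. The variable names are ours, and we read eq. (7) as being applied with the target rate R_i^T = w_i − 2δ for the next frame, which is one plausible interpretation of the text, not necessarily the authors' exact implementation.

    def adapt_q(R_meas, R_prev, q, beta, alpha, lam, w, delta, q_min, q_max):
        """Return (q_next, beta_next, drop) after observing frame bit rate R_meas.

        R_meas, R_prev: coding bit rates of frames k+1 and k
        w: bandwidth assigned to the stream; delta: variation margin
        Target window, condition (5): [w - 3*delta, w - delta].
        """
        drop = R_meas > w                 # rate exceeds the channel: discard frame
        if w - 3 * delta <= R_meas <= w - delta:
            return q, beta, drop          # condition (5) holds: keep q unchanged
        # Eq. (6): re-estimate beta from the rate variation observed at the same q.
        beta_next = (R_meas - R_prev) * q ** lam + beta
        # Eq. (7), applied here with the target rate R^T = w - 2*delta.
        R_target = w - 2 * delta
        q_next = ((R_target - alpha) / beta_next) ** (-1.0 / lam)
        q_next = min(max(round(q_next), q_min), q_max)   # clamp to [q_i^l, q_i^u]
        return q_next, beta_next, drop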
Figure 6. Relationship between T, f and R
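The (q, T) search just described can be sketched as follows. We treat eq. (4) here as a per-frame size model divided by the period T to obtain the rate, and the preference order (shortest admissible period first, highest quality first) is our assumption; the paper only constrains T_i and q_i to their allowed ranges.

    def rate(q, T, alpha, beta, lam):
        """Predicted coding bit rate: the frame size modeled with eq. (4)'s
        form, alpha + beta / q**lam, spread over the frame period T."""
        return (alpha + beta / q ** lam) / T

    def pick_config(w, delta, periods, q_min, q_max, alpha, beta, lam):
        """Return the first (q, T) pair meeting condition (5), or None to
        trigger a renegotiation of w with the network QoS manager."""
        for T in sorted(periods):                    # prefer higher frame rates
            for q in range(q_max, q_min - 1, -1):    # prefer higher quality
                R = rate(q, T, alpha, beta, lam)
                if w - 3 * delta <= R <= w - delta:
                    return q, T
        return None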
4.3 Measuring the Quality of Service

The mean squared error (MSE) and the peak signal-to-noise ratio (PSNR) are the quality metrics most frequently used to evaluate the performance of codecs and video transmission systems. However, they correlate poorly with the perceived degradation and with human vision factors. There have been numerous attempts to include characteristics of the human visual system (HVS) in objective quality assessment metrics, seeking a numerical method with a good correlation to subjective methods. In subjective methods, test sequences are presented to instructed non-expert observers in a controlled environment, who evaluate them according to predefined quality scales (ITU-R BT.500), yielding a MOS (Mean Opinion Score) value.
Figure 7. Example of images in the streams used
The method proposed by Z. Wang [31] uses a structural distortion measurement instead of the error, since the HVS is highly specialized in extracting structural information, not errors. With f the original image and g the distorted one, the image quality index (QI) can be calculated as:

$$QI = \frac{\sigma_{fg}}{\sigma_f \sigma_g} \cdot \frac{2\hat{f}\hat{g}}{\hat{f}^2 + \hat{g}^2} \cdot \frac{2\sigma_f \sigma_g}{\sigma_f^2 + \sigma_g^2} \qquad (8)$$

where f̂ and ĝ are the intensity means, σ_f and σ_g the standard deviations (σ_f², σ_g² the variances), and σ_fg the covariance. The range of values of the index is [-1, 1], with QI = 1 when the images are identical.
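As an illustration, eq. (8) can be computed globally over two grayscale frames as below; note that Wang's original index is usually computed over sliding windows and averaged, a refinement this sketch omits.

    import numpy as np

    def quality_index(f: np.ndarray, g: np.ndarray) -> float:
        """Global version of eq. (8) for two same-sized grayscale images."""
        f = f.astype(np.float64).ravel()
        g = g.astype(np.float64).ravel()
        mf, mg = f.mean(), g.mean()           # intensity means (f^, g^)
        vf, vg = f.var(), g.var()             # variances (sigma_f^2, sigma_g^2)
        cov = ((f - mf) * (g - mg)).mean()    # covariance sigma_fg
        # Algebraically simplified product of the three factors of eq. (8).
        return (4 * cov * mf * mg) / ((vf + vg) * (mf ** 2 + mg ** 2))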
5 Experimental Results

To assess the performance of the two-dimensional content scaling technique presented in this paper we used streams obtained in car factory (CF) and rubber factory (RB) environments [26]. Fig. 7 presents some frames of those streams. All of the obtained streams have r_i = 640×512 pixels, T_i = 40 ms, and are 4000 frames long (160 seconds). Table 2 summarizes the statistical properties of these streams (CF1, CF2, CF3, RB1, RB2) for quantization levels q = 20 and q = 80 with T = 40 ms. The CF streams are characterized by a greater deviation (dev.) than the RB streams, caused by a greater variability in the scene environment (welding sparks produce variation peaks). Note that, as the streams are pre-recorded, the allowed values of T_i ∈ [T_i^l, T_i^u] must be integer multiples of the original frame period (T_i = 40 ms) to maintain the correct dynamics of the streams. With live video it is generally possible to define the frame acquisition period with much finer granularity.
Figure 5. R(q) for different T_i
Table 2. Statistical properties of the streams

Stream  q    f̂ (bytes)   dev.      max.    R_i (Mbps)
CF1     20   22738.81    812.39    33579   4.87
CF1     80   66005.12    2115.09   94867   14.0
CF2     20   17225.75    673.58    29033   3.71
CF2     80   53131.3     1834.52   85718   11.4
CF3     20   18949.27    830.71    25142   4.12
CF3     80   57891.62    2033.94   75771   12.4
RB1     20   15869.04    195.65    16303   3.24
RB1     80   49716.50    575.55    50933   10.2
RB2     20   19493.32    569.46    20941   4.13
RB2     80   56190.58    1363.56   61264   11.8
Table 3. Obtained Quality Index (QI)

Stream(q)   T=40    80      120     200     1000
CF1(20)     0.870   0.851   0.839   0.824   0.779
CF2(20)     0.867   0.855   0.848   0.841   0.814
CF3(20)     0.859   0.832   0.819   0.801   –
RB1(20)     0.873   0.866   0.863   0.859   0.845
RB2(20)     0.884   0.875   0.869   0.837   –
CF1(80)     0.953   0.918   0.900   0.880   0.825
CF2(80)     0.945   0.916   0.903   0.889   0.853
CF3(80)     0.944   0.900   0.879   0.854   –
RB1(80)     0.944   0.920   0.909   0.899   0.877
RB2(80)     0.956   0.934   0.924   0.913   0.882
Figure 8. QI evolution for 100 frames of streams CF1 and RB1 with q=80
Table 3 shows the QI obtained for each stream, using a constant q (in parentheses) and different frame periods (expressed in ms). When T is greater than 40 ms, corresponding to the occurrence of skipped frames, the last frame received is used to compute the QI. The resulting degradation in QI depends both on the number of frames skipped and on the scene variability. As expected from the deviation parameter, stream RB1 exhibits less variability than streams CF1 and CF3, and thus the degradation of QI when T_i increases is significantly less pronounced for RB1 than for CF1 and CF3. This effect can also be observed in fig. 8: while for RB1 the lowest value of QI with T = 1000 ms is frequently similar to that with T = 200 ms, for CF1 this difference is much more noticeable.

To better illustrate the experiment we defined three QoS classes according to the operating period during the experiment. Class 1 includes CF1, with period T_CF1 = {40} and priority Pr_CF1 = 0.6. Class 2 includes CF2 and CF3, with acceptable periods T_CF2 = T_CF3 = {40, 80, 120} and priorities Pr_CF2 = 0.1 and Pr_CF3 = 0.2. Finally, class 3 includes RB1 and RB2, with T_RB1 = T_RB2 = [80, 1000] and priorities Pr_RB1 = Pr_RB2 = 0.05. For all streams, q is in the range [10, 80]. Higher Pr_i values correspond to higher priority.

The evolution of the QoS parameters during the experiment is depicted in fig. 9. The Y axis shows the global W value and how it is distributed among the active streams at each moment in time (rectangles). Within each rectangle we can find the values of T_i and of the frame size f_i used to adapt to the w_i allocated by the QoS manager, using the R(q) model as described in section 4.1. The QoS delivered to the streams depends both on the network load and on their relative importance. The highest priority stream (CF1) always gets the highest QoS. Before t = 20 s, CF2 receives all the remaining bandwidth. At t = 20 s a new stream of the same QoS class as CF2 (CF3) is activated, and thus part of CF2's resources are used to support CF3.
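The paper leaves the QoS manager's allocation algorithm open ("according to some specific algorithm"), so the sketch below shows only one plausible policy: priority-proportional sharing with clamping. The bandwidth bounds in the example are invented for illustration; a real manager would also redistribute any slack left by the clamping.

    def allocate(W, streams):
        """streams: dict name -> (Pr, w_min, w_max); returns name -> w_i.
        Distributes the global bandwidth W proportionally to priorities,
        then clamps each share to the stream's acceptable range."""
        total_pr = sum(pr for pr, _, _ in streams.values())
        alloc = {}
        for name, (pr, w_min, w_max) in streams.items():
            share = W * pr / total_pr
            alloc[name] = min(max(share, w_min), w_max)
        return alloc

    # Example with the experiment's priorities (bandwidth bounds are made up):
    print(allocate(20e6, {
        "CF1": (0.6, 1e6, 16e6), "CF2": (0.1, 1e6, 16e6), "CF3": (0.2, 1e6, 16e6),
        "RB1": (0.05, 0.5e6, 8e6), "RB2": (0.05, 0.5e6, 8e6)}))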
Figure 9. R_i distribution assigned by the QoS manager
Later on, at t = 40 s, RB1 and RB2 are admitted. In this case the QoS of CF1 is also reduced, since the QoS reduction of CF2 and CF3 did not release enough resources to accommodate RB1 and RB2. A similar behavior can be observed in the rest of the experiment, with the QoS being reallocated as the different streams enter and leave the system.

The performance of the dynamic QoS management technique is also compared with a static channel environment, in which the global bandwidth (W = 20 Mbps) is divided equally into 5 channels of w_i = 4 Mbps each, with T_i = 80 ms, f_i^0 = 40 KB and Pr_i = 0.2 for all streams, independently of whether the transmission of stream i is required at a given instant. Fig. 10 shows the evolution of the frame size and QI for each one of the streams, in both the dynamic and static scenarios. Globally, the QI values obtained with the dynamic model are higher than with the static model, except for streams RB1 and RB2, which were assigned lower priority. However, this does not mean a real reduction in the globally perceived QoS, given the relative priorities and actual T values. Therefore, in order to make an effective global comparison of the QoS obtained with the 5 streams under the static and dynamic methods, we consider the values of QI weighted by the respective Pr values, as in eq. 9. This factor reflects the different impact that the distinct streams have on the global system performance.

$$QoS = \sum_{i=1}^{n} Pr_i \, QI_i \qquad (9)$$
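Eq. (9) in code, for concreteness; the per-stream QI values below are placeholders, not measured data from the experiment.

    def global_qos(pr, qi):
        """Eq. (9): priority-weighted sum of the per-stream quality indices."""
        return sum(p * q for p, q in zip(pr, qi))

    pr = [0.6, 0.1, 0.2, 0.05, 0.05]     # priorities of CF1, CF2, CF3, RB1, RB2
    qi = [0.95, 0.90, 0.90, 0.88, 0.88]  # hypothetical per-stream QI values
    print(global_qos(pr, qi))            # one figure comparable across scenarios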
Figure 10. Dynamic adaptation scenario with frame size and QI evolution
Thus, in the static scenario we obtain QoS_static = 0.90, while for the scenario with dynamic QoS management we obtain QoS_dynamic = 0.9133. These results indicate a correct operation of the R(q) model and of the QoS management scheme, which leads to an increase in the global QoS indicator when using the dynamic adaptation instead of static channels.
6 Conclusion
Using multimedia streams in real-time applications requires appropriate support from the underlying network. A common technique has been to allocate CBR channels to the different streams, which favours temporal isolation. However, multimedia streams are intrinsically of the Variable Bit-Rate type. In previous work the authors proposed a scheme that adapts the quantization factor of MJPEG encoders to generate relatively constant frame sizes, improving the channel utilization. However, during several periods of operation more bandwidth is
available in the network, and thus the channel could be temporarily enlarged. Also, if several channels are using the full network bandwidth, it might happen that some channels are under-utilized while others are over-utilized. In such a case, the bandwidth allocation to the channels could be dynamically adjusted to the actual needs of each stream. In this paper, the authors have proposed a bidimensional adaptation mechanism, adding a QoS manager to the previous quantization factor adaptation. The QoS manager allows the channel bandwidth to be changed dynamically according to the effective stream needs and the overall available bandwidth. This adaptation mechanism was compared against a corresponding situation with static CBR channels, using a set of stored video sequences from industrial environments. The comparison shows the superiority of the dynamic adaptation mechanism. Moreover, the adaptation is carried out with reserved channels, thus maintaining the temporal isolation among streams, except for the interference caused by the QoS management itself.
7 Acknowledgements

This work was supported by the MCYT of Spain under project TSI2006-13380-C02-02, and partially funded by UPV through PAID-00-07.

References

[1] Generic coding of moving pictures and associated audio information, part 2: Video. ISO/IEC DIS 13818-2, May 1994.
[2] Information technology: digital compression and coding of continuous-tone still images: Requirements and guidelines. ISO/IEC 10918-1, Feb. 1994.
[3] Video coding for low bit rate communications. ITU-T Recommendation H.263, version 1, April 1995.
[4] Coding of audio-visual objects, part 2: Visual. ISO/IEC 14496-2 (MPEG-4 Visual Version 1), April 1999.
[5] Coding of audio-visual objects, part 10: Advanced video coding (AVC). ISO/IEC 14496-10; ITU-T Recommendation H.264, AVC for generic audiovisual services, 2003.
[6] L. Almeida, P. Pedreiras, and J. Fonseca. The FTT-CAN protocol: why and how. IEEE Trans. on Industrial Electronics, 49(9):1189–1201, 2002.
[7] Axis. Digital vid. comp. Axis Communications, June 2004.
[8] B. Bouras and A. Gkamas. Multimedia transmission with adaptive QoS based on real-time protocols. Int. Journal of Communication Systems, (16):225–248, 2003.
[9] A. Bruna, S. Smith, F. Vella, and F. Naccari. JPEG rate control algorithm for multimedia. IEEE Int. Symp. on Consumer Electronics, pages 114–117, September 2004.
[10] D. Dietrich and T. Sauter. Evolution potentials for the fieldbus system. Proc. WFCS, pages 343–351, 2000.
[11] W. Ding and B. Liu. Rate control of MPEG video coding and recording by rate-quantization model. IEEE Trans. Circuits Syst. Video Technology, 6:12–20, February 1996.
[12] H. Schulzrinne et al. RTP: A Transport Protocol for Real-Time Applications. Audio-Video Transport Working Group, RFC 1889, Jan. 1996.
[13] H. Schulzrinne et al. Real Time Streaming Protocol (RTSP). Audio-Video Transport Working Group, RFC 2326, Apr. 1998.
[14] Z. He and S. K. Mitra. A unified rate-distortion analysis framework for transform coding. IEEE Trans. Circuits Syst. Video Technology, 11(12):1221–1236, Dec. 2001.
[15] IndigoVision. IP cameras. http://www.indigovision.com/feature articles/ip-cameras us-apr06.pdf, 2006.
[16] B. Jentz. Low-cost solutions for video compression systems. Global Signal Processing Expo, CA, October 2005.
[17] L. Almeida et al. On-line QoS adaptation with the Flexible Time-Triggered (FTT) communication paradigm. Handbook of Real-Time and Embedded Systems, I. Lee, J. Y. Leung, S. Son (eds.), CRC Press, ISBN 9781584886785, July 2007.
[18] L. Lin and A. Ortega. Bit-rate control using piecewise approximated rate-distortion characteristics. IEEE Trans. Circuits Syst. Video Technology, 8:446–459, August 1998.
[19] M. Handley et al. SIP: Session Initiation Protocol. Networking Working Group, RFC 2543, March 1999.
[20] R. Marau, L. Almeida, and P. Pedreiras. Enhancing real-time communication over COTS Ethernet switches. Proc. WFCS, 2006.
[21] M. Marcellin, M. Gormish, A. Bilgin, and M. Boliek. An overview of JPEG-2000. Proc. DCC, pages 523–541, March 2000.
[22] Mobotix. What IP did next. Security Installer, July 2006.
[23] V. Sempere and J. Silvestre. Multimedia applications in industrial networks: Integration of image processing in Profibus. IEEE Trans. on Industrial Electronics, 50(3):440–448, June 2003.
[24] S. Sen, J. Rexford, J. Dey, J. Kurose, and D. Towsley. Online smoothing of variable-bit-rate streaming video. IEEE Trans. on Multimedia, 2(1):37–48, March 2000.
[25] J. Silvestre, L. Almeida, R. Marau, and P. Pedreiras. MJPEG real-time transmission in industrial environments using a CBR channel. Transactions on Engineering, Computing and Technology, 16:149–154, November 2006.
[26] J. Silvestre, V. Sempere, and T. Albero. Industrial video sequence for network performance evaluation. Proc. WFCS, 2004.
[27] T. Thang, Y. Jung, and Y. Ro. Modality conversion for QoS management in universal multimedia access. IEE Proc. Vis. Image Signal Process., 152(3):374–384, 2005.
[28] J.-P. Thomesse. Fieldbus technology in industrial automation. Proceedings of the IEEE, 93(6):1073–1101, 2005.
[29] B. Vandalore, W. Feng, R. Jain, and S. Fahmy. A survey of application layer techniques for adaptive streaming of multimedia. Real-Time Imaging, (7):221–235, 2001.
[30] A. Vetro, C. Christopoulos, and H. Sun. Video transcoding architectures and techniques: an overview. IEEE Signal Process. Mag., 20(2):18–29, 2003.
[31] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, April 2004.
[32] A. Watson and M. Sasse. Measuring perceived quality of speech and video in multimedia conferencing applications. Proc. ACM Multimedia, pages 55–60, Bristol, UK, 1998.
[33] Z. Zhang, S. Nelakuditi, R. Aggarwal, and R. P. Tsang. Efficient selective frame discard algorithms for stored video delivery across resource constrained networks. Proc. INFOCOM, pages 472–479, 1999.