Admission Control in Video Distribution Systems Exploiting Correlation Among Streams ∗
P. Camarda, D. Striccoli
Politecnico di Bari – Dip. di Elettrotecnica ed Elettronica
Via E. Orabona, 4 – 70125 Bari (Italy)
E-mail: [email protected] Tel. +39 080 5963642, Fax +39 080 5963410
Abstract - In this paper we propose original algorithms for statistically estimating, in a video distribution system, the aggregate bandwidth needed by a given number of smoothed video streams. Each video stream is characterized by its marginal distribution, approximated by a histogram derived from real video traces, and a form of correlation between the video streams is taken into account. The results of the proposed algorithm are compared with the results of the same algorithm used under the hypothesis of total independence of all the video streams and with some simulation results, discussing the pros and cons of the considered solutions. Index terms -- Video Distribution, VBR traffic, Statistical admission control, Bandwidth Evaluation, Analytical and Simulation Model.
Preliminary draft
For submission to “SoftCOM 2001”, October 9-12, 2001
∗ Corresponding author
I. Introduction
A large part of the traffic in future broadband networks will presumably be generated by multimedia applications. In such systems, Quality of Service (QoS) will play a role of capital importance, imposing strict constraints on admission control, bandwidth allocation, capacity planning, etc. In this paper we consider a video distribution system, which forms the core of applications like Video on Demand, Distance Learning, Internet video broadcast, etc. [1,2]. In particular, we suppose that stored video is distributed, coded by standard MPEG algorithms, which, as is well known, produce Variable Bit Rate (VBR) traffic. As an example, Figure 1a reports the bit rate of the first 32000 frames of an MPEG-1 coded trace of the film “Star Wars”. Let us suppose that a certain number of films is present in the network. To guarantee the transmission of each stream without losses, we should assign to every film its peak bandwidth; but, because of the nature of VBR traffic, in which the peak rate is reached only in a few time instants, a bandwidth assignment of this type would waste most of the total bandwidth assigned.
[Figure omitted: bit rate versus time in frame units.]
Figure 1a. 32000 frames of the film “Star Wars” without smoothing.
However, if we suppose that all the starting points of the video traces are randomly displaced in time, it is unlikely that every stream reaches its maximum bandwidth at the same instant. It is more probable that, at every time instant, some films have a high bit rate while others have a low one. Following this consideration, we define the aggregate bandwidth as the total bandwidth occupied by all the video streams at each time instant. As a consequence, the aggregate bandwidth will generally be lower than the sum of the peak rates of all the films present in the network, and we can estimate the occupation of the network with statistical considerations on the aggregate bandwidth. A first technique, illustrated in [2,3], reduces the total amount of bandwidth assigned to all the video flows by smoothing the bit rate of each VBR stream. This smoothing algorithm reduces the peak rate and the variance of every stream present in the network, as can be seen in Figure 1b, which reports the bit rate of the same film “Star Wars” smoothed with a client buffer of 1 MB.
[Figure omitted: bit rate versus time in frame units.]
Figure 1b. 32000 frames of the film “Star Wars” with smoothing (buffer size = 1 MB).
The technique used to reach this goal consists in transmitting pieces of the same film, as long as possible, at a constant bit rate that varies from piece to piece. The bit rate variations follow a scheduling algorithm that ensures the minimum variability of the bit rate among all the CBR pieces; in this sense it can be demonstrated (see [2]) that the smoothing is the best possible. The smoothing technique considerably reduces the rate variability, significantly improving bandwidth resource allocation on the network and also exploiting statistical multiplexing gains in the aggregate bandwidth. For this reason we suppose that all the films are present on the network in smoothed form, as produced by the smoothing algorithm. A consequence is that the rate marginal distribution of a smoothed sequence is radically different from that of the original unsmoothed sequence; as an example, Figures 2a and 2b report the rate histograms relative to Figures 1a and 1b respectively. As highlighted in [2,3], a consistent gain in network resource utilization can be obtained by exploiting smoothing, which will likely be implemented in real systems. A significant problem in such systems is the estimation of the aggregate bandwidth, needed for planning the transmission infrastructure and for admission control. In the literature, this problem has been attacked by exploiting the Chernoff bound, under the hypothesis of independence among video streams and of bufferless systems [3]. The bufferless-switch hypothesis can be justified by considering that the real-time distribution of smoothed video traffic imposes stringent delay requirements; as a consequence, only very small buffers can be employed in the intermediate network nodes. For this reason, in this paper we model the network switch as a bufferless multiplexer.
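As a side illustration, the client-buffer constraints that any piecewise-CBR smoothing schedule must respect can be checked mechanically. The following Python sketch (illustrative names and synthetic numbers; this is a feasibility check, not the optimal smoothing algorithm of [2]) verifies that a transmission plan neither underflows nor overflows the client buffer:

```python
# Sketch: check that a piecewise-CBR transmission plan respects the
# client-buffer constraints that smoothing must enforce. Synthetic data.

def schedule_is_feasible(rates, frame_sizes, buffer_bits):
    """rates[t] = bits transmitted during frame slot t;
    frame_sizes[t] = bits consumed (played out) in slot t.
    Feasible iff the client never underflows (cumulative sent >=
    cumulative consumed) and never overflows (cumulative sent <=
    cumulative consumed + buffer size)."""
    sent = consumed = 0
    for r, f in zip(rates, frame_sizes):
        sent += r
        consumed += f
        if sent < consumed:                 # underflow: frame not there in time
            return False
        if sent - consumed > buffer_bits:   # overflow: buffer cannot hold the lead
            return False
    return True

frames = [10, 30, 10, 50, 10, 10]           # synthetic frame sizes (bits)
assert schedule_is_feasible([50] * 6, frames, buffer_bits=200)   # peak rate works
assert not schedule_is_feasible([20] * 6, frames, buffer_bits=200)  # mean rate underflows
```

A constant plan at the mean rate underflows the early peak here, while the peak-rate plan is feasible because the 200-bit buffer absorbs the lead; the optimal schedule of [2] lies between these extremes.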
We propose various original algorithms for evaluating the aggregate bandwidth needed by N video streams present on the network. The estimation algorithms found in the literature start from the simplifying hypothesis that all the video frames of all the video streams are present on the network in a totally independent manner. However, the calculated bandwidth results differ considerably from the simulations and can even underestimate the needed bandwidth under special conditions, namely when all the video streams start within a very short time interval (see [3] for further details). To overcome this difficulty, in this paper we suppose that there is a certain amount of correlation between the video streams. In particular, we consider the temporal dependence between two video streams and, from an empirical analysis of video streams that all start within a specified time interval, we derive the conditional probabilities representing the amount of correlation between the video streams. The results obtained with the aggregate bandwidth estimation algorithm under these correlation assumptions are compared with simulations and with the results obtained applying the same algorithm under the hypothesis of total independence of all the video streams. Each stream is characterized by its marginal distribution, approximated by a histogram derived from real video traces.
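The histogram approximation of the marginal distribution can be sketched as follows; the function name and the synthetic rate values are illustrative, not taken from the Star Wars trace:

```python
# Sketch of the "histogram method": approximate a stream's rate marginal
# distribution with K equal-width bins over its bit-rate range.

def rate_histogram(rates, K):
    """Return (bin width r1, probabilities p[0..K-1]) for K equal-width
    bins; a rate r falls in bin i when i*r1 < r <= (i+1)*r1."""
    r_max = max(rates)
    r1 = r_max / K
    counts = [0] * K
    for r in rates:
        i = min(K - 1, int((r - 1e-12) / r1)) if r > 0 else 0
        counts[i] += 1
    n = len(rates)
    return r1, [c / n for c in counts]

rates = [3, 5, 5, 7, 9, 9, 9, 10]           # synthetic bit rates
r1, p = rate_histogram(rates, K=5)
assert p == [0.0, 0.125, 0.25, 0.125, 0.5]  # empirical bin probabilities
assert abs(sum(p) - 1.0) < 1e-12
```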
[Figure omitted: frame-count histogram versus frame size (bytes).]
Figure 2a. Frame size distribution for the film “Star Wars” without smoothing.
[Figure omitted: frame-count histogram versus frame size (bytes).]
Figure 2b. Frame size distribution for the film “Star Wars” with smoothing (buffer size 1 MB).
This article is structured as follows. In Section II the analytical model is developed under the assumption that only one type of film is present in the network. In Section III the same model is applied when different types of film are present. In Section IV we present some numerical results, comparing the algorithm results with the ones obtained under the total independence hypothesis and with the simulations. In Section V some conclusions are drawn, discussing the pros and cons of this aggregate bandwidth estimation algorithm.
II. Bandwidth estimation for homogeneous streams
Let us now discuss the case in which there is a form of correlation between the video streams present on the network. We know that, when there is an amount of temporal correlation between the starting points of the video streams, the aggregate bandwidth calculated by considering the video streams totally independent in time underestimates the aggregate bandwidth obtained with the simulations (see [3]). This holds both in the case of a single type of video stream and in the case of different types of video streams. Let us first consider the case in which only one type of film is present in the network, and suppose that there are N films of that type. To evaluate the aggregate bandwidth corresponding to a fixed loss probability, we have to evaluate the state probability:
p( α1, α2, ..., αN )

where the generic αi ( αi = 1, 2, ..., K ), for i from 1 to N, represents one of the K states in which the bit rate of a single film can be (corresponding to one of the K bit rates derived from the histogram method). The probability of being in state αi will be pαi. As in the case of total independence of the video streams, the following system has to be solved:

P( Λ ≤ ΛS ) = ∑_{α1=1}^{K} ... ∑_{αN=1}^{K} p( α1, α2, ..., αN ),   with rα1 + rα2 + ... + rαN ≤ ΛS   (1)

which means that the probability of having a bandwidth lower than ΛS is given by the sum of all the combinations of the state probabilities that give an aggregate bandwidth lower than ΛS. However, if we consider a certain amount of correlation between the video streams, we can no longer write:

p( α1, α2, ..., αN ) = ∏_{i=1}^{N} pαi
since this relation is valid only in the case of total independence of all the video streams [5]. To introduce a minimum of correlation between the video streams, we can suppose that there is a degree of dependence between two video streams of the same type, using the following approximation:

p( α1, α2, ..., αN ) ≈ p( α1 | α2 ) p( α2 | α3 ) ... p( αN−1 | αN ) p( αN )   (2)

This approximation certainly does not capture all the possible degrees of correlation among the N video streams, but it is a first approximation that accounts for a certain amount of correlation between pairs of video streams. To solve (1) exploiting (2), we first have to find the conditional probabilities p( αi | αi+1 ) through some simulations made on the video traces. In particular, we consider two video traces, with their starting points randomly displaced within a given time interval; to guarantee the total superimposition of the two video traces, we let each trace restart from its first frame when it ends. We then use the histogram method to determine the bit rate intervals and derive the probabilities p( αi, αj ) ( αi, αj = 1, 2, ..., K ), counting the number of times in which the first film has rate rαi and the second has rate rαj, or vice versa, and dividing the obtained result by the total number of frames of the two films. To finally obtain the conditional probability p( αi | αj ), we divide the quantity p( αi, αj ) by the marginal probability p( αj ). It is evident that if, for a given value of j, p( αj ) = 0, then the corresponding probability p( αi, αj ) will also be zero; in this degenerate case we set p( αj | αj ) = 1 by convention (see [4]). Since the values of the probabilities p( αi | αj ) change from simulation to simulation (because the starting points of the two films are randomly displaced in time in every simulation), we perform a large number of simulations and, for each of them, calculate p( αi | αj ) for each αi and αj; finally, all the values of p( αi | αj ) are averaged over the total number of simulations. Obviously, as the number of simulations increases, the variance of the average values decreases, giving more precise values of the conditional probabilities. All the conditional probabilities form a K×K matrix (with αi as row index and αj as column index) that we call the matrix of the conditional probabilities. Given this matrix as input data of our method, we have to solve (1) exploiting (2). We can therefore rewrite (1) as follows:
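The empirical estimation just described can be sketched in Python as follows. The trace, start window and run count are toy values, and the counting convention (ordered pairs here, rather than the symmetric count of the text) is a simplifying assumption; conditioning on the column marginal yields the same matrix structure either way:

```python
import random

# Sketch: estimate the K x K matrix p(alpha_i | alpha_j) by displacing two
# copies of the same binned trace by a random offset within a start window,
# wrapping around so the copies fully overlap, over many runs. Toy data.

def conditional_matrix(states, K, window, runs, seed=0):
    """states[t] in 1..K is the binned rate of the film at frame t."""
    rng = random.Random(seed)
    T = len(states)
    joint = [[0.0] * K for _ in range(K)]   # unnormalized joint counts
    for _ in range(runs):
        off = rng.randrange(window)          # second copy starts 'off' frames later
        for t in range(T):
            a = states[t]
            b = states[(t + off) % T]        # wrap so the two copies superimpose
            joint[a - 1][b - 1] += 1
    # divide the joint counts by the marginal of the conditioning state
    marg = [sum(joint[i][j] for i in range(K)) for j in range(K)]
    return [[joint[i][j] / marg[j] if marg[j] else 0.0 for j in range(K)]
            for i in range(K)]

states = [1, 1, 2, 3, 2, 1, 1, 2, 3, 3, 2, 1]   # toy binned trace, K = 3
P = conditional_matrix(states, K=3, window=4, runs=200)
# every column of p(. | alpha_j) sums to 1 for states that actually occur
for j in range(3):
    assert abs(sum(P[i][j] for i in range(3)) - 1.0) < 1e-9
```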
P( Λ ≤ ΛS ) = ∑_{α1=1}^{K} ... ∑_{αN=1}^{K} p( α1 | α2 ) p( α2 | α3 ) ... p( αN−1 | αN ) p( αN ),   with rα1 + rα2 + ... + rαN ≤ ΛS

Considering that the histogram bins of the source all have the same width, we can assume r2 = 2r1, r3 = 3r1, ..., rK = Kr1, where r1 is the width of the first histogram bin. In general, rαi = αi r1, and the second relation of the system becomes:

α1 + α2 + ... + αN ≤ ΛS / r1

If we set α1 + α2 + ... + αN = h, we have to find all the combinations of events satisfying h ≤ ΛS / r1. For this reason, we can say that:

P( Λ ≤ ΛS ) = ∑_{h=N}^{⌊ΛS / r1⌋} ah   (3)

where:

ah = ∑_{α1+α2+...+αN = h} p( α1 | α2 ) p( α2 | α3 ) ... p( αN−1 | αN ) p( αN )   (4)
A direct approach to calculate all the terms (4) is computationally very expensive, especially if the number of films and/or the histogram bins increases. To solve this problem, an iterative approach is needed.
II.1. The iterative method
Let us rewrite (4) in the following way:

ah = ∑_{αN=1}^{K} p( αN ) ∑_{α1+α2+...+αn = h1} ∏_{i=1}^{n} p( αi | αi+1 )   (5)

where we have set h1 = h − αN and n = N − 1. Now we define:

gn( h1, αn+1 ) = ∑_{α1+α2+...+αn = h1} ∏_{i=1}^{n} p( αi | αi+1 )   (6)
For n = N − 1, (6) is equivalent to the inner sum of (5). Now let us consider the case n = 1. We have:

g1( h1, α2 ) = ∑_{α1 = h1} p( α1 | α2 ) = p( h1 | α2 )   (7)

since the parameter h1 assumes only one value, given by α1. Furthermore, α1 and α2 can assume all the values from 1 to K, so 1 ≤ h1 ≤ K. It is easy to see that g1( h1, α2 ) is a K×K matrix and coincides with the matrix of conditional probabilities, as can be noted from (7). If we now rewrite (6) for n = 2, we find:

g2( h1, α3 ) = ∑_{α1+α2 = h1} p( α1 | α2 ) p( α2 | α3 )

The condition under the summation sign, α1 + α2 = h1, implies α1 = h1 − α2, so we can write:

g2( h1, α3 ) = ∑_{α2=1}^{K} p( h1 − α2 | α2 ) p( α2 | α3 )

But we know that p( h1 − α2 | α2 ) = g1( h1 − α2, α2 ), and so:

g2( h1, α3 ) = ∑_{α2=1}^{K} g1( h1 − α2, α2 ) p( α2 | α3 )

In this case, the parameter h1 assumes all the values between 2 and 2K. Furthermore, because of the limitations on the function g1( h1, α2 ), we must have 1 ≤ h1 − α2 ≤ K; this implies that α2 assumes all the values between 1 and the minimum of K and h1 − 1. From this it follows that:

g2( h1, α3 ) = ∑_{α2=1}^{min(K, h1−1)} g1( h1 − α2, α2 ) p( α2 | α3 )   (8)
In this case, g2( h1, α3 ) is a (2K)×K matrix with h1 as row index and α3 as column index. We can easily find all the values of this matrix because the terms g1( h1 − α2, α2 ) and p( α2 | α3 ) are known for every value of the parameters h1, α2 and α3. Let us now examine g3( h1, α4 ):

g3( h1, α4 ) = ∑_{α1+α2+α3 = h1} ∏_{i=1}^{3} p( αi | αi+1 ) = ∑_{α3=1}^{K} ∑_{α1+α2 = h1−α3} p( α1 | α2 ) p( α2 | α3 ) p( α3 | α4 )

with the further condition that 2 ≤ h1 − α3 ≤ 2K, and consequently:

g3( h1, α4 ) = ∑_{α3=1}^{min(K, h1−2)} g2( h1 − α3, α3 ) p( α3 | α4 )

In this case, g3( h1, α4 ) is a (3K)×K matrix and the parameter h1 assumes all the integer values between 3 and 3K. Proceeding in the same way for all the other functions gn( h1, αn+1 ), we can derive the following general expression:

gn( h1, αn+1 ) = ∑_{αn=1}^{min(K, h1−n+1)} gn−1( h1 − αn, αn ) p( αn | αn+1 ),   2 ≤ n ≤ N − 1   (9)
and remembering that for n = 1, (7) holds. Finally, the terms ah are computed as follows:

ah = ∑_{αN=1}^{min(K, h−N+1)} gN−1( h − αN, αN ) p( αN ),   N ≤ h ≤ NK   (10)
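A minimal Python sketch of the recursion (7)-(10), under the assumptions of the text (a single K×K conditional-probability matrix for N ≥ 2 homogeneous streams); array indices are 0-based and the two-state toy data are illustrative:

```python
# Sketch of the iterative method: g1 is the K x K conditional-probability
# matrix; g_n(h1, alpha_{n+1}) is built for n = 2..N-1 via eq. (9), and the
# terms a_h follow from eq. (10). State alpha in 1..K maps to index alpha-1.

def aggregate_distribution(cond, marg, N, K):
    """cond[i][j] = p(alpha_{i+1} | alpha_{j+1}); marg[j] = p(alpha_{j+1}).
    Returns {h: a_h} for N <= h <= N*K (N >= 2)."""
    # g_1(h1, a2) = p(h1 | a2), with 1 <= h1 <= K
    g = {h1: [cond[h1 - 1][j] for j in range(K)] for h1 in range(1, K + 1)}
    for n in range(2, N):                     # eq. (9): g_n from g_{n-1}
        gn = {}
        for h1 in range(n, n * K + 1):
            row = [0.0] * K
            for an in range(1, min(K, h1 - n + 1) + 1):
                prev = g.get(h1 - an)
                if prev is None:
                    continue
                for j in range(K):            # j = column index alpha_{n+1}
                    row[j] += prev[an - 1] * cond[an - 1][j]
            gn[h1] = row
        g = gn
    a = {}                                    # eq. (10): contract with p(alpha_N)
    for h in range(N, N * K + 1):
        s = 0.0
        for aN in range(1, min(K, h - N + 1) + 1):
            prev = g.get(h - aN)
            if prev is not None:
                s += prev[aN - 1] * marg[aN - 1]
        a[h] = s
    return a

# toy 2-state example; cond columns equal the marginal, i.e. independence,
# so a_h must reduce to the trinomial distribution of three i.i.d. states
marg = [0.6, 0.4]
cond = [[0.6, 0.6], [0.4, 0.4]]
a = aggregate_distribution(cond, marg, N=3, K=2)
assert abs(sum(a.values()) - 1.0) < 1e-9
assert abs(a[3] - 0.216) < 1e-9 and abs(a[4] - 0.432) < 1e-9
```

Each a_h is then the probability that the aggregate occupies bandwidth h·r1, and the admissible ΛS is found by accumulating the a_h up to 1 − pl.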
In conclusion, this iterative method is based on the initial knowledge of the matrix of the conditional probabilities p( αi | αj ), i, j = 1, 2, ..., K, and builds, at each step corresponding to a given value of n, an (nK)×K matrix gn( h1, αn+1 ), obtained by combining, as defined by (9), the elements of the matrix gn−1( h1, αn ) from the previous step with the elements of the matrix of the conditional probabilities p( αi | αj ). When step N − 1 is completed, each term ah is calculated by multiplying the elements of the matrix gN−1( h1, αN ) by the marginal probabilities p( αN ). Each term ah represents the probability that the aggregation of the video streams occupies a bandwidth hr1, where r1 is the width of a single histogram bin; we therefore sum the terms ah until their sum reaches the value 1 − pl, where pl is the loss probability, a QoS specification. Now let us suppose that at a generic time instant there are N video streams in the network. What happens if another video request arrives? The algorithm has to calculate all the terms ah again.
However, since up to step N − 1 the procedure to find the matrices gn( h1, αn+1 ) is exactly the same, to recalculate the terms ah it is sufficient to keep track of the matrix gN−1( h1, αN ). We then calculate the matrix gN( h1, αN+1 ) as follows:

gN( h1, αN+1 ) = ∑_{αN=1}^{min(K, h1−N+1)} gN−1( h1 − αN, αN ) p( αN | αN+1 )

and then derive the ah terms:

ah = ∑_{αN+1=1}^{min(K, h−N)} gN( h − αN+1, αN+1 ) p( αN+1 )
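The incremental step just described can be sketched as follows: keeping gN−1 in memory, admitting one more stream costs one application of the recursion (9) and one contraction of type (10), instead of a rebuild from g1. Function names and the two-state toy data are illustrative:

```python
# Sketch of incremental admission: one recursion step (9) plus one
# contraction (10) per new request, reusing the stored g matrix.

def next_g(g_prev, cond, n, K):
    """One step of recursion (9): g_n from g_{n-1} (dict h1 -> row)."""
    g = {}
    for h1 in range(n, n * K + 1):
        row = [0.0] * K
        for an in range(1, min(K, h1 - n + 1) + 1):
            prev = g_prev.get(h1 - an)
            if prev is not None:
                for j in range(K):
                    row[j] += prev[an - 1] * cond[an - 1][j]
        g[h1] = row
    return g

def contract(g, marg, n_streams, K):
    """Eq. (10): a_h from g_{n_streams-1} and the marginal p(alpha)."""
    return {h: sum(g[h - aN][aN - 1] * marg[aN - 1]
                   for aN in range(1, min(K, h - n_streams + 1) + 1)
                   if (h - aN) in g)
            for h in range(n_streams, n_streams * K + 1)}

K = 2
marg = [0.6, 0.4]
cond = [[0.6, 0.6], [0.4, 0.4]]             # toy independent case
g = {h1: [cond[h1 - 1][j] for j in range(K)] for h1 in range(1, K + 1)}  # g_1
g = next_g(g, cond, 2, K)                   # g_2, stored in memory
a3 = contract(g, marg, 3, K)                # N = 3 streams admitted
g = next_g(g, cond, 3, K)                   # new request: g_3 from stored g_2
a4 = contract(g, marg, 4, K)                # N = 4 streams
assert abs(sum(a3.values()) - 1.0) < 1e-9
assert abs(sum(a4.values()) - 1.0) < 1e-9
```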
If there is yet another video request, we start from the matrix gN( h1, αN+1 ), calculate the matrix gN+1( h1, αN+2 ) through the matrix of conditional probabilities, and finally derive the terms ah using (10) again.

III. Bandwidth evaluation for heterogeneous streams
In the previous analysis we supposed that only one type of film is present in the network, considering a certain amount of correlation between the video streams. Let us now analyze the case in which multiple types of film are present. To consider a form of temporal correlation between the video streams, we should consider not only the correlation between streams of the same type, but also the correlation between streams of different types. This would result in too high a complexity. For this reason, we adopt the hypothesis that the correlation between streams of the same type is dominant with respect to the correlation between streams of different types. To fix ideas, suppose that there are M types of film in the network and nm streams of type m, for each 1 ≤ m ≤ M. We also suppose that the only correlation is between two video streams of the same type, as previously explained. As in the homogeneous case, we can define for each type of film the matrix of conditional probabilities, making several simulations for each type of film and deriving the conditional probabilities. We finally obtain the M matrices of conditional probabilities:

pm( αi | αj ),   i, j = 1, 2, ..., K;   m = 1, 2, ..., M

Since the only correlation is between films of the same type, films of different types are present in the network in an independent way. We recall that, for a single type of film, the term ah represents the probability of occupying a bandwidth hr1. Repeating the same concept for all the types of film, we can define, for each type of film, the terms a(m)hm, representing the probability that the m-th type of film occupies a bandwidth hm r1m, where r1m is the width of a histogram bin relative to the m-th type of film. Since the only correlation we suppose is between films of the same type, films of different types are independent, so we can take the product of the corresponding probabilities of aggregate bandwidth. If we suppose that r1m is the same for each type of film, the probability of having a total aggregate bandwidth hr1 is given by the sum of all the combinations of products of probabilities giving a total bandwidth hr1; in other words:

ah = ∑_{h1+h2+...+hM = h} ∏_{m=1}^{M} a(m)hm   (11)

In this case, the correlation between the video streams is taken into account in each of the terms a(m)hm, as in the homogeneous case.

To find all the terms ah using (11), we first have to find the terms a(m)hm, for each m and each hm, using exactly the same procedure illustrated for the homogeneous case. Then we have to
solve (11). A direct approach is computationally very expensive in this case too, so an iterative method to solve (11) has to be found. Let us use (11) supposing that only two types of film are present in the network (M = 2); the index m in (11) then assumes the values 1 and 2. Particularizing (11) for M = 2, we obtain:

ah( 2 ) = ∑_{h1+h2 = h} ∏_{m=1}^{2} a(m)hm

where the index of the term ah in parentheses indicates the number of types of film involved in the computation. We can rewrite the last formula as follows:

ah( 2 ) = ∑_{h2=n2}^{Kn2} a(2)h2 ∑_{h1 = h−h2} a(1)h1

and finally:

ah( 2 ) = ∑_{h2=n2}^{Kn2} a(2)h2 a(1)h−h2   (12)
We observe that (12) is valid for every value of h, so we have to apply (12) for each h. If we calculate (12) for a fixed value of h, then for each value of h2 in the summation the function a(1)h1 can assume only one value, the one corresponding to the index h1 = h − h2; this explains the final structure of (12). If we now define:

fh( h1, h2 ) = ∑_{h2=n2}^{Kn2} a(2)h2 a(1)h−h2
we note that fh( h1, h2 ) is a vector obtained by calculating (12) for each h from N to KN; for this reason this vector has length (K−1)N + 1. Now let us introduce the third type of film. Applying (11) for three types of film, we obtain:

ah( 3 ) = ∑_{h1+h2+h3 = h} ∏_{m=1}^{3} a(m)hm

It is easy to see that:

ah( 3 ) = ∑_{h3=n3}^{Kn3} a(3)h3 ∑_{h1+h2 = h−h3} ∏_{m=1}^{2} a(m)hm

and, calling:

fh( h1, h2, h3 ) = ∑_{h3=n3}^{Kn3} a(3)h3 fh−h3( h1, h2 )
we can evaluate all the elements of the vector fh( h1, h2, h3 ) for each value of h, simply exploiting the elements of the vector fh( h1, h2 ) previously calculated using (12). In the same way we can calculate all the elements of the vector fh( h1, h2, h3, h4 ) as follows:

fh( h1, h2, h3, h4 ) = ah( 4 ) = ∑_{h1+h2+h3+h4 = h} ∏_{m=1}^{4} a(m)hm

The last formula is equivalent to the following:

fh( h1, ..., h4 ) = ∑_{h4=n4}^{Kn4} a(4)h4 ∑_{h1+h2+h3 = h−h4} ∏_{m=1}^{3} a(m)hm

Consequently:

fh( h1, ..., h4 ) = ∑_{h4=n4}^{Kn4} a(4)h4 fh−h4( h1, h2, h3 )
Proceeding in the same way and introducing all the other types of film, up to the last one (the M-th type), we obtain the same kind of result. In conclusion, the last result can be generalized as follows:

fh( h1, ..., hm ) = ∑_{hm=nm}^{Knm} a(m)hm fh−hm( h1, ..., hm−1 )   (13)
Equation (13) is the required iterative method for finding the terms ah. Furthermore:

ah = fh( h1, h2, ..., hM ),   ∀ N ≤ h ≤ KN   (14)

If, given the status of the network (i.e., nm films of type m for each m from 1 to M), there is another request for a film of type m, we have to keep in memory the matrix g(m)N−1( h1, αN ) relative to the m-th type of film, as in the homogeneous case. We then find the matrix g(m)N( h1, αN+1 ) through the matrix of the conditional probabilities and finally the new terms a(m)hm. All the other terms a(i)hi with i ≠ m remain unchanged. The terms ah are then found through (14). Since this procedure is valid for every m, it is clear that we have to keep in memory M different matrices (the g(m)N−1( h1, αN ) for each m) and, for each new request, calculate the terms ah again through (14).
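The heterogeneous combination (11)/(13) amounts to an iterated discrete convolution of the per-type distributions a(m), since different film types are assumed independent. A Python sketch with toy normalized distributions (not derived from real traces; each a(m) would come from the homogeneous method):

```python
# Sketch of recursion (13): fold in one film type at a time, convolving the
# running distribution f with the next per-type distribution a^(m).

def combine_types(per_type):
    """per_type[m] is a dict {h_m: a^(m)_{h_m}}. Returns {h: a_h}."""
    f = per_type[0]
    for a_m in per_type[1:]:
        g = {}
        for hm, p in a_m.items():            # sum over h_m of a^(m)_{h_m} * f_{h - h_m}
            for h_prev, q in f.items():
                g[hm + h_prev] = g.get(hm + h_prev, 0.0) + p * q
        f = g
    return f

# two toy film types, distributions already normalized
a1 = {2: 0.5, 3: 0.3, 4: 0.2}                # type 1
a2 = {1: 0.7, 2: 0.3}                        # type 2
a = combine_types([a1, a2])
assert abs(sum(a.values()) - 1.0) < 1e-9
assert abs(a[3] - 0.5 * 0.7) < 1e-9          # only combination: h1=2, h2=1
```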
IV. Numerical results
In this section the algorithm developed in the previous sections is applied to specific cases in order to evaluate the aggregate bandwidth needed by a given number of streams. In particular, we suppose that only one type of video stream is present in the network, represented by 40000 frames of the film Star Wars, smoothed using a transmission buffer of 1 MB. We have divided the total bit rate range of the stream into K = 5 and K = 10 equal bins. The results of the algorithm developed under the assumption of correlation between video streams are compared with the results of the same algorithm used under the hypothesis of total independence of all the video frames of all the video streams, and with the simulation results. The figures below report the values of the aggregate bandwidth obtained in the three cases, for different numbers of video streams. The marginal distribution of the video stream, found by applying the histogram method in the case of total independence of the video frames, is the following (for K = 5):

p1 = 0, p2 = 0.734475, p3 = 0.2391, p4 = 0.0018, p5 = 0.024625

The size of each histogram bin is ∆ = 147552 bit/s. As an example, we also report the matrix of the conditional probabilities (αi as row index, αj as column index, so each column sums to one), derived under the hypothesis that all the video streams start within a time interval of 2 minutes, with K = 5 histogram bins:

| 1   0          0         0         0        |
| 0   0.960647   0.079645  0.285278  0.379579 |
| 0   0.002643   0.879114  0.497361  0.348919 |
| 0   0.000551   0.003645  0.018472  0.019908 |
| 0   0.0012366  0.037596  0.198889  0.251594 |
We also note that, since these matrices are derived empirically from the time interval in which the video streams start, a different matrix is derived for each different time interval containing all the starting points of the video streams. If we suppose K = 10 equal bins in the histogram method, we derive the following probabilities in the case of independence of all the video frames:

p1 = 0, p2 = 0, p3 = 0.179625, p4 = 0.55485, p5 = 0.14125, p6 = 0.096975, p7 = 0.0012, p8 = 0.0006, p9 = 0.0006, p10 = 0.024025

The size of each histogram bin is ∆ = 73776 bit/s. Also in this case, we can compute all the matrices of the conditional probabilities, which differ depending on the time interval in which the video streams start. All the previous data are exploited to evaluate the aggregate bandwidth. The results obtained under the hypothesis of independence are compared with the ones obtained exploiting the matrix of the conditional probabilities as input data, and all these results are then compared with the simulations. Figures 3a, 3b and 3c report these comparisons; in each figure a different time interval containing all the stream starting points has been considered.

[Figure omitted: aggregate bandwidth versus number of streams; curves: Simulations, Conditional probabilities K=5, Conditional probabilities K=10, Independent probabilities K=5, Independent probabilities K=10.]
Figure 3a. Aggregate bandwidth for 40000 frames of Star Wars. The starting points of all the streams fall in a time interval of 40000 frames.
[Figure omitted: aggregate bandwidth versus number of streams; curves: Simulations, Conditional probabilities K=5, Conditional probabilities K=10, Independent probabilities K=5, Independent probabilities K=10.]
Figure 3b. Aggregate bandwidth for 40000 frames of Star Wars. The starting points of all the streams fall in a time interval of 3600 frames.
[Figure omitted: aggregate bandwidth versus number of streams; curves: Simulations, Conditional probabilities K=5, Conditional probabilities K=10, Independent probabilities K=5, Independent probabilities K=10.]
Figure 3c. Aggregate bandwidth for 40000 frames of Star Wars. The starting points of all the streams fall in a time interval of 1800 frames.
As can be observed from the previous figures, if the starting points of all the video streams fall within a time interval of 40000 frames (the entire duration of the video stream), the results obtained considering the conditional probabilities and the independent probabilities as input data are very close, but both overestimate the simulation results. If we introduce a certain amount of correlation, corresponding to a shorter time interval in which all the video streams start (see Figures 3b and 3c), the algorithm results underestimate the simulation results in both cases, but with the conditional probabilities they are closer to the simulations. This implies that introducing an amount of correlation between the video streams as input data for the algorithm helps to better approximate the simulation results, even if the algorithm still underestimates them, which is a relevant disadvantage.

V. Conclusions
The algorithm developed in this work addresses the important aspect of estimating the aggregate bandwidth of a certain number of video streams under some conditions of temporal
correlation between the video flows. From the results obtained it is clear that the algorithm better approximates the simulations, showing the importance of introducing the correlation between video streams, even if only one type of film has been considered in the numerical results shown in the previous section. However, only the correlation between two video streams of the same type has been considered in this paper; to better describe the system, the correlation among more than two streams, and not necessarily of the same type, should also be taken into account. This will presumably be the subject of future work.

References
[1] Andrew S. Tanenbaum, Computer Networks, Prentice-Hall, Third Edition, 1996.
[2] James D. Salehi, Zhi-Li Zhang, Jim Kurose, Don Towsley, “Supporting Stored Video: Reducing Rate Variability and End-to-End Resource Requirements Through Optimal Smoothing”, IEEE/ACM Transactions on Networking, vol. 6, no. 4, pp. 397-410, August 1998.
[3] Zhi-Li Zhang, Jim Kurose, James D. Salehi, Don Towsley, “Smoothing, Statistical Multiplexing, and Call Admission Control for Stored Video”, IEEE Journal on Selected Areas in Communications, vol. 15, no. 6, pp. 1148-1166, August 1997.
[4] A. Papoulis, Probability, Random Variables and Stochastic Processes, McGraw-Hill, 1991.
[5] D. Striccoli, “Stima della banda aggregata per sistemi di distribuzione video” (Aggregate bandwidth estimation for video distribution systems), Dr. Eng. Degree Thesis, Polytechnic of Bari, Italy, 2000 (in Italian).