Non Gaussian and Long Memory Statistical Modeling of Internet Traffic

A. Scherrer (1), N. Larrieu (2), P. Borgnat (3), P. Owezarski (2), P. Abry (3)

(1) LIP (UMR CNRS), ENS Lyon, France
(2) LAAS-CNRS, Toulouse, France
(3) Physics Lab. (UMR CNRS), ENS Lyon, France
Abstract. Due to the variety of services and applications available on today's Internet, many properties of the traffic stray from the classical characteristics (Gaussianity and short memory) of standard models. The goal of the present contribution is to propose a statistical model able to account both for the non Gaussian and long memory properties of aggregated count processes. First, we introduce the model and a procedure to estimate the corresponding parameters. Second, using a large set of data taken from public reference repositories (Bellcore, LBL, Auckland, UNC, CAIDA) and collected by ourselves, we show that this stochastic process is relevant for Internet traffic modeling over a wide range of aggregation levels. In conclusion, we indicate how this modeling could be used in IDS design.

Key Words: Statistical traffic modeling, Non Gaussianity, Long Memory, Aggregation Level.
1 Motivation: Internet traffic time series modeling
Internet traffic grows in complexity as the Internet becomes the universal communication network and conveys all kinds of information, from binary data to real-time video or interactive content. Evolving toward a multi-service network, its heterogeneity increases in terms of technologies, provisioning, and applications. Indeed, new applications with varied and changing requirements generate traffic properties that deviate from those accounted for by simple models. Moreover, the various networking functions required for providing differentiated and guaranteed services - such as congestion control, traffic engineering, buffer management, etc. - generally involve specific time scales. Also, the natural variability of Internet traffic is high: it varies from one link to another and depends on the time of day. Therefore, a versatile model for Internet traffic, able to capture its characteristics regardless of time, place, and aggregation level or time scale, is a step forward for monitoring and improving the Internet. Such a model can serve for designing new network mechanisms able to cope with the inherent complexity, for traffic management, for charging, for injecting realistic traffic into simulators, for Intrusion Detection Systems, etc.

• Traffic Modeling: A Brief Overview. Packet arrival processes are natural fine-grain models of computer network traffic. They have long been recognized
as not being Poisson (or renewal) processes, insofar as the inter-arrival delays are not independent [1]. Therefore, non-stationary point processes [2] or stationary Markov Modulated Point processes [3, 4] have been proposed as models. However, modeling at this fine granularity implies handling the very large number of packets involved in any computer network traffic, hence huge data sets to manipulate. A coarser description of the traffic is conveniently used: one studies packet or byte aggregated count processes. They are defined as the number of packets (resp., bytes) whose time stamps t_l lie within the k-th window of size ∆ > 0, i.e., k∆ ≤ t_l < (k + 1)∆; this count is denoted X_∆(k). In the present work, we concentrate on this aggregated packet level; the sketch after Eq. (1) below illustrates the construction. More detailed reviews of traffic models can be found in, e.g., [5, 6]. Our objective is the modeling of X_∆(k). For this, one mostly uses stationary processes. Marginal distributions and auto-covariance functions are two major characteristics that affect network performance and hence need to be modeled simultaneously.

• Marginal Distributions. Due to the point process nature of the underlying traffic, Poisson or exponential distributions could be expected at small aggregation levels ∆, but this fails at larger aggregation levels. As X_∆(k) is by definition a positive random variable, other works proposed to describe its marginal with common positive laws such as log-normal, Weibull or Gamma distributions [5]. For highly aggregated data, Gaussian distributions are used in many cases as relevant approximations. However, none of them can satisfactorily model traffic marginals at both small and large ∆. As will be argued in the current paper, empirical studies suggest that a Gamma distribution Γ_{α,β} best captures the marginals of X_∆, providing a unified description over a wide range of aggregation scales ∆.

• Covariance Functions and Higher Order Statistics. In Internet monitoring projects, it is observed that traffic under normal conditions naturally presents large fluctuations and variations in its throughput at all scales. This is often described in terms of long memory [7], self-similarity [6, 8], or multifractality [9], which impact the second (or higher) order statistics. In computer network traffic, the long memory or long range dependence (LRD) property (cf. [10]) is an important feature as it seems related to decreases in the QoS as well as in the performance of the network (see e.g., [11]). Modeling LRD precisely in Internet time series is thus necessary to perform accurate and relevant network design (buffer sizing, ...) and performance predictions (delay as a function of utility, ...). Let us recall that long range dependence is defined from the behaviour at the origin of the power spectral density f_{X_∆}(ν) of the process:

    f_{X_\Delta}(\nu) \sim C\,|\nu|^{-\gamma}, \quad |\nu| \to 0, \quad \text{with } 0 < \gamma < 1.    (1)
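As an illustration of the aggregation just defined, the following minimal sketch (our own, assuming packet time stamps are available as a plain array in seconds and taking the time origin at the first arrival; the toy arrival process is purely illustrative) computes X_∆(k) for a few aggregation levels ∆.

    import numpy as np

    def aggregate_counts(timestamps, delta):
        # Aggregated count process X_Delta(k): number of packets whose time stamp t
        # satisfies k*delta <= t - t0 < (k + 1)*delta, with t0 the first arrival.
        timestamps = np.asarray(timestamps, dtype=float)
        bins = np.floor((timestamps - timestamps.min()) / delta).astype(int)
        return np.bincount(bins)

    # Toy example: Poisson-like arrivals with a 4 ms mean inter-arrival time,
    # viewed at three aggregation levels (delta in seconds).
    rng = np.random.default_rng(0)
    arrivals = np.cumsum(rng.exponential(scale=0.004, size=100_000))
    for delta in (0.004, 0.032, 0.256):
        X = aggregate_counts(arrivals, delta)
        print(f"delta = {delta * 1e3:5.1f} ms: {X.size} samples, mean count = {X.mean():.2f}")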
Note that Poisson or Markov processes, or their variants, are not easily suited to incorporating long memory. They may manage to approximately model the LRD existing in a finite-duration observation, but only at the price of an increase in the number of adjustable parameters involved in the data description: a highly undesirable feature. Parsimony in describing data is indeed a much desired property, as it
may be a necessary condition for versatile, robust, meaningful, real-time and on-the-fly modeling. One can also incorporate long memory and short correlations directly into point processes using cluster point process models, yielding an interesting description of packet arrival processes as described in [12]. However, this model does not seem adjustable enough to encompass a large range of aggregation window sizes. A popular alternative relies on canonical long range dependent processes such as fractional Brownian motion, fractional Gaussian noise [13] or Fractionally Integrated Auto-Regressive Moving Average (farima) processes (see [6] and the references therein). Due to the many different network mechanisms and various source characteristics, short term dependencies also exist (superimposed on LRD) and play a central role (cf. for instance [14]). This leads us to propose here the use of processes that have the same covariance as farima models [10], as they contain both short range and long range correlations.

• Aggregation Level. A recurrent issue in traffic modeling lies in the choice of the relevant aggregation level ∆. This is an involved question whose answer combines the characteristics of the data themselves and the goal of the modeling, as well as technical issues such as real-time, buffer size and computational cost constraints. Given this difficulty of choosing ∆ a priori, it is of great interest to have at one's disposal a statistical model that remains relevant for a large range of values of ∆, a feature we seek in the model proposed below.

• Goals and Outline of the Paper. This paper therefore focuses on the joint modeling of the marginal distribution and the covariance structure of Internet traffic time series, in order to capture both their non Gaussian and long memory features. For this purpose, we propose to use a non Gaussian long memory process whose marginal distribution is a Gamma law, Γ_{α,β}, and whose correlation structure is that of a farima process, the farima(φ, d, θ) (Section 2). Through the application of this model to several reference network traffic traces, we show how this 5-parameter model, (α, β, d, φ, θ), efficiently captures their first and second order statistical characteristics. We also observe that this modeling is valid for a wide range of aggregation levels ∆ (Section 3).
2 Non Gaussian Long Range Dependent Stochastic Modeling: the Gamma farima Model

2.1 Definitions and properties of the Gamma farima model
Given an aggregation level ∆, the notation {X_∆(k), k ∈ Z} denotes the aggregated packet count process, obtained for instance from the inter-arrival time series. The process X_∆ is assumed to be second order stationary. To model it, we use the following non Gaussian, long range dependent, stationary model: the Gamma (marginal) farima (covariance) process.

• First Order Statistics (Marginals): Gamma Distribution. A Γ_{α,β} random variable (RV) is a positive random variable characterized by its shape α > 0
and scale β > 0 parameters. Its probability density function (pdf) is defined as:

    \Gamma_{\alpha,\beta}(x) = \frac{1}{\beta\,\Gamma(\alpha)} \left(\frac{x}{\beta}\right)^{\alpha-1} \exp\left(-\frac{x}{\beta}\right),    (2)

where Γ(u) is the standard Gamma function (see e.g., [15]). Its mean and variance read, respectively, µ = αβ and σ² = αβ². Hence, for a fixed α, varying β amounts to varying µ and σ proportionally. For a fixed β, the pdf of a Γ_{α,β} random variable evolves from a highly skewed shape (α → 0), via a pure exponential (α = 1), towards a Gaussian law (α → +∞). Therefore, 1/α can be read as an index of the distance between Γ_{α,β} and Gaussian laws. For instance, the skewness and kurtosis read, respectively, 2/√α and 3 + 6/α. The proposed model assumes that the first order statistics of the data follow a Gamma distribution with parameters indexed by ∆.

• Second Order Statistics (Covariance): Long Range Dependence. For the sake of simplicity, in the present work we restrict the general farima(P, d, Q) to the case where the two polynomials P and Q, which describe the short-range correlation properties of the time series (and could be of arbitrary orders), are of order 1: the farima(θ, d, φ). The power spectral density of a farima(θ, d, φ) reads, for −1/2 < ν < 1/2:

    f_X(\nu) = \sigma^2\, |1 - e^{-i 2\pi\nu}|^{-2d}\, \frac{|1 - \theta e^{-i 2\pi\nu}|^2}{|1 - \phi e^{-i 2\pi\nu}|^2},    (3)

where σ² stands for a multiplicative constant and d for a fractional integration parameter (−1/2 < d < 1/2). The constant σ² is chosen such that the integral of the spectrum equals the variance given by the marginal law, σ_X² = αβ². The parameter d characterizes the strength of the LRD: a straightforward expansion of Eq. (3) at frequencies close to 0 shows that the farima(θ, d, φ) spectrum displays LRD as soon as d ∈ (0, 1/2), with long memory parameters (see Eq. (1)) γ = 2d and C = σ². The parameters θ and φ enable the design of the spectrum at high frequencies and hence the modeling of short range correlations.

• Gamma farima Model. The Γ_{α,β} - farima(φ, d, θ) model involves 5 parameters, which need to be adjusted from the data, to describe it at each aggregation level ∆. As such, it is a parsimonious model that is nevertheless versatile enough to be confronted with data. As the process defined above is not Gaussian, the specification of its first and second order statistical properties does not fully characterize it. Room for further design to adjust other properties of the traffic remains available within the framework of the model; however, this is beyond the scope of the present contribution. Sample paths of this Γ_{α,β} - farima(φ, d, θ) model can be numerically generated using a circulant matrix embedding based synthesis procedure fully described in [16]; a simplified synthesis sketch is given below.
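To make the model concrete, the following sketch first evaluates the farima(θ, d, φ) spectrum of Eq. (3), then generates an approximate Γ_{α,β} - farima(φ, d, θ) sample path by spectrally synthesizing a Gaussian series with that spectrum and mapping it pointwise through the Gaussian and Gamma distribution functions. This is only an illustrative stand-in, under our own simplifying assumptions, for the exact circulant embedding procedure of [16]: in particular, the pointwise marginal transform only approximately preserves the prescribed covariance, and all function names are ours.

    import numpy as np
    from scipy import stats

    def farima_spectrum(nu, d, theta, phi, sigma2=1.0):
        # Power spectral density of a farima(theta, d, phi) process, Eq. (3).
        z = np.exp(-2j * np.pi * nu)
        return (sigma2 * np.abs(1 - z) ** (-2 * d)
                * np.abs(1 - theta * z) ** 2 / np.abs(1 - phi * z) ** 2)

    def gamma_farima_path(n, alpha, beta, d, theta, phi, seed=None):
        # Approximate Gamma-farima sample path: Gaussian spectral synthesis,
        # then a pointwise transform to Gamma(alpha, scale=beta) marginals.
        rng = np.random.default_rng(seed)
        nu = np.fft.rfftfreq(n)                           # frequencies in [0, 1/2]
        psd = np.zeros_like(nu)
        psd[1:] = farima_spectrum(nu[1:], d, theta, phi)  # skip nu = 0, where Eq. (3) diverges
        noise = rng.normal(size=nu.size) + 1j * rng.normal(size=nu.size)
        g = np.fft.irfft(np.sqrt(psd) * noise, n=n)       # Gaussian series, ~ target spectrum
        g = (g - g.mean()) / g.std()
        u = stats.norm.cdf(g)                             # probability integral transform
        return stats.gamma.ppf(u, a=alpha, scale=beta)

    # Example: mean and variance should be close to alpha*beta and alpha*beta**2.
    X = gamma_farima_path(2**14, alpha=1.5, beta=3.0, d=0.25, theta=0.3, phi=0.1, seed=0)
    print(X.mean(), X.var())

In this sketch, θ enters the moving-average (numerator) factor and φ the autoregressive (denominator) factor of Eq. (3); the standardization step is only needed because the discrete spectral synthesis does not control the variance exactly.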
2.2 Analysis and Estimation
• Gamma parameter estimation. A classical maximum likelihood method is used for the estimation of α and β [17].
Data          Date(Start Time)     T (s)   Network(Link)     IAT (ms)  Repository
PAUG          1989-08-29 (11:25)    2620   LAN (100BaseT)    2.6       ita.ee.lbl.gov
LBL-TCP-3     1994-01-20 (14:10)    7200   WAN (100BaseT)    4         ita.ee.lbl.gov
AUCK-IV       2001-04-02 (13:00)   10800   WAN (OC3)         1.2       wand.cs.waikato.ac.nz
CAIDA         2002-08-14 (10:00)     600   Backbone (OC48)   0.01      www.caida.org
UNC           2003-04-06 (16:00)    3600   WAN (100BaseT)    0.8       www-dirt.cs.unc.edu
Metrosec-fc1  2005-04-14 (14:15)     900   LAN (100BaseT)    0.8       www.laas.fr/METROSEC

Table 1: Data Description. Description of the collection of traces. T is the duration of the trace, in seconds. IAT is the mean inter-arrival time, in ms.
Using the method mentioned above for synthesizing realizations of the model, numerical simulations have been performed to check that this estimation procedure provides accurate estimates even when applied to processes with long range dependence [16]. Moreover, it compares favourably with the usual moment-based technique, β̂ = σ̂²/μ̂, α̂ = μ̂/β̂, where μ̂ and σ̂² are the standard empirical mean and variance, whose quality is affected by the LRD.

• Farima parameter estimation. Estimation of the 3 parameters (θ, d, φ) boils down to a joint measurement of long memory and short-time correlations. A considerable amount of work and attention (see e.g. [18] for a thorough and up-to-date review) has been devoted to the estimation of long memory, in itself a difficult task. Joint estimation of long and short range parameters is possible, for instance via full maximum likelihood estimation based on the analytical form of the spectrum recalled in Eq. (3), but it is computationally heavy. We prefer to rely on a more practical two-step estimation procedure: d is first estimated using the standard wavelet-based methodology (not recalled here; the reader is referred to [19]); then the residual ARMA parameters θ and φ are estimated using least squares techniques.
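A possible implementation of these estimators is sketched below: a maximum-likelihood Gamma fit (with the moment estimators β̂ = σ̂²/μ̂, α̂ = μ̂/β̂ for comparison) and a crude estimate of d obtained by regressing the log2-variance of Haar wavelet detail coefficients on the octave j. The Haar-based regression is only a simplified stand-in for the wavelet methodology of [19], and all helper names are ours.

    import numpy as np
    from scipy import stats

    def gamma_mle(x):
        # Maximum-likelihood Gamma fit with the location fixed at 0; returns (alpha, beta).
        alpha, _, beta = stats.gamma.fit(x, floc=0)
        return alpha, beta

    def gamma_moments(x):
        # Moment estimators: beta = var/mean, alpha = mean/beta.
        mu, var = np.mean(x), np.var(x)
        beta = var / mu
        return mu / beta, beta

    def haar_logscale_diagram(x, j_max=None):
        # log2 of the average squared Haar detail coefficient, per octave j = 1, 2, ...
        x = np.asarray(x, dtype=float)
        j_max = j_max or int(np.log2(x.size)) - 3
        S = []
        for _ in range(j_max):
            n = (x.size // 2) * 2
            details = (x[1:n:2] - x[0:n:2]) / np.sqrt(2.0)
            S.append(np.log2(np.mean(details ** 2)))
            x = (x[1:n:2] + x[0:n:2]) / np.sqrt(2.0)   # Haar approximation -> next octave
        return np.arange(1, len(S) + 1), np.array(S)

    def estimate_d(x, j1=3, j2=None):
        # Under Eq. (1) with gamma = 2d, log2 S_j grows linearly in j with slope 2d
        # at coarse scales, so d is taken as half the regression slope beyond j1.
        j, S = haar_logscale_diagram(x, j_max=j2)
        sel = j >= j1
        slope = np.polyfit(j[sel], S[sel], 1)[0]
        return slope / 2.0

    # Toy check on i.i.d. Gamma data (no correlation, so the estimated d should be ~0).
    x = stats.gamma.rvs(a=1.5, scale=3.0, size=2**16, random_state=np.random.default_rng(1))
    print("MLE:", gamma_mle(x), " moments:", gamma_moments(x), " d:", round(estimate_d(x), 3))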
3 Data analysis: Results and Discussions

3.1 Data and Analysis
The model and analysis proposed in the present contribution are illustrated at work on a collection of standard Internet traces, described in Table 1, gathered from most of the major available Internet trace repositories (Bellcore, LBL, Auckland Univ., UNC, CAIDA), as well as on time series collected by ourselves within the METROSEC research project. This covers a significant variety of traffic, networks (Local Area Networks, Wide Area Networks, edge networks, core networks, ...) and links, collected over the last 16 years (from 1989 to 2005). The Γ_{α,β} - farima(φ, d, θ) analysis procedures described above are applied to the traffic time series X_∆ for various levels of aggregation ∆, independently. Stationarity of X_∆ is assumed for the theoretical modeling. Hence, we first perform an empirical time-consistency check on the estimation results obtained from adjacent non-overlapping sub-blocks of X_∆, for each aggregation level; a simple sketch of such a check is given below. Then, we analyze only data sets for which stationarity is a reasonable hypothesis. This approach is very close in spirit to the ones developed in [20, 21]. Due to obvious space constraints, we choose to report results only for the (somewhat old but massively studied) LBL-TCP-3 time series and for the (very recent, and hence representative of today's Internet, collected by ourselves) Metrosec-fc1 time series. However, similar results were obtained for all the data listed in Table 1 and are available upon request. Preliminary results (including the recent and well-known Auck-IV) were also presented in [16, 22].
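The time-consistency check mentioned above can be sketched as follows: the series is split into adjacent non-overlapping sub-blocks, the parameters are estimated on each block, and their block-to-block spread is inspected. The gamma_mle and estimate_d helpers are the hypothetical ones sketched in Section 2.2, and the crude acceptance rule shown is ours, not the formal test of [20, 21].

    import numpy as np

    def blockwise_estimates(X, n_blocks=8):
        # Estimate (alpha, beta, d) on adjacent non-overlapping sub-blocks of X_Delta.
        blocks = np.array_split(np.asarray(X, dtype=float), n_blocks)
        out = {"alpha": [], "beta": [], "d": []}
        for b in blocks:
            a, s = gamma_mle(b)          # hypothetical helper, Section 2.2
            out["alpha"].append(a)
            out["beta"].append(s)
            out["d"].append(estimate_d(b))
        return {k: np.array(v) for k, v in out.items()}

    def looks_time_consistent(est, rel_tol=0.5):
        # Crude rule (ours): block-to-block spread small relative to the mean value.
        return all(np.std(v) <= rel_tol * abs(np.mean(v)) for v in est.values())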
[Fig. 1 (plots not reproduced): (a) LBL-TCP-3, (b) Metrosec-fc1. Γ_{α,β} - farima(φ, d, θ) fits for the marginals (top rows) and covariances (bottom rows, logscale diagrams log2 S_j versus octave j) for ∆ = 4, 32 and 256 ms (left to right). j = 1 corresponds to 1 ms. See comments in text.]
3.2 Marginals
Fig. 1 (first and third rows) shows the model fits superimposed on the empirical probability density functions. It clearly illustrates the relevance of the Γ_{α_∆,β_∆} fits of the marginal statistics of X_∆. It also shows that this adequacy holds over a wide range of aggregation levels ∆, ranging from 1 ms up to 10 s. Let us again emphasize that this is valid for various traffic conditions. The relevance of these fits has been characterized by means of χ² and Kolmogorov-Smirnov goodness-of-fit tests (not reported here); a possible sketch of such a check is given below. For some of the analyzed time series or some aggregation levels, exponential, log-normal or χ² laws may fit the data better. However, Gamma distributions are never significantly outperformed, and they are undoubtedly the distributions that remain the most relevant when varying ∆ from very fine to very large.
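One possible way of reproducing such a goodness-of-fit assessment is sketched below: a one-sample Kolmogorov-Smirnov test against the fitted Gamma law at each aggregation level. The aggregate_counts and gamma_mle helpers are the hypothetical ones sketched earlier, and the KS p-values should be read with care, since the parameters are fitted on the same data and the samples are dependent.

    import numpy as np
    from scipy import stats

    def gamma_fit_report(timestamps, deltas=(0.001, 0.004, 0.032, 0.256, 1.0, 10.0)):
        # Fit a Gamma marginal to X_Delta for several Delta (in s) and run a KS test.
        for delta in deltas:
            X = aggregate_counts(timestamps, delta)   # hypothetical helper, Section 1
            alpha, beta = gamma_mle(X)                # hypothetical helper, Section 2.2
            ks = stats.kstest(X, stats.gamma(a=alpha, scale=beta).cdf)
            print(f"Delta = {delta * 1e3:7.1f} ms: alpha = {alpha:6.2f}, beta = {beta:7.2f}, "
                  f"KS statistic = {ks.statistic:.3f} (p = {ks.pvalue:.2f})")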
[Fig. 2 (plots not reproduced): LBL-TCP-3. Estimated parameters of the Γ_{α,β} - farima(φ, d, θ) model. Left: estimated α_∆ and β_∆ (∆ in ms); middle and right: estimated d, then θ and φ, as functions of log2 ∆.]
The adequacy of the Γ_{α,β} class of distributions can be related to its stability under multiplication and addition. First, let X be a Γ_{α,β} RV and let λ > 0; then λX is a Γ_{α,λβ} RV. Traffic-wise, this can be read as the fact that a global increase of traffic (its mean µ and standard deviation σ are jointly multiplied by a positive factor λ) is seen through a modification of the scale parameter β, while it does not modify the shape parameter α. Therefore, a (multiplicative) increase of traffic volume is very well accounted for within the Γ_{α,β} family and does not imply that the marginals are closer to Gaussian laws. Second, let X_i, i = 1, 2 be two independent Γ_{α_i,β} random variables; then their sum X = X_1 + X_2 follows a Γ_{α_1+α_2,β} law. For traffic analysis, this is reminiscent of the aggregation procedure, since one obviously has X_{2∆}(k) = X_∆(2k) + X_∆(2k + 1). Therefore, assuming that X_∆ is a Γ_{α_∆,β_∆} process, independence among its samples would imply that α_∆ = α_0 ∆ and β_∆ = β_0 (where α_0 = N/T and β_0 denote reference constants, with N the total number of packets (or bytes) seen within the observation duration T). Correlations in X_∆ are revealed by departures of the functions α_∆ = f(∆) and β_∆ = g(∆) from these respectively linear and constant independence behaviours (a small numerical check of this argument is sketched at the end of this subsection). However, correlations do not change the fact that X_∆ is a Γ_{α_∆,β_∆} process for a wide range of aggregation levels, from very fine to very large.
This covariance of the Γ_{α_∆,β_∆} model under a change of aggregation level ∆ is therefore an argument very much in favour of its use to model Internet traffic.

Fig. 2 (left plot) shows the evolution of the estimated α and β as a function of ∆ for the LBL-TCP-3 trace (analogous features are observed for the Metrosec-fc1 time series). In practice, in our analysis the estimates are tied together via the relation α̂β̂ = N∆/T. Clearly, α̂ does not increase linearly with ∆ and β̂ is not constant: significant departures from the independence behaviours are observed. The increase of α̂ betrays the fact that the probability density functions evolve towards Gaussian laws; we conclude that the model captures the convergence to Gaussianity. However, Fig. 2 shows that this evolution towards Gaussianity is far slower than it would be for a pure Poisson arrival process, even when ∆ is below 1 s. Note that ∆ ≃ 1 s corresponds to the onset of long memory (as will be discussed below). The functions α̂_∆ and β̂_∆ shown here therefore mainly accommodate short range dependencies.
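A quick numerical check of these additive-stability and independence arguments can be sketched as follows: for i.i.d. Γ_{α,β} counts, doubling ∆ should double α and leave β unchanged, whereas a correlated series shows departures from this (α_∆, β_∆) behaviour. The gamma_mle helper is the hypothetical one from Section 2.2; the toy data are ours.

    import numpy as np
    from scipy import stats

    def dyadic_aggregate(X):
        # X_{2 Delta}(k) = X_Delta(2k) + X_Delta(2k + 1)
        n = (len(X) // 2) * 2
        return X[0:n:2] + X[1:n:2]

    rng = np.random.default_rng(2)
    X = stats.gamma.rvs(a=0.8, scale=2.0, size=2**16, random_state=rng)  # i.i.d. Gamma counts
    for level in range(4):
        alpha, beta = gamma_mle(X)   # hypothetical helper, Section 2.2
        print(f"Delta x {2**level:2d}: alpha = {alpha:5.2f} (independence predicts "
              f"{0.8 * 2**level:5.2f}), beta = {beta:4.2f}")
        X = dyadic_aggregate(X)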
3.3 Correlation Structure
Fig. 1 (second and fourth rows) compares the empirical logscale diagrams (LD) computed from the analyzed time series with logscale diagrams derived numerically by combining Eq. (3) with the estimated values of d, θ and φ (procedure developed with D. Veitch, cf. [23]). Increasing the aggregation level ∆ is equivalent to filtering out the fine-scale details while leaving the coarse scales unchanged. It is thus expected that the logscale diagram obtained for a given ∆ corresponds to a truncated version of the diagrams obtained for smaller ∆, as can be seen in Fig. 1 (a small sketch of this truncation effect is given below). Fig. 1 (second and fourth rows) shows that the coarse-scale part of the LDs is dominated by long memory (for scales 2^j ≥ 2^{10} ≃ 1 s). Fig. 2 (central plot) indicates that the wavelet-based estimates of d, which characterize the strength of the long memory, remain constant for all ∆, as expected from the theoretical analysis reported above. This is clear evidence that long range dependence is a long-lasting feature of the traffic that has no intrinsic time scale. Conversely, short-time correlations naturally have short characteristic time scales, so that the filtering due to an increase of ∆ smooths them out. Indeed, in Fig. 2 (right plot), one sees that the estimated φ and θ decrease towards zero for ∆ ≥ 200 ms. The model is then dominated by the long memory and close to a (non Gaussian) farima(0, d, 0).
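The truncation effect described above can be visustrated numerically: aggregating the series and recomputing the logscale diagram removes the fine-scale octaves while the coarse-scale part, and hence the estimated d, is essentially unchanged. The gamma_farima_path, dyadic_aggregate, haar_logscale_diagram and estimate_d helpers are the hypothetical ones sketched in Sections 2.1, 2.2 and 3.2.

    # Logscale diagrams of X_Delta for Delta, 8*Delta and 64*Delta (sketch).
    X = gamma_farima_path(2**17, alpha=1.2, beta=4.0, d=0.3, theta=0.3, phi=0.1, seed=3)
    for k in (0, 3, 6):                  # aggregate by factors of 2**k
        Y = X
        for _ in range(k):
            Y = dyadic_aggregate(Y)
        j, S = haar_logscale_diagram(Y)
        print(f"Delta x {2**k:2d}: octaves 1..{j[-1]}, estimated d = {estimate_d(Y):.2f}")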
4 Conclusion and Future Research
In the present work, we have shown that the proposed Γ_{α,β} - farima(φ, d, θ) model reproduces the marginals as well as both the short range and long range correlations of Internet traffic time series. This statement holds for a large variety of traffic collected on different links and networks, including traffic collected by ourselves. The central result is that the proposed model is able to fit data aggregated over a wide range of aggregation levels ∆. The increase of ∆ is accounted for via the evolution of the five parameters with ∆, enabling a smooth and continuous transition from pure exponential to Gaussian laws. This provides us with an efficient practical procedure to address the recurrent question in traffic modeling related to the choice of the relevant aggregation level ∆. As choosing ∆ a priori may be difficult, using a process that offers an evolutive modeling with ∆ is of high interest. Another central point lies in the fact that the (absolute) values of the model parameters are obviously likely to vary from one time series to another. In our model, the main information is contained in the evolution of the parameters with ∆, not in their absolute values. Also, the proposed model is very parsimonious, as it is fully described by 5 parameters. When necessary, the short range correlations can be further designed by straightforwardly extending the Γ_{α,β} - farima(φ, d, θ) model to a Γ_{α,β} - farima(P, d, Q) one, where P and Q are polynomials of arbitrary orders.

This contribution opens the track for two major research directions, currently under exploration. First, a numerical synthesis procedure exists that produces sample paths of stochastic processes having Γ_{α,β} marginals and farima(φ, d, θ) covariances [16]. With this numerical procedure, it is possible to simulate realistic background traffic that can be used to feed network simulators and hence to study network performance and quality of service alterations under various simulated circumstances. Second, the monitoring over time of the estimated parameters of this model may constitute a central piece in traffic anomaly detection. A change in the evolution with ∆ of one (or more) parameter(s), rather than a change of the values themselves, can be detected and used as an alarm indicating the occurrence of an anomaly. This signature can further be used to classify anomalies into legitimate ones (flash crowds, ...), illegitimate but non-aggressive ones (failures, ...) and illegitimate and aggressive ones (attacks, ...). In this latter case, it can be used to trigger a defense procedure. This may serve as a starting point for an Intrusion Detection System design.

Acknowledgments. The authors acknowledge the help of the ENS Lyon CRI group (H. Brunet, J.-L. Moisy, B. Louis-Lucas). They gratefully thank colleagues from the major Internet trace repositories for making their data available to us: S. Marron, F. Hernandez-Campos and C. Park (UNC, USA), and D. Veitch and N. Hohn (CubinLab, University of Melbourne, Australia). This work was carried out with the financial support of the French MNRT ACI Sécurité et Informatique 2004 grant.
References

1. Paxson, V., Floyd, S.: Wide-area traffic: The failure of Poisson modeling. ACM/IEEE Transactions on Networking 3(3) (1995) 226-244
2. Karagiannis, T., Molle, M., Faloutsos, M., Broido, A.: A non stationary Poisson view of the Internet traffic. In: INFOCOM (2004)
3. Andersen, A., Nielsen, B.: A Markovian approach for modelling packet traffic with long range dependence. IEEE Journal on Selected Areas in Communications 5(16) (1998) 719-732
4. Paulo, S., Rui, V., António, P.: Multiscale fitting procedure using Markov Modulated Poisson Processes. Telecommunication Systems 23(1/2) (2003) 123-148
5. Melamed, B.: An overview of TES processes and modeling methodology. In: Performance/SIGMETRICS Tutorials (1993) 359-393
6. Park, K., Willinger, W.: Self-similar network traffic: An overview. In Park, K., Willinger, W., eds.: Self-Similar Network Traffic and Performance Evaluation. Wiley (Interscience Division) (2000) 1-38
7. Erramilli, A., Narayan, O., Willinger, W.: Experimental queueing analysis with long-range dependent packet traffic. ACM/IEEE Transactions on Networking 4(2) (1996) 209-223
8. Leland, W.E., Taqqu, M.S., Willinger, W., Wilson, D.V.: On the self-similar nature of Ethernet traffic (extended version). ACM/IEEE Transactions on Networking 2(1) (1994) 1-15
9. Feldmann, A., Gilbert, A., Willinger, W.: Data networks as cascades: Investigating the multifractal nature of Internet WAN traffic. In: ACM/SIGCOMM Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication (1998)
10. Beran, J.: Statistics for Long-Memory Processes. Chapman & Hall, New York (1994)
11. Tsybakov, B., Georganas, N.: Self similar processes in communications networks. IEEE Transactions on Information Theory 44(5) (1998) 1713-1725
12. Hohn, N., Veitch, D., Abry, P.: Cluster processes, a natural language for network traffic. IEEE Transactions on Signal Processing, Special Issue on Signal Processing in Networking 8(51) (2003) 2229-2244
13. Norros, I.: On the use of fractional Brownian motion in the theory of connectionless networks. IEEE Journal on Selected Areas in Communications 13(6) (1995)
14. Huang, C., Devetsikiotis, M., Lambadaris, I., Kaye, A.: Modeling and simulation of self-similar Variable Bit Rate compressed video: A unified approach. In: ACM SIGCOMM, Cambridge, UK (1995)
15. Evans, M., Hastings, N., Peacock, B.: Statistical Distributions. Wiley (Interscience Division) (2000)
16. Scherrer, A., Abry, P.: Marginales non gaussiennes et longue mémoire : analyse et synthèse de trafic Internet. In: Colloque GRETSI-2005, Louvain-la-Neuve, Belgique (2005)
17. Hahn, G., Shapiro, S.: Statistical Models in Engineering. Wiley (Interscience Division) (1994) 88
18. Doukhan, P., Oppenheim, G., Taqqu, M.: Long-Range Dependence: Theory and Applications. Birkhäuser, Boston (2003)
19. Abry, P., Veitch, D.: Wavelet analysis of long-range dependent traffic. IEEE Transactions on Information Theory 44(1) (1998) 2-15
20. Uhlig, S., Bonaventure, O., Rapier, C.: 3D-LD: a graphical wavelet-based method for analyzing scaling processes. In: ITC Specialist Seminar, Würzburg, Germany (2003) 329-336
21. Veitch, D., Abry, P.: A statistical test for the time constancy of scaling exponents. IEEE Transactions on Signal Processing 49(10) (2001) 2325-2334
22. Borgnat, P., Larrieu, N., Abry, P., Owezarski, P.: Détection d'attaques de "déni de service" : ruptures dans les statistiques du trafic. In: Colloque GRETSI-2005, Louvain-la-Neuve, Belgique (2005)
23. Veitch, D., Abry, P., Taqqu, M.S.: On the automatic selection of the onset of scaling. Fractals 11(4) (2003) 377-390