INSTITUTE OF PHYSICS PUBLISHING

PHYSIOLOGICAL MEASUREMENT

Physiol. Meas. 23 (2002) R1–R38

PII: S0967-3334(02)18520-0

TOPICAL REVIEW

Fractal characterization of complexity in temporal physiological signals

A Eke, P Herman, L Kocsis and L R Kozak

Institute of Human Physiology and Clinical Experimental Research, Semmelweis University, Faculty of Medicine, PO Box 448, 1446 Budapest, Hungary

E-mail: [email protected]

Received 27 July 2001
Published 28 January 2002
Online at stacks.iop.org/PM/23/R1

Abstract
This review first gives an overview of the concept of fractal geometry with definitions and explanations of the most fundamental properties of fractal structures and processes, such as self-similarity, the power law scaling relationship, scale invariance, the scaling range and fractal dimensions. Having laid down the grounds of the basics in terminology and mathematical formalism, the authors systematically introduce the concept and methods of monofractal time series analysis. They argue that fractal time series analysis cannot be done in a conscious, reliable manner without a model capable of capturing the essential features of physiological signals with regard to their fractal analysis. They advocate the use of a simple, yet adequate, dichotomous model of fractional Gaussian noise (fGn) and fractional Brownian motion (fBm). They demonstrate the importance of incorporating a step of signal classification according to the fGn/fBm model prior to fractal analysis by showing that missing out on signal class can result in completely meaningless fractal estimates. The limitations and precision of various fractal tools are thoroughly described and discussed using the results of numerical experiments on ideal monofractal signals. The steps of a reliable fractal analysis are explained. Finally, the main applications of fractal time series analysis in biomedical research are reviewed and critically evaluated.

Keywords: time series analysis, complexity, fractal, self-similarity, correlations, laser Doppler-flux, heart rate variability

1. Introduction

Living organisms are complex both in their structures and in their functions. Parameters of human physiological functions such as arterial blood pressure (Blaber et al 1996), breathing


(Dirksen et al 1998) and heart rate (Huikuri et al 2000), etc, are not stable but fluctuate in time (Glass 2001). The actual pattern of these fluctuations results from an interaction between disturbances arising in the external or internal environment and the actions of control mechanisms attempting to maintain the equilibrium state of the system. The actual value of a parameter is measured by some form of receptor, whose signal is then compared by a control centre to an internal reference value, the set-point. The difference between the two, the error signal, determines the direction and magnitude of the change that brings the actual value of the controlled parameter near the set-point. The homeostatic view of physiological control places emphasis on the constancy of the controlled parameter in spite of its evident fluctuations around the set-point. Temporal fluctuations, however, can also result from intrinsic sources, such as the activity of the organism or ageing, both affecting the set-point to a varying extent. Hence the homeodynamic concept of physiological control seems more realistic (Goodwin 1997).

Understanding the actual mechanisms involved is usually attempted in two ways: (i) the reductionistic approach identifies the elements of the system and attempts to determine the nature of their interactions; (ii) the holistic approach looks at detailed records of the variations of the controlled parameter(s) and seeks a consistent pattern indicative of the presence of a control scheme. These approaches are not mutually exclusive; indeed, they are often used together. Both use mathematical (e.g. statistical) methods and, lately, mathematical models for rendering the findings conclusive or for shaping and strengthening the hypotheses.

Fluctuations in physiological systems are nonperiodic. Stochastic, chaotic and noisy chaotic models can treat these patterns mathematically. Stochastic (random) models assume that the fluctuations result from a large number of weak influences; chaotic models conceptualize that strong nonlinear interactions between a few factors shape the fluctuations; a combination of the two into a noisy chaotic model is also possible. Among the stochastic approaches, fractal models give the best description of reality. In this review we concentrate on the fractal analysis of time series capturing the nonperiodic fluctuations of physiological systems.

The classic theory of homeostasis focused on the set-point as determined by the mean of the physiological signal; the fluctuations around the mean were thus discarded as 'noise'. Research of the last decades revealed that homeodynamic and holistic concepts, such as fractal and nonlinear analyses, can be very useful in understanding the complexity of physiological functions. They revealed that (i) physiological processes can operate far from equilibrium; (ii) their fluctuations exhibit long-range correlations that extend across many time scales; and (iii) the underlying dynamics can be highly nonlinear, 'driven by' deterministic chaos. In our view the fractal and chaotic approaches are not mutually exclusive, because they present two ways of looking at physiological complexity (Bassingthwaighte et al 1994).
The fractal approach is aimed at demonstrating the presence of scale-invariant, self-similar features (correlation, memory) in a detailed record of the temporal variations of a physiological parameter, while the very same record can also be analysed according to the concept of deterministic chaos, attempting to find the minimal set of often simple differential equations capable of producing the erratic, seemingly random dynamics of the time series on deterministic grounds.

2. Concept of fractal geometry

Fractal geometry is rooted in the works of late 19th and early 20th century mathematicians who found their fancy in generating complex geometrical structures from simple objects like a line, a triangle, a square or a cube (the initiator) by applying a simple rule of transformation (the generator) in an infinite number of iterative steps. The complex structure that resulted from this iterative process proved equally rich in detail at every scale of observation, and

Figure 1. Ideal, mathematical fractals (the 1D von Koch curve, the 2D Sierpinski triangle and the 3D Menger sponge). These structures can be generated by repetitively applying a single rule of generation (the generator) to the object on subsequent scales, starting out with the one on the largest scale (the initiator). Note that in a mere three generations this procedure results in a complex, self-similar structure (a fractal).

when their pieces were compared to larger pieces or to the whole, they proved similar to each other (see the von Koch curve, Sierpinski gasket and Menger sponge (figure 1), the Cantor set, and the Peano and Hilbert curves (Peitgen et al 1992)). These peculiar-looking geometrical structures lay dormant until Benoit Mandelbrot realized that they represent a new breed of geometry suitable for describing the essence of the complex shapes and forms of nature (Mandelbrot 1982). Indeed, traditional Euclidean geometry with its primitive forms cannot describe the elaborate forms of natural objects. As Mandelbrot (1982) put it: 'Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line.' Euclidean geometry can handle complex structures only by breaking them down into a large number of Euclidean objects assembled according to an equally large set of corresponding spatial coordinates. The complex structure is thus converted into an equally complex set of numbers, unsuitable for capturing the essence of a design or for characterizing its complexity. In nature, similarly to the iterative process of the von Koch curve, complex forms and structures, such as a tree, begin to take shape as a simple initiator, the first sprout, and evolve by reapplying the coded rule of the generator, branching dichotomously over several spatial scales. A holistic geometrical description of a tree is thus possible by defining the starting condition (initiator), the rule of morphogenesis (generator) and the number of scales to which this rule should be applied (scaling interval).

3. Properties of fractal structures and processes

Unlike Euclidean geometry, which applies axioms and rules to describe objects of integer dimensions (1, 2 and 3), the complex geometrical objects mentioned above can be characterized by recursive algorithms that extend the use of dimension to the noninteger range (Hermán et al 2001). Hence, Mandelbrot named these complex structures fractals, emphasizing their

Figure 2. Two basic forms of fractals, demonstrated on the example of spatial fractals: an exact fractal (the Mandelbrot tree) and a statistical fractal (a pial arterial network). An exact fractal is assembled from pieces that are exact replicas of the whole, unlike the case of statistical fractals, where exactly self-similar elements cannot be recognized. These structures, like the skeletonized arborization of a pial arterial tree running along the brain cortex of the cat, are fractals too, but their self-similarity is manifested in the power law scaling of the parameters characterizing their structure at different scales of observation.

fragmented character, and named the geometry that describes them fractal geometry, using the Latin word 'fractus' (broken, fragmented). Fractals cannot be defined by axioms, but by a set of properties instead, whose presence indicates that the observed structure is indeed fractal (Falconer 1990).

3.1. Self-similarity

Pieces of a fractal object, when enlarged, are similar to larger pieces or to the whole. If these pieces are identical rescaled replicas of one another, the fractal is exact (figure 2). When the similarity is present only between statistical populations of observations of a given feature made at different scales, the fractal is statistical. Mathematical fractals such as the Cantor set or the von Koch curve are exact; most natural fractal objects are statistical. Self-similarity needs to be distinguished from self-affinity. Self-similar objects are isotropic, i.e. the scaling is identical in all directions; therefore, when self-similarity is to be demonstrated, the pieces should be enlarged uniformly in all directions. Self-affine objects are also fractals, but their scaling is anisotropic, i.e. in one direction the proportions between the enlarged pieces are different from those in the other. This distinction is, however, often smeared and, for the purpose of being more expressive, self-similarity is used when self-affinity is meant (Beran 1994). Formally, physiological time series are self-affine temporal structures, because the unit of their amplitude is not time (figure 3) (Eke et al 2000).

3.2. Power law scaling relationship

When a quantitative property, q, is measured in quantities of s (or on scale s, or with a precision s), its value depends on s according to the following scaling relationship:

q = f(s).    (1)

When the object is not fractal, the estimates of q using progressively smaller units of measure, s, converge to a single value. (Consider a square of 1 m × 1 m, where q is its diagonal. Estimates of q in this case converge to the value of √2 m.) For fractals, q does not converge but instead exhibits a power law scaling relationship with s, whereby with decreasing s it increases without limit:

q = p s^ε    (2)

Figure 3. Self-affinity of temporal statistical fractals. Fractals exist in space and in time. Here, a blood cell perfusion time series monitored by laser-Doppler flowmetry (LDF) from the brain cortex of an anesthetized rat is shown (Eke et al 2000). The first 640 elements of the 2^17-element LDF time series are shown, which were 3.2 s long in real time. Note the spontaneous, seemingly random (uncorrelated) fluctuation of this parameter. Scale-independent, fractal temporal patterns in these blood cell perfusion fluctuations can be revealed. Compare the segments of this perfusion time series displayed at the different resolutions given by R. If any enlarged segment of the series is observed at scale s = 1/R and its amplitude is rescaled by R^H, where H is the Hurst coefficient, the enlarged segment is seen to look like the original. The impression is that the segments have a similar appearance irrespective of the timescale at which the signal is being observed. The degree of randomness, resulting from the range of excursions around the signal mean blending different frequencies into a random pattern, indeed seems similar. Because the scaling for this structure is anisotropic, in that in one direction (time) the proportions between the enlarged pieces differ from those in the other (amplitude of perfusion), this structure is not self-similar but self-affine. (For this particular time series H = 0.23.)

where p is a factor of proportionality (the prefactor) and ε is a negative number, the scaling exponent. The value of ε can be easily determined as the slope of the linear regression fit to the data pairs on the plot of log q versus log s:

log q = log p + ε log s.    (3)

Data points for exact fractals line up along the regression slope, whereas those of statistical fractals scatter around it (figure 4), since the two sides of equation (3) are equal only in distribution.

3.3. Scale-invariance

The ratio of two estimates of q measured at two different scales, s1 and s2, q2/q1, depends only on the ratio of the scales (the relative scale), s2/s1, and not directly on the absolute scale, s1 or s2:

q2/q1 = p s2^ε / (p s1^ε) = (s2/s1)^ε.    (4)

For exact fractals s2/s1 is a discrete variable. For instance, in the case of the von Koch curve seen in figure 1, s2/s1 can only assume values of a geometric series such as 1/3, 1/9, etc. For statistical fractals, like those of nature, s2/s1 may change in a continuous fashion, still leaving the validity of equation (4) unaffected. This equation is commonly referred to as the (absolute) scale-invariant property of fractals.

Figure 4. Power law scaling relationship for exact and statistical fractals. The scaling exponent, ε, relating a quantitative property of the object, q, to the scale of observation, s, can be easily determined based on equation (3) on a log–log plot of q versus s. Data points for exact fractals (the Mandelbrot tree of figure 2) line up along the regression slope, whereas those of a statistical fractal (the pial arterial network of figure 2) scatter around it, since for these fractals the two sides of equation (3) are equal only in distribution.

What is behind this scale-invariance is the self-similarity of fractals: because of the uniform geometrical structure, the quantitative properties of the pieces depend only on the ratio of the scales and not on the absolute scale. With regard to the von Koch curve in figure 1, if the property q is taken to be its length and it is measured with a 1/3 shorter stick, then, due to the emergence of newer infoldings of the structure, the curve will be found q2/q1 = 4/3 times longer, which yields a scale-invariant scaling exponent:

4/3 = (1/3)^ε    which yields    ε = log(4/3) / log(1/3) ≈ −0.2619.    (5)

Equation (4) holds only for exact fractals. For statistical fractals the two sides of equation (4) are equal only in distribution (=d must be used instead of =):

q2/q1 =d p s2^ε / (p s1^ε) =d (s2/s1)^ε.    (6)
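To make the slope-fitting step behind equations (3) and (4) concrete, here is a minimal sketch, not taken from the paper, of estimating the scaling exponent ε from measured (s, q) pairs by linear regression in log–log coordinates; the function and variable names are illustrative:

```python
import numpy as np

def scaling_exponent(s, q):
    """Estimate the scaling exponent (equation (3)) as the slope of log q versus log s."""
    log_s = np.log10(np.asarray(s, float))
    log_q = np.log10(np.asarray(q, float))
    slope, intercept = np.polyfit(log_s, log_q, 1)   # least-squares regression line
    return slope, 10.0 ** intercept                  # epsilon and the prefactor p

# Example: length of the von Koch curve measured with sticks of size s = (1/3)^k;
# each refinement makes the measured length 4/3 times longer.
s = (1.0 / 3.0) ** np.arange(1, 6)
q = (4.0 / 3.0) ** np.arange(1, 6)
eps, p = scaling_exponent(s, q)
print(eps)   # about -0.2619, as in equation (5)
```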

3.4. Scaling range

For natural fractals scale-invariance holds only over a restricted range of the absolute scale (Avnir et al 1998), and these fractals are often referred to as prefractals (Bassingthwaighte et al 1994). The upper limit of the validity of scale-invariance, the upper cut-off point s_max, for prefractals falls into the range of the size of the structure itself; likewise, the lower cut-off point, s_min, falls into the dimensions of the smallest structural elements. The scaling range, SR, is given in decades:

SR = log10(s_max / s_min).    (7)

3.5. Euclidean, topological and fractal dimensions

The Euclidean or embedding dimension, E, is an integer indicating whether the considered geometrical object is found on a line (E = 1), on a plane (E = 2) or in three-dimensional space (E = 3).

Figure 5. Euclidean and topological dimension. Using the example of a point P on a warped surface, the principle of these two dimensions is demonstrated.

E gives the number of coordinates needed to determine the position of a point of the object in space, as follows from the concept of dimension in Euclidean geometry. The topological dimension, DT, indicates the number of coordinates needed to determine the position of a point on the actual geometrical structure. Lines, surfaces and three-dimensional solids have DT of 1, 2 and 3, respectively. Note, however, that because DT ≤ E, for the warped surface shown in figure 5 DT = 2 and E = 3. The fractal dimensions, D, although having multiple definitions, have one thing in common: their value is usually a noninteger, fractional number, hence this dimension is referred to as fractal. For this very reason Mandelbrot named these structures fractals. D is related to the scaling exponent of a given attribute of the object in a simple, straightforward manner and characterizes one specific feature of a fractal. Mandelbrot defined fractals as structures with DT ≤ D ≤ E.

Self-similarity dimension (Dss). Dss tells how many structural units of the observed object (i.e. line segments of the von Koch curve), N, are seen at a given resolution, R = 1/s:

N = R^Dss    which yields    Dss = log N / log R.    (8)

Dss can only be determined for an exact fractal, which strictly obeys a single iterative rule of generation, like the von Koch curve. For this structure Dss = 1 − ε, where ε here is the scaling exponent of the length of the curve (Bassingthwaighte et al 1994); thus, using equation (5), Dss yields 1.2619. As its fractal (fractional) dimension suggests, this structure indeed occupies more space than a line (E = 1) but less than a 2D surface (E = 2).

Capacity dimension (Dcap) and its simplest implementation, the box-counting dimension (Dbox). The concept of capacity dimension was developed to estimate Dss for real, nonexact, statistical fractals. Dcap is a generalization of Dss and is calculated as follows. It uses 'balls' whose dimension equals the E of the space in which the object is embedded. For E = 1 the ball is a line segment of length 2r, for E = 2 it is a circle with radius r, and for E = 3 it is a sphere with radius r. The object is to be covered by balls so that its every point is enclosed within at least one ball. It is important to note that overlapping balls are allowed. The minimum number of balls of size r needed to cover the object, N(r), is found, then r is decreased and N(r) is found again. Dcap tells how the number of balls needed to cover the object changes as the size of the ball is decreased (Bassingthwaighte et al 1994):

N = R^Dcap    where R = 1/r    (9)

and the radius of the ball covering the whole object is 1. Equation (9) gives a precise mathematical definition of Dcap, but it does not allow for an effective calculation of it. Dbox is a practical implementation of the algorithm of Dcap. It uses a nonoverlapping grid of boxes


instead of overlapping balls, which greatly simplifies the procedure and makes the calculation of D for traced 2D or 3D objects feasible. The essence of this method is to cover the image or structure by this grid of nonoverlapping boxes of various edge lengths, L = 1/R, and to determine the number of boxes, N, covering any part of the object. N is determined for progressively smaller box sizes. Dbox is given as the slope of a linear regression fit to the data pairs on a log–log plot of N as a function of R:

N = R^Dbox    which yields    Dbox = log N / log R    (10)

and the edge length of the box containing the whole object is 1.
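A minimal sketch of the box-counting procedure just described is given below; this is illustrative code, not the implementation used in the papers cited, and the choice of dyadic grid sizes is an assumption:

```python
import numpy as np

def box_counting_dimension(points, n_scales=8):
    """Estimate Dbox of a 2D point cloud from the log-log slope of equation (10)."""
    pts = np.asarray(points, float)
    pts = (pts - pts.min(axis=0)) / np.ptp(pts, axis=0)     # normalize into the unit square
    counts, resolutions = [], []
    for k in range(1, n_scales + 1):
        R = 2 ** k                                           # grid of R x R boxes, edge length 1/R
        cells = np.minimum((pts * R).astype(int), R - 1)     # box index of every point
        occupied = {tuple(c) for c in cells}
        counts.append(len(occupied))                         # N: boxes touched by the object
        resolutions.append(R)
    slope, _ = np.polyfit(np.log(resolutions), np.log(counts), 1)
    return slope

# Example: points along a straight line should give Dbox close to 1
t = np.linspace(0, 1, 20000)
print(box_counting_dimension(np.column_stack([t, 0.3 * t])))
```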

The fractal dimension of a line segment, a square and a cube equals the Euclidean and the topological dimension, i.e. 1, 2 and 3, respectively. For 2D and 3D fractals, D falls in between these landmark values and gives a good characterization of the space-filling properties of the structure. The more D differs from DT and the closer it is to E, the more the structure invades the Euclidean space. At the beginning, Mandelbrot emphasized the fractional value of D. He also noted, however, that fractals can be found with integer D, like the Peano curve, whose D = 2 (Mandelbrot 1982, Peitgen et al 1992). Self-similarity is therefore a more important attribute of fractals than their noninteger, fractal dimensions.

Definitions of the fractal character of physiological signals are very diverse in the literature. One finds terms like 'self-similar signal (or process)', '1/f noise', 'long-memory process', 'power-law process', 'fractional Gaussian noise (fGn)', 'fractional Brownian motion (fBm)' and 'fractional ARIMA process (fARIMA)'. Each of these terms has its own definition, and in fact they are interrelated in specific ways that will be explored in the following.

4. Concept and methods of fractal time series analysis

The aim of fractal time series analysis is to identify the presence of one or more of the following fundamental features: self-similarity, power law scaling relationship and scale-invariance. Fractal analysis of time series cannot be done without an a priori notion of a model incorporating the most essential features of the underlying physiological process(es). The analytical tool should be selected according to the proposed model. Fractal methods are diverse, but their approaches have one thing in common in that they employ equation (2) in fitting their proposed model to data pairs of log feature versus log scale for finding the scaling exponent, ε, from the regression slope. The working hypothesis is that behind the complex, seemingly random, fluctuations of the signal one may find a time-invariant mechanism that one would wish to describe with the smallest possible number of parameters.

A time series is defined as a discrete series, xi, for i = 1, . . . , N, and is a sampled representation of a temporal process or signal, x(t), sampled at equal intervals in time, Δt. The sampling frequency is fs = 1/Δt. The length of the series is often referred to as N, which should clearly be discerned from the actual length (duration) of the series in real time, T = NΔt. A time series is therefore a digital signal, and its analysis is often called digital signal processing (Weitkunat 1991). Methods of time series analysis can also be used in cases when x varies not in time but in space, in which case the notion of time series is extended to all ordered sequences of observed data. Temporal (or, for that matter, spatial) sampling requires the constancy of the sampling interval, in which case the resultant series is equidistant. Application criteria of the analytical methods are often relaxed, hence they seem to work on nonequidistant datasets, too. Caution is warranted, though, since nonequidistant datasets do require special analytical treatment.

The term 'fractal time series' in the literature in general refers to a single fractal or monofractal temporal signal. Multifractal time series are heterogeneous, self-similar only


in local ranges of the structure; their fractal measure does vary in time, hence they can be characterized by a set of local fractal measures (Bassingthwaighte et al 1994, Ivanov et al 1999). This review is about monofractal time series analysis only, for its methods are widely used in biomedical research in contrast with those for multifractal analyses, which are making their first, though by no means promising, steps towards a wider scope of application (Ivanov et al 1999).

4.1. Time domain methods

If data are collected as a function of time, the series containing these data is said to be in the time domain. Analyses done on signals without any prior transformation are referred to as time domain methods (Weitkunat 1991); they are aimed at identifying the statistical dependence of the elements of the time series, in contrast to descriptive statistical approaches providing such measures as the mean, standard deviation, etc, that treat the elements of the sample as independent events. In this sense, these two approaches are complementary. In fact, some fractal time domain methods, such as dispersional analysis, scaled windowed variance or detrended fluctuation analysis, do take descriptive statistical measures of the time series, although in a scale-dependent manner that allows revealing the scale-invariant properties.

A fundamental property of a signal that the physiologist must be aware of is its stationarity. We will see below that it bears importance in fractal analysis of time series too, because stationary and nonstationary signals require different methods in their analyses. For a stationary signal the probability distribution of signal segments is independent of the (temporal) position of the segment and of the segment length, which translates into constant descriptive statistical measures, such as mean, variance and correlation structure, over time (figure 6).

According to a simple, yet realistic, dichotomous model, signals are seen as realizations of one of two temporal processes: fractional Brownian motion (fBm) and fractional Gaussian noise (fGn) (Eke et al 2000). The fBm signal is nonstationary (figure 6) with stationary increments. Physiological signals are generally nonstationary. An fBm signal, x, is self-similar in that a sampled segment xi,n of length n is equal in distribution to a longer segment xi,sn of length sn when the latter is rescaled (multiplied by s^−H). This means that every statistical measure, mn, of an fBm time series of length n is proportional to n^H:

xi,n =d s^−H xi,sn    mn =d p n^H    (11)

which yields

log mn =d log p + H log n    (12)

where H is the Hurst coefficient and p is a suitable prefactor. H ranges between 0 and 1. H approaching 1 describes a signal of smooth appearance, while H close to 0 describes one with rough structuring ('hairy' appearance). Increments yi = xi − xi−1 of a nonstationary fBm signal yield a stationary fGn signal and, vice versa, cumulative summation of an fGn signal results in an fBm signal (figure 7). By definition they have the same H as their fractal measure. Note that most of the methods listed below that have been developed to analyse statistical fractal processes, and are thereby applicable in physiology, share the philosophy of equation (12) in that, each in its own way, they all attempt to capture the power law scaling in the various statistical measures of the evaluated time series. Note also that the quantitative properties of a process (in equation (2): q), as captured in its statistical measures (in equation (12): m), and the scale at which the process is investigated (in equation (2): s), as determined by the width of the observation window (in equation (12): n), relate in the same power law manner for temporal and spatial fractals alike. Equation (2) gives the overall formalism of the fractal power law relationship, while equation (12) adapts it to the specific case of a statistical temporal fractal process.

Figure 6. Stationary and nonstationary time series. The two pure monofractal time series (upper panels) were generated by the method of Davies and Harte (DHM) (1987) according to the dichotomous model of fractional Gaussian noise (fGn) and fractional Brownian motion (fBm) (Eke et al 2000). DHM produces an fGn signal, whose cumulatively summed series yields an fBm signal. These two signals differ in their variance (lower panels) in that the fGn signal is stationary, hence its variance is constant, unlike that of the nonstationary fBm signal, whose variance increases with time. This difference explains why the analysis of these signals in the time domain requires special methods capable of accounting for the long-term fluctuations and increasing variance in the fBm signal.

Hurst's rescaled range analysis (R/S). The first method for assessing H was invented by Hurst (1951) when confronting the question of how high the Aswan dam had to be built so that it would contain the greatly varying levels of the Nile within a given window of observation, n (figure 8). The logic he followed was governed by the three criteria of an ideal reservoir: (1) the outflow is uniform, (2) the level is the same at the beginning and at the end of the observation window, (3) the reservoir never overflows (Bassingthwaighte et al 1994). He looked at retrospective records of water levels, xi, which are proportional to the velocity of water inflow to the dam. Hurst assumed a uniform outflow, which can be calculated as the mean of the varying inflow. The time series of the increase in water volume in the container is given by the summed difference of inflow and outflow, yj = Σ_{i=1}^{j} (xi − x̄) Δt. The range of yj, R = ymax − ymin, determines how high the dam should be built. In addition, Hurst divided the range by the standard deviation of the inflow fluctuations, S, and found, much to the surprise of contemporary statisticians, that (R/S)n showed a power law scaling relationship with the length of observation, n:

(R/S)n = p n^H    which yields    log(R/S)n =d log p + H log n    (13)

where p is a prefactor. This method is now known as Hurst’s rescaled range (R/S) analysis. His work led Mandelbrot and Wallis (1969) to discover the widespread occurrence of self-similar fluctuations in natural phenomena. The R/S analysis is a rather complex method, which is applicable to fGn signals or to differenced fBm signals.
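A minimal sketch of the R/S procedure described above is given below; this is illustrative code, not the authors' implementation, and the window sizes are assumed for the example. For each window length n the series is split into nonoverlapping windows, the range of the cumulative, mean-corrected sum is divided by the window standard deviation, and H is the log–log slope of the averaged (R/S)n versus n:

```python
import numpy as np

def rescaled_range_H(x, window_sizes=(8, 16, 32, 64, 128, 256)):
    """Estimate the Hurst coefficient of an fGn-type series by R/S analysis (equation (13))."""
    x = np.asarray(x, float)
    rs_means = []
    for n in window_sizes:
        rs = []
        for start in range(0, len(x) - n + 1, n):          # nonoverlapping windows of length n
            w = x[start:start + n]
            y = np.cumsum(w - w.mean())                    # cumulative departure from the mean
            R = y.max() - y.min()                          # range of the accumulated series
            S = w.std(ddof=1)                              # standard deviation of the 'inflow'
            if S > 0:
                rs.append(R / S)
        rs_means.append(np.mean(rs))
    H, _ = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
    return H

# Example: uncorrelated white noise is an fGn with H = 0.5
# (note that R/S estimates are known to be biased for short series).
print(rescaled_range_H(np.random.default_rng(0).standard_normal(4096)))
```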

Figure 7. Typical patterns of fractional Gaussian noise (fGn) and fractional Brownian motion (fBm) signals. Three signals are shown in each signal category of the fGn/fBm model, with Hurst coefficients H = 0.1, H = 0.5 and H = 0.9. The case of H = 0.5 is the special case of uncorrelated (random) white noise and random walk (Eke et al 2000). The appearance of the fBm signal with H = 0.1 is rather 'hairy' due to the strong anticorrelation between the successive differences (increments) of this series, while, due to the noisy appearance of the fGn signals, this cannot readily be seen for the fGn signal with H = 0.1. The case of H = 0.9 shows the correlated structuring of fractal motion and noise signals.

Autocorrelation analysis (AC). The aim is to determine to what extent the value of one given event, xi, of the time series depends on its past values k lags apart, xi−k. The k-lag autocorrelation coefficient, ρk, defines how strongly the momentary value of the signal depends on the one k lags before. The k-lagged autocorrelation may be positive, zero or negative within the boundaries of 1 and −1 (figure 9; note that for fGn signals the lower bound is −0.5!). A positive correlation indicates that the trend of deviation of xi and xi−k relative to the mean of the signal is in the same direction, while with negative values of ρk the trend is opposite (anticorrelation). Estimates of the autocorrelation for lags k = 0, 1, . . . can be calculated as

ρ̂k = [ (1/(N−k−1)) Σ_{i=k+1}^{N} (xi − μ̂)(xi−k − μ̂) ] / [ (1/(N−1)) Σ_{i=1}^{N} (xi − μ̂)^2 ]    where μ̂ = (1/N) Σ_{i=1}^{N} xi.    (14)

The autocorrelation function of an fGn signal with 0 ≤ H ≤ 1 and k = 0, 1, . . . is

ρk = ½ (|k + 1|^2H − 2|k|^2H + |k − 1|^2H)    (15)

where H is the Hurst coefficient of the fBm signal whose increments yield the fGn signal in question. Equation (15) can be derived from equation (12) (Beran 1994). The closer H is to 1, the slower ρk decays; in other words, the longer is the memory of the process (figure 9). On this ground, fractal signals are often called long-memory processes. Note, however, that in the mathematical sense long-memory correlation (or long-range dependence) holds for 0.5 < H < 1 (Beran 1994); H = 0.5 when there is no correlation in the fGn signal (a typical random white noise), and when H < 0.5 the fGn signal shows anticorrelation (Eke et al 2000). The correlation for the nearest neighbours was determined by van Beek et al (1989) as

ρ1 = 2^(2H−1) − 1.    (16)
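A minimal sketch of the two ingredients of the AC method, the empirical estimator of equation (14) and the theoretical fGn autocorrelation of equation (15), is given below; names are illustrative, not from the paper:

```python
import numpy as np

def empirical_autocorr(x, k):
    """Estimate rho_k of a series as in equation (14)."""
    x = np.asarray(x, float)
    mu = x.mean()
    num = np.sum((x[k:] - mu) * (x[:len(x) - k] - mu)) / (len(x) - k - 1)
    den = np.sum((x - mu) ** 2) / (len(x) - 1)
    return num / den

def fgn_autocorr(k, H):
    """Theoretical rho_k of an fGn signal with Hurst coefficient H (equation (15))."""
    k = np.abs(k)
    return 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

# Equation (16): the nearest-neighbour correlation, e.g. for H = 0.8
print(fgn_autocorr(1, 0.8), 2 ** (2 * 0.8 - 1) - 1)   # both about 0.516
```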

Figure 8. Basic concept of Hurst's rescaled range method. This is a time domain method. It calculates a descriptive statistical measure, the standard deviation, S, of a noise signal, xi, within windows of observation, and the range of excursions, R, from the cumulative series of xi, a motion signal, yi. It uses the scaling relationship between the rescaled range (R/S) and the scales of observation (window length, n) for determining the Hurst coefficient, 0 < H < 1.

Bassingthwaighte and Beyer (1991) worked out a method to determine H by fitting the theoretical autocorrelation function (equation (15)) to the first few estimates of ρk obtained by equation (14). This method is termed autocorrelation analysis and can be applied to fGn signals or to differenced fBm signals.

Scaled windowed variance analyses (SWV). Fluctuations of a parameter over time can be characterized by calculating the variance or, according to equation (17), its square root, the standard deviation:

σ̂ = √[ (1/(N−1)) Σ_{i=1}^{N} (xi − μ̂)^2 ].    (17)

For fBm processes of length N, when divided into nonoverlapping windows of size n (figure 10), the standard deviation within the window, σn, depends on the window size n in a power law fashion, as equation (12) predicts:

σn =d p n^H.    (18)

This method, originally introduced by Mandelbrot (1985), was further developed by Peng et al (1994). By taking the logarithm of equation (18) the following practical formula is obtained:

log σn =d log p + H log n.    (19)

Figure 9. Autocorrelation functions of exact fractal noise time series with different Hurst coefficients. The autocorrelation coefficient, ρk, given by equation (15), is shown as a function of the lag, k, of the elements in the series. The correlation is maximal (ρk = 1) at lag 0, a trivial case. The correlation decays more slowly than an exponential function of time (lag) for H > 0.5, a phenomenon which is termed long memory or long-range dependence. At H = 0.5 (uncorrelated white noise) ρk = 0. Anticorrelated fractal noises have negative nearest-neighbour correlations (−0.5 < ρk < 0) (van Beek et al 1989).

H can be found as the slope of the linear regression fit to the data pairs on the plot of log σn versus log n. In practice, the σn values calculated for each segment of length n of the time series are averaged for the signal at each window size. Note that n cannot vary from 1 to N, because using too short (N pieces of length n = 1) or too few (one piece of length n = N) segments would distort the result. The standard method applies no trend correction. Trends in the signal seen within a given window can be corrected for either by subtracting a linearly estimated trend (line-detrended version) or by subtracting the values of a line bridging the first and last values of the signal (bridge-detrended version, bdSWV) (Cannon et al 1997). This method can only be applied to fBm signals or to cumulatively summed fGn signals.

Dispersional analysis (Disp). This statistical approach, originally using the relative dispersion of spatial data, was introduced by Bassingthwaighte (1988). Later it was extended to the temporal domain (Bassingthwaighte and Raymond 1995). It is similar to SWV, with the difference that σ is the standard deviation of the windows' means (figure 10):

σn =d p n^(H−1)    (20)

log σn =d log p + (H − 1) log n.    (21)

This method can only be applied to fGn processes or to differenced fBm signals.
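The window-based logic shared by SWV and Disp (equations (18)-(21)) can be sketched as follows. This is illustrative code under assumed window sizes, not the authors' implementation, and it omits the detrending variants:

```python
import numpy as np

def _log_log_slope(ns, measures):
    slope, _ = np.polyfit(np.log(ns), np.log(measures), 1)
    return slope

def swv_H(x_fbm, ns=(16, 32, 64, 128, 256)):
    """SWV: the mean of the within-window SDs scales as n**H for an fBm signal (eq. (18))."""
    x = np.asarray(x_fbm, float)
    m = [np.mean([x[i:i + n].std(ddof=1) for i in range(0, len(x) - n + 1, n)]) for n in ns]
    return _log_log_slope(ns, m)                       # slope is H

def disp_H(x_fgn, ns=(16, 32, 64, 128, 256)):
    """Disp: the SD of the window means scales as n**(H-1) for an fGn signal (eq. (20))."""
    x = np.asarray(x_fgn, float)
    m = [np.std([x[i:i + n].mean() for i in range(0, len(x) - n + 1, n)], ddof=1) for n in ns]
    return _log_log_slope(ns, m) + 1.0                 # slope is H - 1

# Example with the H = 0.5 special case: white noise (fGn) and its cumulative sum (fBm)
noise = np.random.default_rng(1).standard_normal(2 ** 14)
print(disp_H(noise), swv_H(np.cumsum(noise)))          # both should be close to 0.5
```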

Detrended fluctuation analysis (DFA). This method, introduced by Peng et al (1994), was developed to improve on root mean square analysis of highly nonstationary data by removing nonstationary trends from long-range-correlated time series. First the signal is summed and the mean is subtracted:

yj = Σ_{i=1}^{j} (xi − μ̂).    (22)

Figure 10. Basic concept of the dispersional (Disp) and scaled windowed variance (SWV) methods. These are time domain methods. The signal for Disp was generated by the DHM method and cumulatively summed for SWV according to the relations of the fGn/fBm model. Disp and SWV are similar in that they calculate the same descriptive statistical measures as a function of scale for the purpose of deriving the transscale parameter, H. For Disp the local measure is the mean (μ) and the partition-based measure is the standard deviation of the local means (σμ); for SWV the local measure is the standard deviation (σ) and the partition-based measure is the mean of the local standard deviations (μσ).

Then the local trend yj,n is estimated in nonoverlapping boxes of equal length n, using a least squares fit on the data. For a given box size n the fluctuation is determined as a variance upon the local trend, i.e. the root mean square (RMS) of the differences from it:

Fn = √[ (1/N) Σ_{j=1}^{N} (yj − yj,n)^2 ]    (23)

in which the DFA method is closely related to the line-detrended SWV method. Recall that SWV computes standard deviation relative to the mean, after subtracting the local trend, while DFA directly calculates the standard deviation-like measure, RMS variance, upon the local


trend. For self-similar fBm processes of length N with nonoverlapping windows of size n, the fluctuation, F, depends on the window size n in a power law fashion:

Fn =d p n^α    (24)

and

log Fn = log p + α log n.    (25)
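Before turning to the interpretation of α, here is a minimal sketch of the DFA steps of equations (22), (23) and (25); this is illustrative code with assumed box sizes, not the authors' implementation:

```python
import numpy as np

def dfa_alpha(x, ns=(16, 32, 64, 128, 256)):
    """Estimate the DFA exponent alpha (equations (22)-(25)) with linear detrending."""
    x = np.asarray(x, float)
    y = np.cumsum(x - x.mean())                      # equation (22): summed, mean-subtracted
    fluctuations = []
    for n in ns:
        sq = []
        for start in range(0, len(y) - n + 1, n):    # nonoverlapping boxes of length n
            seg = y[start:start + n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local least-squares line
            sq.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(sq)))    # equation (23), pooled over boxes
    alpha, _ = np.polyfit(np.log(ns), np.log(fluctuations), 1)
    return alpha

# White noise is an fGn with H = 0.5, so alpha should be close to 0.5;
# for its cumulative sum (an fBm) alpha should be close to 1.5, since alpha = H + 1.
x = np.random.default_rng(2).standard_normal(2 ** 14)
print(dfa_alpha(x), dfa_alpha(np.cumsum(x)))
```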

If xi is an fGn signal, then yj will be an fBm signal; Fn is then equivalent to mn of equation (12), yielding Fn =d p n^H, and therefore in this case α = H. If xi is an fBm signal, then yj will be a summed fBm signal; then Fn =d p n^(H+1), where α = H + 1 (Peng et al 1994).

4.2. Frequency domain methods

The time domain is not the only domain in which sampled data can be represented. Out of the many domains resulting from transformations of time series data, one of the most useful is the frequency domain (Weitkunat 1991).

Power spectral density analysis (PSD, periodogram method, Fourier analysis). A time series can be represented as a sum of cosine wave components of different frequencies:

xi = Σ_{n=0}^{N/2} An cos[ωn ti + φn] = Σ_{n=0}^{N/2} An cos(2πn i/N + φn)    (26)

where An is the amplitude and φn is the phase of the cosine component with angular frequency ωn (i.e. cyclic frequency fn = ωn/2π). The An(fn), φn(fn) and An^2(fn) functions are termed the amplitude, phase and power spectrum of the signal, respectively. These spectra can be determined by an efficient computational technique, the fast Fourier transform (FFT). The power spectrum (periodogram, power spectral density or PSD) of a fractal process obeys a power law relationship:

An^2 =d p ωn^(−β)    which yields    log An^2 =d log p − β log fn    (27)

where β is termed the spectral index. Signals with this form of power spectrum are referred to as 1/f noise. The power law relationship expresses the idea that as one doubles the frequency the power changes by the same fraction (2^−β) regardless of the chosen frequency, i.e. the ratio is independent of where one is on the frequency scale. The signal has to be preprocessed before applying the FFT, which means subtracting the mean, multiplying with a parabolic window (windowing) and bridge detrending (endmatching). After calculating the power spectrum using the FFT, the high-frequency part of the spectrum should be excluded before fitting the regression line (Fougere 1985, Eke et al 2000). 1/f noises with −1 < β < 1 and 1 < β < 3 are almost identical with fGn and fBm signals, respectively (Beran 1994). Disregarding the 'almost' has led to some misunderstanding and misinterpretation in fractal time series analysis (see below).
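A minimal sketch of this preprocessing-and-fit recipe (mean subtraction, parabolic windowing, endmatching and exclusion of the high-frequency estimates) is given below. It is an illustrative reading of the steps listed above, not the authors' lowPSDw,e code, and the cut-off fraction is an assumed parameter:

```python
import numpy as np

def spectral_index_beta(x, fs=1.0, high_freq_cutoff_fraction=0.25):
    """Estimate the spectral index beta from the slope of the log-log periodogram (eq. (27))."""
    x = np.asarray(x, float)
    x = x - x.mean()                                    # mean subtraction
    x = x - np.linspace(x[0], x[-1], len(x))            # endmatching (bridge detrending)
    t = np.arange(len(x))
    window = 1.0 - (2.0 * t / (len(x) - 1) - 1.0) ** 2  # parabolic (Welch) window
    x = x * window
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    keep = (freqs > 0) & (freqs <= high_freq_cutoff_fraction * fs / 2)  # drop f = 0 and the high end
    slope, _ = np.polyfit(np.log(freqs[keep]), np.log(power[keep]), 1)
    return -slope                                       # beta

# Example: white noise (beta near 0) versus its cumulative sum, a random walk (beta near 2)
x = np.random.default_rng(3).standard_normal(2 ** 14)
print(spectral_index_beta(x), spectral_index_beta(np.cumsum(x)))
```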

Coarse graining spectral analysis (CGSA) separates the fractal and the harmonic components in the signal and can thus estimate the spectral index, β, without the interference of the latter (Yamamoto and Hughson 1991). The power of the fractal components can also be given as a per cent fraction. Its concept is based on the self-affine structuring of fractal processes as defined by equations (11) and (12). When a segment of length n of an N-length signal is rescaled in intensity by (n/N)^H, the original signal and its rescaled (coarse-grained) segment remain equal in distribution in all of their statistical measures. The cross-power spectrum of the two is obtained by calculating the power from the spectral estimates of the original and the coarse-grained series. If the original signal is a fractal process, the cross-power spectrum equals that of the original. In the case of a simple harmonic signal, the cross-power spectrum approaches 0. Based on these relations, a mixed signal yields a cross-power spectrum of the broad-band 1/f, fractal type. Hence, the ratio of the summed cross-power to that of the original gives the relative power of the fractal fraction in the signal. This original version of the method had to be improved, since a pure harmonic signal with frequencies scaled by (n/N) was seen by the original method as purely fractal. This problem was later eliminated by testing for the phase angle of the coarse-grained spectrum (Yamamoto and Hughson 1993). The phase relationship in a fractal time series is random, while in harmonic oscillations it is fixed, thereby allowing a discrimination of fractal and harmonic components. Following their separation, β is calculated from the fractal power slope, the slope of the log–log fractal power spectrum, as shown earlier.

4.3. Time-frequency (time-scale) domain analysis

Dennis Gabor (1946) adapted the Fourier transform to analyse only small segments of the signal at a time, a technique called windowing the signal, and produced the short-time Fourier transform (STFT). The STFT maps a signal into a two-dimensional function of time and frequency. It provides simultaneous information about the location in time and the particular frequency at which a signal event occurs. However, this information is of limited precision, determined by the size of the window, which is the same for all frequencies. Many signals require a more flexible approach. Fractal wavelet analysis represents a windowing technique with variable-sized regions, thus allowing an equally precise characterization of low- and high-frequency dynamics in the signal. A wavelet is a waveform of limited duration with an average value of zero. Fourier analysis breaks up a signal into cosine wave components of various frequencies; wavelet analysis breaks up a signal into shifted and stretched versions of the original wavelet. In other words, instead of a time-frequency domain it uses a time-scale domain, which is extremely useful in fractal analysis. The Hurst coefficient can be estimated in three ways: by the wavelet transform modulus maxima method (WTMM) (Arnéodo et al 1995), by wavelet packet analysis (Jones et al 1996), and by the averaged wavelet coefficient (AWC) method (Simonsen and Hansen 1998), the latter being the most effective and the easiest to implement (figure 11). The shifted (translated) and stretched (dilated, scaled) versions of the original wavelet function ψ(t) can be defined as

ψa;b(t) = ψ((t − b)/a)    (28)

where the scale (dilation) parameter is a > 0 and the translation parameter is −∞ < b < ∞. The continuous wavelet transform (CWT) of a given function x(t) is defined as

W[x](a, b) = (1/√a) ∫_{−∞}^{∞} ψ*a;b(t) x(t) dt.    (29)

Here ψ*a;b(t) denotes the complex conjugate of ψa;b(t). From equation (11) one can easily derive how the self-affinity of an fBm signal x(t) determines its CWT coefficients:

W[x](sa, sb) =d s^(1/2+H) W[x](a, b).    (30)

The AWC method is based on equation (30) (Simonsen and Hansen 1998). This method can be applied to fBm signals or to cumulatively summed fGn signals.
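A minimal sketch of the AWC recipe implied by equation (30) and figure 11, using a 'Mexican hat' wavelet, is given below. This is illustrative code with assumed scale choices and an unnormalized wavelet, not the implementation of Simonsen and Hansen (1998):

```python
import numpy as np

def mexican_hat(t, a):
    """'Mexican hat' wavelet stretched to scale a (unnormalized, zero mean)."""
    u = t / a
    return (1.0 - u ** 2) * np.exp(-0.5 * u ** 2)

def awc_H(x_fbm, scales=(4, 8, 16, 32, 64)):
    """Averaged wavelet coefficient estimate of H for an fBm signal: H = slope - 1/2 (eq. (30))."""
    x = np.asarray(x_fbm, float)
    awc = []
    for a in scales:
        t = np.arange(-4 * a, 4 * a + 1)               # support of the dilated wavelet
        w = np.convolve(x, mexican_hat(t, a), mode='valid') / np.sqrt(a)
        awc.append(np.mean(np.abs(w)))                 # average |W[x](a, b)| over b at scale a
    slope, _ = np.polyfit(np.log(scales), np.log(awc), 1)
    return slope - 0.5                                 # equation (30): |W| grows as a**(1/2 + H)

# Example: a random walk is an fBm with H = 0.5, so the estimate should be roughly 0.5
print(awc_H(np.cumsum(np.random.default_rng(4).standard_normal(2 ** 14))))
```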

Figure 11. Steps of the analysis by the averaged wavelet coefficient method (AWC). This is a time-frequency domain analysis applicable to fractal motion signals (upper panel). The signal was generated at H = 0.3 by the DHM method and cumulatively summed (fBm); its length is 1000 in this example. First a wavelet series is defined by equation (28) ('Mexican hat') with dilation parameter a; while it is moved along the timescale in steps given by its translation parameter, b, at each step a wavelet coefficient is obtained between the wavelet and the given segment of the series. The wavelet coefficient matrix (middle panel) is scaled by W[x](a, b). Its values are averaged at every step of the scale, a, yielding the AWC. The Hurst coefficient is estimated from the slope of log AWC versus log a as H = slope − 0.5. In the given example the true (Htrue = 0.3) and estimated (Hest = 0.31) values are very similar, which is indicative of the precision of this method.


4.4. Linear system analysis of fractal time series (fARIMA)

This approach builds on the memory in the analysed system. Any given value xi of the time series depends on the preceding values with lags k = 1, . . . , r as a linear combination of the values within this range. The simplest variant of the ARIMA (autoregressive integrated moving average) models is the AR(r) model (autoregressive model of order r). It can be formulated as (Beran 1994)

xi = Σ_{k=1}^{r} αk xi−k + εi    (31)

where the αk are the autoregressive coefficients. The value of xi, in addition to depending on the previous values of the series, is also modified by a random noise εi, which represents a noisy perturbation acting on the system. The values of αk can be estimated by

α̂ = (X′X)^(−1) X′Y    (32)

where α̂ = (α̂1, . . . , α̂r)′, Y = (xr+1, . . . , xN)′ and X is the (N − r) × r matrix whose rows are (xr, . . . , x1), (xr+1, . . . , x2), . . . , (xN−1, . . . , xN−r).
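A minimal numpy sketch of the least-squares estimate in equation (32) is given below; this is illustrative code, with the AR order r as an assumed parameter:

```python
import numpy as np

def ar_coefficients(x, r):
    """Least-squares estimate of the AR(r) coefficients, as in equation (32)."""
    x = np.asarray(x, float)
    N = len(x)
    # Rows of X are (x_r, ..., x_1), (x_{r+1}, ..., x_2), ..., (x_{N-1}, ..., x_{N-r});
    # column k holds the lag-k predictors.
    X = np.column_stack([x[r - k:N - k] for k in range(1, r + 1)])
    Y = x[r:]
    alpha_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)   # numerically stable (X'X)^-1 X'Y
    return alpha_hat

# Example: data from x_i = 0.6 x_{i-1} + noise should give alpha_1 close to 0.6
rng = np.random.default_rng(5)
x = np.zeros(5000)
for i in range(1, len(x)):
    x[i] = 0.6 * x[i - 1] + rng.standard_normal()
print(ar_coefficients(x, r=2))
```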

In the fAR (fractional AR) model the following equation holds for αk:

αk = −Γ(k − d) / (Γ(k + 1) Γ(−d)).    (33)

For large lag k we have

αk =d p k^(−d−1)    where d = H − 1/2 and p = −1/Γ(−d)    (34)

from where the Hurst coefficient, H, can be obtained. This model can be applied to an fGn signal or to a differenced fBm signal.

4.5. Methods of signal classification

The fGn/fBm dichotomous model is simple, yet effective in providing a framework for understanding the basic characteristics of random statistical processes with self-similar scaling properties. Signals in physiology can be regarded as analogous to one of these two cases of discretely sampled fractal processes. From the above survey of fractal tools it follows that some of them can be applied either to stationary fractional Gaussian-type or to nonstationary fractional Brownian-type signals, at least in particular cases after cumulative summing of the former or differencing of the latter (Eke et al 2000). As shown above, signals must be classified according to this model prior to analysis, a circumstance not clearly recognized earlier (Eke et al 2000).

Classification by power spectral analysis (its improved variant, lowPSDw,e). Signal classification should begin with the application of the PSD method, since it can be applied to fractal noises and motions alike and provides a basic parameter of a time series, its spectral index, β. The purpose is to determine the signal class based on β. This can best be found if the high-frequency spectral estimates are excluded from the fitting for the spectral slope, −β (Eke et al 2000). If the estimate of β, β̂, is smaller than 1 the signal is fGn; if β̂ > 1 it is fBm, with some range of uncertainty in between.


Table 1. Conversion of various fractal measures.(a)

           from α      from β       from HfBm     from HfGn            from D
  D        3 − α       (5 − β)/2    2 − HfBm      –                    –
  HfBm     α − 1       (β − 1)/2    –             –                    2 − D
  HfGn     α           (β + 1)/2    –             –                    –
  β        2α − 1      –            2HfBm + 1     2HfGn − 1            5 − 2D
  α        –           (β + 1)/2    HfBm + 1      HfGn                 3 − D
  ρ1       –           –            –             2^(2HfGn − 1) − 1    –

(a) α: a parameter of the DFA method; β: spectral index of the PSD method; HfBm: the Hurst coefficient of an fBm signal; HfGn: the Hurst coefficient of an fGn signal; D: fractal dimension (for fBm signals only!); ρ1: nearest-neighbour correlation coefficient.

Classification by detrended fluctuation analysis (DFA) (Peng et al 1994) is possible based on the value of α̂. If α̂ < 1 the signal is fGn; if α̂ > 1 it is fBm, with some range of uncertainty in between.

Classification by the signal summation conversion method (SSC). This method was recently developed by Eke et al (2000) in order to refine the analysis near the fGn/fBm boundary. First it converts an fGn to an fBm signal, or an fBm to a summed fBm signal, by calculating its cumulative sum. Then it applies bdSWV to calculate the Hurst coefficient Ĥ′ from the cumulant series. When 0 < Ĥ′ ≤ 1 the signal is an fGn with Ĥ = Ĥ′. Alternatively, when Ĥ′ > 1 SSC finds the signal to be an fBm with Ĥ = Ĥ′ − 1.
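A minimal sketch of the SSC decision rule, reusing the SWV idea in its simplest (non-detrended) form, is given below; this is illustrative code with assumed window sizes, not the authors' implementation:

```python
import numpy as np

def ssc_classify(x, ns=(16, 32, 64, 128, 256)):
    """Signal summation conversion: classify a series as fGn or fBm and return its H."""
    y = np.cumsum(np.asarray(x, float))                # fGn -> fBm, fBm -> summed fBm
    swv = []
    for n in ns:
        sds = [y[i:i + n].std(ddof=1) for i in range(0, len(y) - n + 1, n)]
        swv.append(np.mean(sds))                       # scaled windowed variance measure
    H_prime, _ = np.polyfit(np.log(ns), np.log(swv), 1)
    if H_prime <= 1.0:
        return 'fGn', H_prime                          # H = H'
    return 'fBm', H_prime - 1.0                        # H = H' - 1

# Example: white noise should be classified as fGn with H near 0.5,
# and its cumulative sum as fBm with H near 0.5.
noise = np.random.default_rng(6).standard_normal(2 ** 14)
print(ssc_classify(noise), ssc_classify(np.cumsum(noise)))
```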



5. Converting various fractal measures (spectral index, Hurst coefficient, fractal dimension, etc)

Although fractal methods differ in their operational domain and in the way they produce quantitative measures of self-similarity, there is still common ground in the 'fractal domain'. For example, the Hurst coefficient (exponent, H) and the DFA exponent (α) assume and characterize power law scaling in the time domain, and the spectral index (β) does the same in the frequency domain, yet they are still (almost) equivalent to each other. Wavelet fractal analysis operates in the time-scale domain and fARIMA is a linear system analysis, but they too in fact assume time domain power law scaling. The box-counting method, generally used to calculate the fractal dimension of spatial objects, can also be applied to a self-similar temporal signal (fBm), treating it as a picture, a two-dimensional object (Peitgen et al 1992). Irrespective of these differences, fractal measures relate to each other in a simple manner, as shown in table 1. A concise overview of most of these fractal measures and a pictorial demonstration of the corresponding signal character can be found in the paper of Eke et al (2000, figure 2). The most universal parameter is the spectral index, β, which covers the full range of the fGn/fBm model, from the roughest fractal noise signals (β = −1) to the smoothest fractal motions (β = 3). Hence, β can be regarded as the base for converting fractal estimates, for example from the H of the SWV or Disp methods to the α of DFA. In this way, a reliable comparison of reported fractal estimates obtained by different methods can be carried out. Note that D can be calculated for a self-similar signal of the fBm type only (see equation (11)).
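The conversions collected in table 1, with β as the common base as just described, can be wrapped into a small helper; this is an illustrative sketch, not from the paper:

```python
def beta_from(measure, value, signal_class=None):
    """Convert a fractal estimate to the spectral index beta (see table 1)."""
    if measure == 'alpha':                     # DFA exponent
        return 2 * value - 1
    if measure == 'H':                         # Hurst coefficient; the signal class must be known
        if signal_class == 'fGn':
            return 2 * value - 1
        if signal_class == 'fBm':
            return 2 * value + 1
        raise ValueError("H requires signal_class 'fGn' or 'fBm'")
    if measure == 'D':                         # fractal dimension (fBm only)
        return 5 - 2 * value
    raise ValueError('unknown measure')

def from_beta(beta):
    """Express a given beta as the other measures of table 1 (where they are defined)."""
    out = {'alpha': (beta + 1) / 2}
    if -1 < beta < 1:                          # fGn range
        out['H_fGn'] = (beta + 1) / 2
        out['rho_1'] = 2 ** (2 * out['H_fGn'] - 1) - 1
    elif 1 < beta < 3:                         # fBm range
        out['H_fBm'] = (beta - 1) / 2
        out['D'] = (5 - beta) / 2
    return out

# Example: an SWV estimate H_fBm = 0.23 corresponds to beta = 1.46, alpha = 1.23 and D = 1.77
print(from_beta(beta_from('H', 0.23, 'fBm')))
```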

Figure 12. Influence of the principal aspects of frequency domain analysis on the precision of estimating the true spectrum of the signal. Signal length and sampling frequency, the represented frequency components of the spectrum, and an artifactual contribution to power known as aliasing are related to the measured (dashed line) and true (continuous line) spectrum as a function of the absolute frequency, f. The Nyquist frequency, fN = fs/2, is the upper bound of the represented frequencies. Overestimation of power due to aliasing can be minimized if fs is chosen to be high. Representation of the different frequency components is most complete if N is large.

Also note that H is scaled between 0 and 1 irrespective of the class of the signal. Due to the dichotomy of the fGn/fBm model underlying the analysis, it is strongly recommended to report values of H along with the class of the signal, as HfGn and HfBm (Eke et al 2000). This practice will guarantee that values of H obtained by a class-specific method for the wrong (incompatible) class do not get published.

6. Limitations of fractal time series analysis

For some methods the limitations are better understood than for others. Herewith, the cases of the PSD and AC methods are presented, for they are fundamental in defining the two most important fractal properties of time series: the power law scaling of spectral power as a function of frequency (assessed by the PSD method via equation (27)) and the basically power law correlation structure (assessed by the AC method based on equation (15)).

The standard version of the power spectral density method is weak in estimating the true spectral index of exact fractal time series (Eke et al 2000). One known problem is aliasing, an artifactual contribution to the power of spectral estimates aliased from beyond fs/2 (the Nyquist frequency, fN) into ranges below it, in a fashion mirroring the shape of the spectrum (figure 12). Selecting fs high enough to minimize the power at the Nyquist frequency can treat this problem. Beran (1994) pointed out that a fractal signal with an exact correlation structure (generated by the method of Davies and Harte (1987)) deviates from a pure 1/f^β power spectrum at high frequencies. This observation was confirmed by Eke et al (2000) in numerical experiments, which demonstrated that the high end of the frequency spectrum (fs/8 < f < fs/2) needs to be excluded when fitting for the power slope to yield a reliable

Figure 13. Characterization of the power law fractal signal generated by the spectral synthesis method (SSM) and of the exact fractal signal produced by the DHM method, in terms of their autocorrelation (top) and power spectral density (bottom) functions. Autocorrelation functions of DHM signals exactly follow the prediction of equation (15), while those of SSM signals tend to 0 at every lag and H. In other words, the nearest-neighbour correlation coefficient, ρ1, of DHM signals exactly follows the theoretically predicted values (equation (16)), while correlated SSM signals underestimate it and anticorrelated SSM signals overestimate it. The SSM signals have ideal 1/f^β power slopes, while the DHM signals deviate in the high-frequency range, more so when the signal is anticorrelated. (The upward deviation for DHM motion signals can be due to the summing operation needed to generate these signals from the raw fGn signal produced by the DHM method.)

measure of power law scaling when exact self-similar signals generated by the Davies and Harte method based on equation (15) were used. The power spectrum and the autocorrelation function are interrelated in that the Fourier transform of the latter yields the power spectrum of the signal. Note that the high frequencies of the PSD function, where the dynamics are fastest, are represented in the short-lagged range of the AC function (figure 13), while the long-lagged region of the AC function (at large k) corresponds to the low-frequency range of the PSD function. The autocorrelation analysis does not use all the lags for estimating H, for the longer lags proved statistically unreliable due to their values being close to 0; usually the first five lags are used. Most of the signals in physiology are, however, dominated by low frequencies, hence the AC method can present

R22

A Eke et al

problems in their analysis, because it fits the high-frequency range of these signals, where spectral estimates may well deviate from the ideal spectral slope (figure 13). This limitation of the AC method in the analysis of physiological signals cannot be compensated for (although it would be logical) by using its long-lagged values for fitting H, because these lie in the unreliable range mentioned above.

7. Evaluating the methods' performance

None of the methods we have looked at so far is ideal; they cannot be, because their performance even on ideal fractal signals will depend on the algorithmic implementation of testing for statistical self-affinity as defined by equation (12) and on the way the method and the signal in question 'interact'. Numerical testing on pure monofractal signals is helpful in providing a list of methods ranked according to their precision in estimating the fractal properties or signal class, and in identifying cases where a limitation of the method, or an incompatibility of method and signal, can be demonstrated.

7.1. Generating ideal monofractal signals

Ideal fractal signals of known H can be synthesized based on the two properties of fractal time series: power law scaling in the time domain (equation (12)) and in the frequency domain (equation (27)). Based on the power law scaling properties in the frequency domain defined by equation (27) and using the Fourier transform algorithm, the spectral synthesis method (SSM) produces 1/f noises, which are almost exact fGn- or fBm-type signals depending on the value of β used. The method of Davies and Harte (1987) (DHM) produces an exact fGn signal using its special correlation structure (equation (15)), which is a consequence of the time domain power law scaling of the related fBm signal (equation (12)). Cumulative summing of the fGn signal generates the corresponding fBm signal. Both of these signals are fractals.

7.2. Testing fractal tools on ideal monofractal signals

The fractal analytical methods can be tested on synthesized fractal signals of known H (Bassingthwaighte and Raymond 1994, 1995, Caccia et al 1997, Cannon et al 1997, Eke et al 2000). For each set of series of known H and a particular signal length N, the variance and bias of the estimates are calculated. The mean squared error (MSE = var + bias^2) gives a combined characterization of bias and variance in the estimates of H, denoted Ĥ. Figure 14 gives an overview of the performance at a signal length of 2^17, long enough for all of the tested methods to perform at their best. For other signal lengths, table 2 gives the MSE values obtained for DHM signals. Methods with the best overall performance can be selected from a list rank-ordered by MSE. Which of the two signal generating methods, SSM or DHM, is better suited for numerical testing? Before answering this question, a fundamental issue about the strategy of numerical testing, not fully appreciated earlier, needs to be seen clearly. When the methods of analysis and synthesis are based on the same algorithm, as in the case of the PSD method and SSM signals or the AC method and DHM signals, the result is almost perfect provided the signal length is adequate: the estimates of the true H are unbiased and have very small variance (Schepers et al 1992, Eke et al 2000) (figure 14, PSD/SSM/fBm and AC/DHM/fGn panels, respectively).
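
As an illustration of this testing scheme, the following minimal Python sketch synthesizes SSM-type test signals of known H, converts between the noise and motion classes by cumulative summation, and computes the bias, variance and MSE of a simple dispersional-type estimator. The function names (ssm_fgn, noise_to_motion, disp_h, mse_of_estimator), the bin and length choices and the simplified estimator are illustrative assumptions, not the reference implementations used in the studies cited above; in particular, an exact DHM generator is not reproduced here.

```python
import numpy as np

def ssm_fgn(beta, n, rng):
    """Spectral synthesis (SSM) sketch: random-phase Fourier coefficients whose
    squared amplitudes follow 1/f^beta, inverse-transformed into a time series."""
    f = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(f)
    amp[1:] = f[1:] ** (-beta / 2.0)           # |A(f)|^2 ~ 1/f^beta
    phase = rng.uniform(0.0, 2.0 * np.pi, f.size)
    x = np.fft.irfft(amp * np.exp(1j * phase), n)
    return (x - x.mean()) / x.std()

def noise_to_motion(x):
    """Cumulative summation converts an fGn-type series into the corresponding fBm."""
    return np.cumsum(x)

def disp_h(x):
    """Toy dispersional analysis (Disp) for fGn-type signals: the SD of the bin
    means falls as m^(H - 1) with bin size m, so the log-log slope plus 1 estimates H."""
    n = x.size
    sizes, sds = [], []
    m = 1
    while m <= n // 8:
        nbins = n // m
        means = x[:nbins * m].reshape(nbins, m).mean(axis=1)
        sizes.append(m)
        sds.append(means.std())
        m *= 2
    slope = np.polyfit(np.log(sizes), np.log(sds), 1)[0]
    return slope + 1.0

def mse_of_estimator(estimator, true_h, n_signals=100, n=2**13, seed=0):
    """Bias, variance and mean squared error (MSE = var + bias^2) of an H estimator
    on synthetic fGn-type signals of known H (SSM noises with beta = 2H - 1)."""
    rng = np.random.default_rng(seed)
    estimates = np.array([estimator(ssm_fgn(2.0 * true_h - 1.0, n, rng))
                          for _ in range(n_signals)])
    bias = estimates.mean() - true_h
    var = estimates.var()
    return bias, var, var + bias ** 2

if __name__ == "__main__":
    for h in (0.2, 0.5, 0.8):
        print(h, mse_of_estimator(disp_h, h))
```
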
From these methods' excellent performance under these specific circumstances it should by no means be inferred that they are ideal in a universal sense for the analysis of physiological signals, because physiological processes are produced by complex interactions of their underlying mechanisms, which do not necessarily result in ideal signal structuring of the SSM or DHM type.

Figure 14. Result of numerical experiments on pure monofractal signals to characterize the precision of different fractal tools. The test signals were generated according to the dichotomous fGn/fBm model by the DHM and the spectral synthesis method (SSM). DHM yields an fGn signal, while SSM produces an fBm signal. The corresponding signals in the fGn/fBm model were created by differencing (fBm to fGn) or cumulative summing (fGn to fBm). A signal length of 2^17 was chosen to compare the methods, to ensure that signal length has a minimal impact on precision (Eke et al 2000). A hundred signals were generated for each level of H from 0.1 to 0.9 in steps of 0.1. Vertical error bars (±SD) indicate the variance of the method. Deviation from the line of identity between true H and estimated H (dotted line) characterizes the bias in the estimates of H. A combined measure of these two is given as the mean squared error (MSE), normalized by the step in H and displayed in the lower right corner of each panel to four decimal places.

PSD would be ideal for their analysis only if they were pure realizations of the SSM algorithm as predicted by equation (27). Similarly, AC would yield a highly reliable Ĥ for a physiological signal resembling the correlation structure of the DHM model extending into its high-frequency range. Since the structuring of physiological signals may be due to self-similarity in either the time or the frequency domain, both signal generating methods are recommended for evaluating the performance of fractal tools, for the purpose of gaining a deeper understanding of their interactions with the prospective physiological signals (figure 14). Variants of the standard method (Fougere 1985, Eke et al 2000) yield different precisions in their fractal estimates (figure 14, columns 1 and 2). Employing signal preprocessing steps (such as parabolic windowing and endmatching) in a proper sequence and excluding the high frequencies when estimating the power slope, as shown by Eke et al (2000), yields a variant of the PSD method that gives the best results in the fractal analysis of DHM signals (figure 14, column 2, panels 1 and 2); this variant is designated the lowPSDw,e method.

Table 2. Precision of the Disp, DFA and lowPSDw,e methods as a function of signal length^a.

             fGn                                 fBm
 n        Disp      DFA       lowPSDw,e     bdSWV     DFA       lowPSDw,e
 2^8      0.00422   0.00920   0.03998       0.00461   0.04164   0.09864
 2^9      0.00274   0.00457   0.01902       0.00249   0.02321   0.04397
 2^10     0.00180   0.00240   0.01039       0.00134   0.01457   0.01954
 2^11     0.00126   0.00131   0.00678       0.00084   0.01015   0.01003
 2^12     0.00092   0.00076   0.00515       0.00047   0.00807   0.00609
 2^13     0.00067   0.00045   0.00422       0.00033   0.00696   0.00454
 2^14     0.00050   0.00031   0.00374       0.00017   0.00631   0.00378
 2^15     0.00039   0.00024   0.00350       0.00010   0.00599   0.00358
 2^16     0.00031   0.00020   0.00338       0.00006   0.00587   0.00350
 2^17     0.00024   0.00018   0.00330       0.00004   0.00582   0.00347
 2^18     0.00020   0.00017   0.00325       0.00002   0.00577   0.00347

^a Mean squared errors are shown, characterizing bias and variance in the fractal estimates of these methods.

Although bdSWV, Disp and DFA proved more robust and precise than the PSD method, the latter is still recommended as the basic method in fractal analysis for visualizing the power spectrum, demonstrating the presence of a proper scaling range, and guiding deeper analysis (Eke et al 2000). We have seen that spectral methods, with the exception of the variant identified by Eke et al (2000), are sensitive to high-frequency disturbances, while DFA, Disp and bdSWV are not, because these methods bin the high frequencies. Each point in DFA, Disp and bdSWV can be taken as (roughly) equivalent to the average of a group of spectral estimates; hence DFA, Disp and bdSWV use fewer points in fitting for H than PSD does for β. Discarding the largest and smallest bins further improves the precision of Disp and bdSWV (Cannon et al 1997). As seen in figure 15, even the best variant of the PSD method and the two robust and precise methods, Disp and bdSWV, show greatly varying performance. A precision of ±0.05 in H can be expected in 100% of the cases for bdSWV if N > 2^13. The same holds only for a restricted range of conditions (N > 2^15 and H < 0.7) when Disp is applied to fGn signals generated by the DHM algorithm. In this test lowPSDw,e is the weakest, with 100% performance only for N > 2^14 and H > 0.2 and with much less precision for H < 0.2. The larger H is, the longer the signal needs to be to obtain the same degree of precision in Ĥ, as summarized in table 3 for small H (H < 0.15), large H (H > 0.8) and intermediate signals analysed by four reliable methods (lowPSDw,e, Disp, bdSWV and DFA) at a precision of 95%. Most reports of fractal analysis of signals obtained from physiological measurements use signal lengths much shorter than those demonstrated to be reliable by Bassingthwaighte and Raymond (1995); acceptable ranges are shown in figure 15 and table 3. It is still very common that results of analyses done on signals of a few hundred data points are reported. Estimating H in these cases is a meaningless exercise, especially if the class of the signal is not reported, which would at least aid the reader in assessing the precision of the estimate from the results of the evaluation studies (Bassingthwaighte and Raymond 1994, 1995, Caccia et al 1997, Cannon et al 1997, Eke et al 2000). DFA, Disp and SWV are directly related methods (Raymond and Bassingthwaighte 1998), which also explains their comparably robust performance (figures 14 and 15) (Eke et al 2000) and their reliable use within the dichotomous fGn/fBm model to evaluate stationary and nonstationary fractal physiological processes (Eke et al 2000). Among the three variants of the SWV method (standard, line detrended and bridge detrended), the bridge detrended method (bdSWV), which removes the 'bridge', i.e. the line between the first and last points within a window, gives the smallest bias and variance in its estimates of H (figure 14, panel bdSWV/DHM/fBm, table 2) (Eke et al 2000).
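
A minimal sketch of the bdSWV idea just described is given below: within each non-overlapping window the 'bridge' (the line joining the first and last samples) is removed, the SD of the residuals is averaged over the windows, and the slope of log SD(n) against log n estimates H for fBm-type signals. The function name, window sizes and fitting range are illustrative; the reference implementation and its refinements (e.g. discarding extreme bin sizes) are described by Cannon et al (1997) and Eke et al (2000).

```python
import numpy as np

def bdswv_h(x, min_win=4):
    """Bridge-detrended scaled windowed variance (bdSWV) sketch for fBm-type signals:
    remove the line joining the first and last samples of each window, take the SD of
    the residuals, average the SDs over all windows of that size, then fit
    log SD(n) against log n; the slope estimates H."""
    n = x.size
    sizes, sds = [], []
    win = min_win
    while win <= n // 4:
        nwin = n // win
        sd_sum = 0.0
        for i in range(nwin):
            w = x[i * win:(i + 1) * win]
            bridge = np.linspace(w[0], w[-1], win)   # the 'bridge' within the window
            sd_sum += (w - bridge).std()
        sizes.append(win)
        sds.append(sd_sum / nwin)
        win *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(sds), 1)
    return slope

# An fBm-like test signal obtained by cumulatively summing white noise (H close to 0.5)
# should yield an estimate near 0.5.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    motion = np.cumsum(rng.standard_normal(2**14))
    print(bdswv_h(motion))
```
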

Figure 15. Evaluation of the precision of an improved variant of the PSD method (lowPSDw,e) and two robust methods, Disp and bdSWV, as a function of H and signal length. Signal generation was the same as described for figure 12, except for the following two aspects: signal length varied from 2^8 to 2^18, and true H was increased from 0.01 to 0.99 in steps of 0.01. The intensity-coded parameter is the per cent of estimates falling into the range Htrue ± 0.05. Light areas designate the range of lengths and H values where the tested methods' performance is good.

Table 3. Signal lengths at which the Disp, lowPSDw,e, DFA and bdSWV methods attain a precision of 95%, listed separately for fGn and fBm signals and for the ranges H < 0.15, 0.15 < H < 0.8 and H > 0.8. The required lengths range from 2^10 to 2^18 data points, with some method and H-range combinations not reaching this precision at any tested length.

In order to undertake coarse graining, both versions of the CGSA analysis have to calculate H by an independent method. Hurst's rescaled range method was originally chosen, which proved quite weak in numerical testing (Bassingthwaighte and Raymond 1994). The precision of the CGSA method may be decreased further by the fact that it employs the standard periodogram method, which has a limited range of applicability (β ≤ 2) (Fougere 1985) (figure 14, panel PSD/DHM/fBm). Newer, more precise variants of the Fourier method applicable in the full range of the fGn/fBm model (−1 ≤ β ≤ 3), such as lowPSDw,e, which eliminates the high-frequency range from the spectral estimates and uses signal preprocessing steps like windowing and endmatching, in this order (Eke et al 2000), should be considered in future improvements. Given that the CGSA method is widely used in the analysis of heart rate variability, the impact of these circumstances needs to be looked at more closely. Classification of fractal signals always leaves a range of uncertainty where fractal noises and motions cannot be separated. For a comparison of the SSC, lowPSDw,e and DFA methods see figure 16, where the misclassification rate is shown as a function of signal length for 2^8 ≤ N ≤ 2^18. In these tests, SSM and DHM signals were generated with ideal frequency domain or time domain power law scaling, respectively. For all lengths tested, SSC proved to be a very robust method of signal classification irrespective of the model used to generate the signal.
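
A minimal sketch of the signal summation conversion (SSC) idea is given below, under the assumptions just described: the measured series is cumulatively summed, the SWV slope of the summed series is converted into a spectral-index estimate (β̂ = 2Ĥ − 1), and the class is read from the position of β̂ relative to the 1/f boundary. The plain (non-detrended) SWV step, the function names and the width of the uncertainty band are illustrative, not the exact procedure of Eke et al (2000).

```python
import numpy as np

def swv_slope(x, min_win=4):
    """Log-log slope of the mean windowed SD versus window size (plain SWV)."""
    sizes, sds = [], []
    win = min_win
    while win <= x.size // 4:
        nwin = x.size // win
        sd = np.mean([x[i * win:(i + 1) * win].std() for i in range(nwin)])
        sizes.append(win)
        sds.append(sd)
        win *= 2
    return np.polyfit(np.log(sizes), np.log(sds), 1)[0]

def ssc_classify(x, band=0.1):
    """Signal summation conversion sketch: cumulatively sum the series, measure the
    SWV slope of the summed series, convert it to a spectral-index estimate
    (beta_hat = 2*slope - 1) and compare it with the 1/f boundary; 'band' is an
    illustrative uncertainty margin around beta = 1."""
    beta_hat = 2.0 * swv_slope(np.cumsum(x)) - 1.0
    if beta_hat < 1.0 - band:
        label = "fGn"
    elif beta_hat > 1.0 + band:
        label = "fBm"
    else:
        label = "unclassified"
    return beta_hat, label

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    print(ssc_classify(rng.standard_normal(2**14)))              # white noise -> fGn
    print(ssc_classify(np.cumsum(rng.standard_normal(2**14))))   # Brownian motion -> fBm
```
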

8. How to apply fractal methods to physiological signals?

Applying the complex arsenal of fractal tools to the analysis of physiological temporal processes is inherently difficult, for a number of reasons, among them the mathematical aspects of the approach. Indeed, before application, every attempt should be made to translate the algorithms into terms meaningful to the physiologist, a practice we have tried to follow in this review. The application itself is not a one-step procedure, such as calculating a fractal measure (H, β, D, etc) by a method chosen on the basis of mere availability for a signal of a length determined by some arbitrary circumstance. It should be clear by now that this would be a high-risk operation and should be abandoned, however common the practice may still be. The flowchart in figure 17 summarizes the steps to be taken, from deciding on signal length and sampling rate when producing a time series to choosing the proper method for analysis (rounded boxes in figure 17). Another flowchart, focusing more on the technical details of the fractal analysis, can be found elsewhere (Eke et al 2000).

Figure 16. Precision of signal classification. An improved variant of the PSD method (lowPSDw,e) (Eke et al 2000), the detrended fluctuation analysis (DFA) (Peng et al 1994) and the signal summation conversion method (SSC) (Eke et al 2000) were compared at signal lengths 2^8 ≤ N ≤ 2^18 in 11 steps. Fractal estimates by these methods were converted into β. A total of 19 800 monofractal DHM signals were generated at each signal length, in increments of 0.02 in β over the whole range of the fGn/fBm model (−1 ≤ β ≤ 3). The misclassification rate is displayed intensity coded as a per cent of the total trials at any β and N. lowPSDw,e and DFA have a relatively wide range of uncertainty on the noise side of the 1/f boundary for DHM signals. This range is much narrower for SSM signals, owing to the signals and methods sharing the same algorithmic basis (Eke et al 2000). SSC proved a very robust and precise method of signal classification on both DHM and SSM signals, from short to long signals, with only a narrow range of uncertainty on the noise side of the 1/f boundary.

8.1. Choose adequate signal length and sampling frequency

From the standpoint of fractal analysis, signal length and sampling frequency are interrelated parameters that fundamentally determine the outcome of the analysis (see figure 12). Not all frequency components are fully represented in the power spectrum of a series, xi, of a temporal signal x(t) sampled at fs; the upper limit is the Nyquist frequency fN = fs/2. fs should be chosen high enough to minimize the artifactual influence of aliasing. When selecting fs, the sampling theorem of Claude Shannon should be observed (Taylor 1994), prescribing fs > 2fmax (the Nyquist rate), where fmax is the upper bound of the frequencies present in the signal. If fs is chosen an order of magnitude larger than 2fmax, this condition is surely met. The higher the chosen fs, the lower the power found at fN, beyond which aliasing into the spectrum occurs. Increasing fs to treat aliasing cannot, however, continue beyond some upper limit, because exceeding it would increase the chance that high-frequency estimates in the power spectrum no longer reflect physiology. This would not be much of a problem if the upper 75% of the spectral estimates were to be discarded as recommended (Eke et al 2000) and if these physiologically irrelevant estimates were to fall into the discarded range.

Figure 17. Flowchart of the strategy for fractal time series analysis. Steps from creating the time series from the temporal record of a physiological signal to calculating the Hurst coefficient are shown: create the time series; apply descriptive statistics; isolate the case of fractal (PSD, scaling range ≥ 2 decades); classify the signal (SSC); for an fGn signal determine H by Disp, for an fBm signal by bdSWV; if the class cannot be found, use the class-independent methods lowPSDw,e and DFA. Note the central importance of signal classification, which allows the selection of one of the two robust methods of high precision for obtaining H. A more detailed flowchart can be found elsewhere (Eke et al 2000).

They can, however, extend into the range below the upper 75% if the signal length N is chosen much too low relative to fs, the latter already having been chosen high enough to minimize aliasing. N should be large if we want to capture low-frequency dynamics in our analysis. Since most signals in physiology are known to be dominated by low-frequency fluctuations, this interaction between fs and N needs to be fully understood. Their values should be chosen as an optimum between the need to capture the relevant dynamics, to minimize aliasing, to extend the frequency range sufficiently into the slow dynamics known to be physiologically relevant, and to contain the artifactual or physiologically irrelevant high-frequency components within the upper 75% of the spectral range. Excluding the upper 75% of the spectral estimates from the fit makes lowPSDw,e less sensitive to high-frequency disturbances than the standard version. Similarly, the robustness and precision of bdSWV, Disp and DFA stem at least in part from their weighting of estimates in larger bins (equivalent to lower frequencies) over those in small bins. By the same token, Disp's weaker performance compared to bdSWV in their respective signal classes is most likely due to the fact that high and low frequencies have a comparable impact on the spectral slope of a noise, tilting it up or down from the horizontal position of white noise, while the low frequencies always dominate the nonstationary fBm signal, resulting in a slope tilted in one direction only.
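
The bookkeeping implied by this interaction between fs and N can be made explicit with a small helper; the function name and the example values are illustrative, and the 75% discard fraction follows the lowPSDw,e recommendation cited above.

```python
import numpy as np

def spectral_fit_band(fs, n, discard_fraction=0.75):
    """For n samples taken at fs: the Nyquist frequency, the lowest nonzero frequency
    fs/n (the frequency resolution), the upper bound of the band kept for fitting the
    spectral slope after discarding the top 'discard_fraction' of the estimates, and
    the resulting potential scaling range in decades."""
    f_nyquist = fs / 2.0
    f_min = fs / n
    f_max_fit = (1.0 - discard_fraction) * f_nyquist
    decades = np.log10(f_max_fit / f_min)
    return f_min, f_max_fit, f_nyquist, decades

# Example in the spirit of the LDF perfusion recordings discussed in section 9:
# fs = 200 Hz and n = 2**17 samples leave roughly 0.0015-25 Hz for the fit,
# i.e. a potential scaling range of about four decades.
print(spectral_fit_band(200.0, 2**17))
```
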


8.2. Characterize measurement noise

No measurement is without noise. In addition to the customary techniques for characterizing noise in the signal, such as the signal-to-noise ratio, the PSD method is recommended. The aim is to demonstrate that, within the range of frequencies of the fractal analysis, the signal contains only insignificant power due to instrument noise and to signal contributions of origins irrelevant to the study, as compared with the fluctuations due to the sampled (relevant) physiological process. Instrument noise can be characterized in the power spectral density plot of the signal sampled from the instrument's output when it is disconnected from the physiological signal source. The second aim can be achieved by optimizing the scaling range in such a way that it excludes signal components unrelated to the observed physiological process. Bassingthwaighte and Raymond (1995) found that adding random noise with an SD equal to the SD of a fractal signal with H > 0.7 (a strongly correlated fGn signal) caused less than a 10% downward error in the estimate of H. Cannon et al (1997) found the same for fBm signals with added random motion. This effect can and should also be tested numerically in experimental studies by finding SDnoise/SDsignal. In our studies analysing the fractal correlation in the spontaneous fluctuations of regional blood cell perfusion in the rat brain cortex (Eke et al 2000) we found SDnoise/SDsignal = 7.6% (unpublished data). When a pure motion signal with the β calculated from the spectrum of the instrument noise was added, at an SD ratio of 7.6%, to a pure motion signal of ĤfBm = 0.22 (the Ĥ found for the blood cell perfusion time series of the fBm type), the downward error in the estimate of H was found to be only 2.2%.

8.3. Demonstrate the presence of a fractal scaling range

The PSD method is a key tool in demonstrating that power law scaling as predicted by equation (27) is present in the signal throughout a continuous range of 2 decades of frequencies (Cannon et al 1997). Note that the assessment of the scaling range is influenced by our choice of fs and N (figure 12). If the scaling range is shorter than 2 decades, fs and N can be modified in an attempt to find the right combination that extends the identified scaling range to SR ≥ 2 decades. It is also important to determine the physiological meaning and significance of the lower and upper cut-off points (fmin and fmax) of the scaling range.

8.4. Determine signal class

Identifying the class of the signal according to the dichotomous fGn/fBm model is useful not only in signal analysis but also in interpreting the underlying physiology (Eke et al 2000). Typically the mechanism of signal generation in the studied physiological process is unknown, hence it is best to use robust methods, like SSC, for signal classification. The range of uncertainty for SSC is very small irrespective of whether the signal is generated by the SSM or the DHM model (figure 16). Even when the improved version of PSD is used and the signal is known to be of the SSM character, the misclassification rate of lowPSDw,e shows a very strong dependence on signal length and still falls behind that of SSC. Proper classification of the signal is a prerequisite to using the robust time domain methods (Disp and bdSWV) with improved precision.
Failure to find the right class or, as is common in the literature, not using classification as a prelude to analysis at all, results in gross errors in the estimates of H (figure 18).
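
The classification step used in the exemplary analysis of figure 18 can be sketched as follows, under simplifying assumptions: the series is endmatched and windowed, its periodogram is computed, the spectral slope is fitted over the lowest quarter of the frequencies only, and the class is read from β̂ relative to the 1/f boundary. The preprocessing details, their order and the function name are illustrative stand-ins for the lowPSDw,e procedure of Eke et al (2000), not a reproduction of it.

```python
import numpy as np

def low_psd_beta(x, fs=1.0, keep_fraction=0.25):
    """Sketch of a lowPSD-type spectral-slope estimate: endmatch the series (subtract
    the line joining its first and last points), apply a parabolic window, compute the
    periodogram and fit log power against log frequency over the lowest
    'keep_fraction' of the nonzero frequencies. Returns beta_hat and a class label."""
    n = x.size
    trend = np.linspace(x[0], x[-1], n)          # endmatching
    y = x - trend
    j = np.arange(1, n + 1)
    w = 1.0 - (2.0 * j / (n + 1.0) - 1.0) ** 2   # parabolic window
    y = (y - y.mean()) * w
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)[1:]
    power = np.abs(np.fft.rfft(y))[1:] ** 2
    keep = freqs <= keep_fraction * (fs / 2.0)   # discard the upper 75% of estimates
    slope = np.polyfit(np.log(freqs[keep]), np.log(power[keep]), 1)[0]
    beta_hat = -slope                            # power ~ 1/f^beta
    label = "fGn" if beta_hat < 1.0 else "fBm"
    return beta_hat, label

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Brownian motion has beta close to 2, so it should be labelled fBm.
    print(low_psd_beta(np.cumsum(rng.standard_normal(2**14))))
```
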

Figure 18. Exemplary analysis of blood cell perfusion time series obtained by LDF-flowmetry from the rat cerebral microcirculation, with special emphasis on the importance of signal classification. The raw signal (first row) was classified as fBm by lowPSDw,e, based on having an estimated spectral index β̂ > 1, with all spectral estimates above 25 Hz (marked by dashed lines) excluded when fitting for β̂ (second row). Therefore the best estimate of H can be obtained as the mean of the H values given by lowPSDw,e and bdSWV. The conclusion is that the raw signal is a fractal motion with H = 0.22. Disp cannot be applied to fBm signals, nor bdSWV to fGn signals, while lowPSDw,e can be applied to both signal classes. Note that Disp does not give an estimate of H on a motion-type physiological signal (bottom panel) (Eke et al 2000).

8.5. Choose the appropriate method for analysis

What we can learn from the results of the numerical experiments evaluating the performance of the fractal tools on the two classes of the fGn/fBm model using DHM and SSM signals (figures 14 and 15 and tables 2 and 3) is that there is no single recipe; instead there are choices we should be aware of. Signal classification by SSC is recommended as the first step, for the class found will identify our potential tools. If we have good reason to believe, or better yet can prove, that our real signal is very close in character to the ideal one generated by the SSM or the DHM model, then the lowPSDw,e or the AC method, respectively, is likely to give very precise estimates of H. If this is not the case, it is better to use the average of the estimates given by a robust method (Disp on noise, bdSWV on motion) and by lowPSDw,e (Eke et al 2000). If the signal class cannot be determined, we should settle for the less precise lowPSDw,e and DFA methods, which have the advantage of being applicable to both signal categories, keeping in mind that the estimates of lowPSDw,e are biased on anticorrelated DHM noises and motions, and those of DFA on anticorrelated DHM motions and SSM noises. When DFA is compared to bdSWV, the latter is evidently more robust. Consequently, we should also take the actual value of H into account in further refining the analysis and select the methods with the smallest bias and variance in the particular range of H.

9. Applications of fractal time series analysis in physiological research with emphasis on pitfalls of analysis

The application of fractal approaches in biomedical research has led to new insight into the complex patterns of signals emerging from physiological systems. These studies have demonstrated (i) that behind the seemingly random fluctuations of a number of physiological parameters a special order, the structure of fractal geometry, can be found; (ii) that a single measure of this complex structure (one of the fractal measures) can describe the complexity of the fluctuations; and (iii) that this parameter does change in response to disturbances of the system such as activity, ageing and disease. These were the first steps, in some cases taken before the details of fractal theory, and the prerequisites of its application to real physiological systems in particular, had been outlined and the possible sources of error identified. This latter point will be emphasized in the brief survey below of the applications of fractal time series analysis in physiological research. In the last 15 years more than a thousand studies have been published using the fractal approach to analyse fluctuations in physiological signals, aiming to extract useful information from the seemingly random excursions of the observed parameter around the set-point of the controlled system. Overviews can also be found (West 1990, Bassingthwaighte et al 1994, Nonnenmacher et al 1994) that give an excellent account of the fractals discovered in living structures and processes. Following a brief overview of the areas where fractal time series analysis has set foot in biomedicine, we focus on two areas in particular: applications relating to cerebral blood flow fluctuations, including studies of our own, and heart rate variations, the latter being responsible for over half of the fractal time series publications to date.

The posture of our body is maintained by complex processing of the continuous influx of signals (from proprioceptors located in the muscles and joints, receptors in the vestibular system and those in the retina of the eye) producing a continuous efflux of neural impulses from several centres in the central nervous system via the motor fibres innervating the skeletal muscles. It has been demonstrated that the random wobbling around a stable position has a fractal character (Riley et al 1997, Peterka 2000) and that the rhythm of normal and abnormal gait is also a fractal process (Hausdorff et al 1999, 2000).


The rhythm of the normal breathing pattern is controlled by a complex interaction of chemical and neural parameters. The partial pressure of carbon dioxide in the blood is monitored by central chemoreceptors, and the extent of inflation of the lungs by mechanoreceptors. Neuronal circuits receiving additional inputs from internal rhythm generators as well as from higher centres in the central nervous system evaluate these signals. The resulting neural outflow from this control system generates a normal breathing pattern that fluctuates over time in a fractal, self-similar manner (Szeto et al 1992, Dirksen et al 1998, Frey et al 1998).

When the layout of DNA sequences is converted into a one-dimensional series, DFA analysis, typically used in time series analysis, can be applied to its fractal characterization. Interestingly, these 'DNA landscapes' proved fractal, suggesting that the biological code may have aspects using long-range fractal correlation patterns along the gene (Peng et al 1994, Arnéodo et al 1998, Yu et al 2000). Such extended coding seems a likely possibility, since coding in triplet sequences was demonstrated to be more important than coding in sequences of the bases (Arnéodo et al 1998).

The functional integrity of our brain depends on an uninterrupted supply of blood flow through its regions and microregions. The ultrasonic transcranial Doppler (TCD) and the laser-Doppler (LDF-flowmetry) methods have proved especially useful in the continuous monitoring of blood flow (or rather one of its components) in the large arteries supplying the brain or through regions of the brain cortex. Fractal analysis was applied quite early to continuous records of blood cell velocity in large vessels supplying the brain as monitored by the TCD method (Rossitti and Stephensen 1994, Blaber et al 1996, Zhang et al 2000). Although these velocity time series proved to be fractal, the results might require re-evaluation for two reasons. First, none of the signals analysed was longer than 512 points, which makes the results suspect of unacceptable error in their fractal estimates. Second, when the published fractal estimates are expressed in β, they are in serious conflict (∼0.52 (Rossitti and Stephensen 1994), ∼1.66 (Blaber et al 1996), ∼0.34 (Zhang et al 2000)), because they fall at a considerable distance on the two sides of the 1/f boundary of the fGn/fBm model. The only possible explanation for this extreme scatter in β is that in these studies the signal class was not determined, not even when a class-specific method of analysis (i.e. Disp) was applied. As shown earlier, this leads to grave errors in estimating H (figure 18): typically, instead of H ∼ 0.3 (β ∼ 1.6) identifying an fBm signal with anticorrelated increments, H ∼ 0.8 (β ∼ 0.6) is found, mistakenly indicating a correlated fGn signal.

We have used LDF-flowmetry to obtain sufficiently long (655 s, 2^17 data points) time series of blood cell perfusion from the brain cortex of anesthetized rats (Eke et al 1997, 2000). The time constant of the LDF instrument was set to its minimum in order to capture the fastest possible dynamics in the signal. fs was chosen at 200 Hz to observe the sampling theorem and avoid aliasing, setting the upper bound of the fastest observed dynamics to 20 Hz when our lowPSDw,e is used. All of the perfusion time series we have analysed so far proved to be fractal motion (fBm) signals, hence we chose the bdSWV method to analyse them. Their fractal estimates scatter in a narrow range around H ∼ 0.23 (β ∼ 1.46).
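
The conversions between H and β used in this section follow β = 2H − 1 for fGn and β = 2H + 1 for fBm. The small helper below, with an illustrative function name, makes explicit how large the error caused by misclassification is: the same record read as a correlated noise (H ≈ 0.8) and as an anticorrelated motion (H ≈ 0.3) ends up on opposite sides of the 1/f boundary.

```python
def beta_from_h(h, signal_class):
    """Spectral index from the Hurst coefficient: beta = 2H - 1 for fGn, beta = 2H + 1 for fBm."""
    return 2.0 * h - 1.0 if signal_class == "fGn" else 2.0 * h + 1.0

true_beta = beta_from_h(0.3, "fBm")    # correctly classified fBm with H ~ 0.3  ->  beta ~ 1.6
wrong_beta = beta_from_h(0.8, "fGn")   # misread as a correlated fGn with H ~ 0.8 -> beta ~ 0.6
print(true_beta, wrong_beta)           # 1.6 and 0.6 lie on opposite sides of the 1/f boundary
```
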
Since its first description (Hon and Lee 1965), heart rate variability (HRV) has attracted much attention in the cardiovascular literature owing to the vital functions of the heart, and also because the irregular but apparently normal pattern it revealed challenged the notion of reliability associated with the regularly beating normal heart. Descriptive statistics and frequency decomposition (spectral) techniques have been used extensively in the analysis of different aspects of HRV and were standardized in 1996 (European Society of Cardiology and North American Society of Pacing and Electrophysiology 1996). The advance of fractal and chaotic approaches in HRV research has been quite rapid in the last decades; however, the standards for selecting the proper methods and conditions for measurement and analysis have not yet been established. A task force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology (Task Force) (European Society of Cardiology and North American Society of Pacing and Electrophysiology 1996) gave guidelines for the use of spectral analytical methods in system identification, which has certainly been regarded as a reference for setting the scale in fractal HRV studies as well. The task force recommended two scales: a shorter, approximately 5-min period, and a substantially longer 24-h period. The report specified the corresponding lengths as 512–1024 and 2^18 data elements, respectively. The most often used fractal tools in HRV analyses are PSD, DFA and CGSA. Signal classification is not required for these methods; it is embedded in PSD and CGSA, and DFA can handle both fGn and fBm signals. Nevertheless, investigators should realize that reporting the class of the signal provides valuable information on the signal character and can facilitate expanding the studies towards other robust methods of high precision (Disp and bdSWV) for the purpose of confirming the fractal measures found, especially if the differential changes are small. Comparison of results from different investigations is difficult because (i) fractal methods perform increasingly better as signal length increases, with N = 8192 (2^13) being the point of transition (figure 15, DFA on fBm/DHM signals); (ii) comparison of fractal measures found on different scales is questionable, because the evaluated spectra represent different ranges of frequencies, rendering the scaling range vastly different in short- and long-record fractal analyses (figure 12); and (iii) the role of motion effects, as described by Fortrat et al (1999) and Serrador et al (1999), can further complicate the issue. Control values of the spectral exponent (β) vary greatly, from 0.8 to 1.9, depending on signal length and method of analysis (table 4). Note that these values are scattered around the fGn/fBm boundary (table 4). The uncertainty in the estimates of the spectral index can also stem from the greatly different scaling ranges involved in these studies; the longer the record, the more low-frequency components are represented (figure 12). It has been shown that the high-frequency components of HRV are influenced by the vagal tone and that the ratio of low to high frequencies changes with sympathetic control or with the sympatho-vagal balance (European Society of Cardiology and North American Society of Pacing and Electrophysiology 1996), while the ultra-low-frequency components are related to motion effects (Serrador et al 1999). To ensure that the high frequencies are properly represented in the spectra of short-term records, the task force recommends that the length of the record should be ten times the wavelength of the lower frequency bound of the investigated component (European Society of Cardiology and North American Society of Pacing and Electrophysiology 1996). This rule applies only to short-term records when the standard PSD method is used for frequency decomposition and the analysis is focused on separating peaks. Records of this length, however, are not suitable for fractal analysis by PSD or, for that matter, any other fractal method (Bassingthwaighte and Raymond 1995).
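
A short worked example of the record-length arithmetic may help here; the 0.04 Hz lower bound and the 60 beats/min mean heart rate are illustrative assumptions, not values taken from the task force report.

```python
import math

f_low = 0.04                  # Hz; assumed lower bound of the slowest component of interest
record_s = 10.0 / f_low       # task-force rule of thumb: ten wavelengths -> 250 s
beats = record_s * 1.0        # ~250 beats at an assumed mean heart rate of 1 Hz (60 beats/min)
n_next_pow2 = 2 ** math.ceil(math.log2(beats))
print(record_s, beats, n_next_pow2)   # 250.0 s, ~250 beats, 256 = 2^8 data points

# A 2^8-point series falls far short of the roughly 2^13 or more samples that the
# numerical tests above indicate are needed for reliable fractal estimates.
```
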
The results of numerical testing of the different fractal tools (Bassingthwaighte and Raymond 1995, Caccia et al 1997, Cannon et al 1997, Eke et al 2000) can serve as the basis for estimating the signal length that would be adequate for capturing a specific aspect of HRV within a given scaling range. The clinical benefits resulting from fractal HRV analysis can best be evaluated by surveying the data in the literature from studies on HRV in ageing (Pikkujämsä et al 1999, Sakata et al 1999), in disease states (Peng et al 1995b, Bigger et al 1996, Lombardi et al 1996, Butler et al 1997, Mäkikallio et al 1997, 1998, 1999, Huikuri et al 2000) and on survival following heart attack (Huikuri et al 1998, 2000, Mäkikallio et al 1999). The long-range correlation in HRV increases with age irrespective of the method and the scale used (from β ∼ 1 in adolescents to β ∼ 1.3 in the elderly) (Pikkujämsä et al 1999, Sakata et al 1999).

Table 4. Fractal estimates of human heart rate variability in control conditions^a. For each study the method (PSD or CGSA), the record length (from 256 data points up to 2^18 points or 24 h), the number of subjects and the mean spectral index β̄ are listed, together with group means. Studies included: Bigger et al (1996), Hayano et al (1997), Aoyagi et al (2000), Huikuri et al (1999), Pikkujämsä et al (1999), Sakata et al (1999), Fortrat et al (1998, 1999), Yamamoto et al (1992), Hoshikawa and Yamamoto (1997), Togo and Yamamoto (2000), Lucy et al (2000), Blaber et al (1996), Butler et al (1997), Hughson et al (1995) and Satoh et al (1999).

^a β̄ is the mean estimate of the spectral indices; Δβ is the differential change in β expressed as a per cent of the full range of the fGn/fBm model (−1 < β < 3) relative to the value of β in line 3, where the estimate is the most precise owing to the longest signal length.

Ischemic and cardiomyopathic conditions (Peng et al 1995b, Bigger et al 1996, Lombardi et al 1996, Butler et al 1997, Mäkikallio et al 1997, 1998, 1999, Huikuri et al 2000) also increase the long-range correlations in HRV as compared to control populations. The changes in β are, however, very moderate (3.5 ± 3.6%), and the values of β lie within the transitional range between fractal noises and motions. Given these circumstances, and the performance of the PSD method as tested on DHM signals, these estimates should be considered weak unless the HRV signals mimic those of the SSM model (figure 15). In that case the PSD method should be regarded as very precise, hence in spite of the small changes found in β the conclusion of an increased long-term correlation could be much firmer; a possibility we will investigate in depth in the future. The heart is known to function under a strong autonomic influence. Some of these influences had earlier been identified in HRV studies, and these have recently been confirmed by studies
with this influence removed, for example following heart transplantation (Hughson et al 1995, Bigger et al 1996) (from β ∼ 1 to β ∼ 1.5–2) or in autonomic failure (from β ∼ 1.4 to β ∼ 1.6) (Blaber et al 1996). The correlation seems to increase (differential change in β = 9.1 ± 9.55%) both in the spectra of short (Hughson et al 1995, Blaber et al 1996) and of long (Bigger et al 1996) HRV signals. We should remember that a short signal length decreases the reliability of fractal estimates, which is particularly problematic since autonomic influence has been demonstrated to affect the high-frequency range that dominates the spectra of short signals. The only long-record study (Bigger et al 1996) demonstrates the importance of autonomic control in providing long-term adaptability of the cardiac rhythm. In survival studies the DFA and the PSD methods yield contradictory results (Huikuri et al 2000): PSD suggests that increased correlation decreases the survival rate, while DFA suggests the opposite. However, the differences obtained using DFA are very small, in one case insignificant. When the current state of our understanding of the prerequisites for a reliable fractal time series analysis, as outlined in this review, is applied to the results of fractal HRV studies, we may conclude that these results seem influenced by methodological factors owing to short signal lengths and vastly different scaling ranges. Further discrimination and refinement of the fractal estimates of human HRV would be possible if signal classification were part of the analysis, followed by the application of class-specific methods offering robust and more precise estimates of H. Further work is needed to characterize the performance of the methods widely used by researchers in the HRV field on ideal fractal signals produced by different models and also on real physiological signals. These studies could then help in confirming the notion of a proposed shift of HRV dynamics towards a more correlated state in disease or ageing (Peng et al 1995a).

10. Future trends and closing remarks

10.1. Need to identify faulty data in the literature

The potential of monofractal analysis is far from being fully exploited, since it was only very recently that attention was drawn to the need for thorough testing of the monofractal tools (Bassingthwaighte and Raymond 1994, 1995, Caccia et al 1997, Cannon et al 1997, Eke et al 2000) and for a much deeper understanding of the nature of physiological signals and of their interactions with the fractal methods used for their analysis. The central importance of signal classification in reliable fractal time series analysis has only recently been identified (Eke et al 2000). Most of the published data in the literature resulted from earlier studies unaware of these issues, and hence their revision for potentially being in error seems warranted. A likely scenario is the following: a published fractal measure indicates the presence of a strong correlation in the signal because the dispersional method gave H > 0.8, whereas in reality the signal, had it been analysed by the PSD or the signal summation conversion method, would have proven to be an fBm, for which the proper choice, the bridge detrended scaled windowed variance method, would have yielded H ≈ 0.3; a completely different conclusion would then be drawn on the correlation structure of the signal, namely a strong anticorrelation of the increments (figure 18) (Rossitti and Stephensen 1994).

10.2. Stepping beyond monofractal analysis

Monofractal analysis can give a much deeper insight into the internal make-up of physiological signals than traditional approaches such as descriptive statistics or simple frequency decomposition by Fourier analysis. Monofractal analysis has already shown that self-similar, fractal, 1/f-type patterns seem ubiquitous in nature, including
many biological phenomena. Some considerations imply that the universality of 1/f-type patterns in nature could be explained theoretically (Schlesinger 1987): 1/f noises play a similar role among stochastic processes (i.e. dependent data series) to that of the normal distribution among the distributions of independent datasets. Other authors assume stochastic feedback mechanisms to be behind physiological 1/f noises (Ivanov et al 1998). The first steps have already been taken to explore these signals further by testing whether their self-similarity is indeed isotropic, a fundamental property of monofractals. One should anticipate that physiological processes could, in principle, produce even more complex, anisotropic, multifractal fluctuations, of which monofractal analysis can only give a measure of the largest fractal dimension. One of the first physiological applications of multifractal tools (Ivanov et al 1999) has demonstrated that this is a real possibility, as it revealed multifractal dynamics in human HRV time series.

Acknowledgments

The computer programs 'FracTool' and 'HRVtool', running on the Macintosh, PC and Unix platforms and estimating the Hurst coefficient of physiological time series, are available on the Web at http://www.elet2.sote.hu/eke/FRACTPHYS/. These programs can be freely distributed as long as the 'readme' file is included and the use of the programs is properly acknowledged. This study was supported by OTKA grants I/3 2040, T 016953, T 034122, and NIH grants TW00442 and RR1243.

References

Aoyagi N, Ohashi K, Tomono S and Yamamoto Y 2000 Temporal contribution of body movement to very long-term heart rate variability in humans Am. J. Physiol. (Heart Circ. Physiol.) 278 H1035–41
Arnéodo A, Bacry E and Muzy J 1995 The thermodynamics of fractals revisited with wavelets Physica A 213 232–75
Arnéodo A, d'Aubenton-Carafa Y, Audit B, Bacry E, Muzy J and Thermes C 1998 Nucleotide composition effects on the long-range correlations in human genes Eur. Phys. J. B 1 259–63
Avnir D, Biham O, Lidar D and Malcai O 1998 Is the geometry of nature fractal? Science 279 39–40
Bassingthwaighte J B 1988 Physiological heterogeneity: fractals link determinism and randomness in structures and functions NIPS 3 5–10
Bassingthwaighte J B and Beyer R P 1991 Fractal correlation in heterogeneous systems Physica D 53 71–84
Bassingthwaighte J B and Raymond G M 1994 Evaluating rescaled range analysis for time series Ann. Biomed. Eng. 22 432–44
Bassingthwaighte J B and Raymond G M 1995 Evaluation of the dispersional analysis method for fractal time series Ann. Biomed. Eng. 23 491–505
Bassingthwaighte J, Liebovitch L and West B 1994 Fractal Physiology (New York: Oxford University Press)
Beran J 1994 Statistics for Long-Memory Processes (New York: Chapman and Hall)
Bigger J J, Steinman R, Rolnitzky L, Fleiss J, Albrecht P and Cohen R 1996 Power law behavior of RR-interval variability in healthy middle-aged persons, patients with recent acute myocardial infarction, and patients with heart transplants Circulation 93 2142–51
Blaber A P, Bondar R L and Freeman R 1996 Coarse graining spectral analysis of HR and BP variability in patients with autonomic failure Am. J. Physiol. 271 H1555–64
Butler G C, Ando S and Floras J S 1997 Fractal component of variability of heart rate and systolic blood pressure in congestive heart failure Clin. Sci. Colch. 92 543–50
Caccia D C, Percival D, Cannon M J, Raymond G and Bassingthwaighte J B 1997 Analyzing exact fractal time series: evaluating dispersional analysis and rescaled range methods Physica A 246 609–32
Cannon M J, Percival D B, Caccia D C, Raymond G M and Bassingthwaighte J B 1997 Evaluating scaled windowed variance methods for estimating the Hurst coefficient of time series Physica A 241 606–26
Davies R B and Harte D S 1987 Tests for Hurst effect Biometrika 74 95–101
Dirksen A, Holstein-Rathlou N-H, Madsen F, Skovgaard L, Ulrik C, Heckscher T and Kok-Jensen A 1998 Long-range correlations of serial FEV1 measurements in emphysematous patients and normal subjects J. Appl. Physiol. 85 259–65

Eke A, Hermán P, Bassingthwaighte J B, Raymond G M, Cannon M, Balla I and Ikrényi C 1997 Temporal fluctuations in regional red blood cell flux in the rat brain cortex is a fractal process Adv. Exp. Med. Biol. 428 703–9
Eke A, Hermán P, Bassingthwaighte J B, Raymond G M, Percival D B, Cannon M, Balla I and Ikrényi C 2000 Physiological time series: distinguishing fractal noises from motions Pflügers Arch. Eur. J. Physiol. 439 403–15
European Society of Cardiology and North American Society of Pacing and Electrophysiology 1996 Heart rate variability: standards of measurement, physiological interpretation and clinical use. Task force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology Circulation 93 1043–65
Falconer K 1990 Fractal Geometry: Mathematical Foundations and Applications (New York: Wiley)
Fortrat J O, Formet C, Frutoso J and Gharib C 1999 Even slight movements disturb analysis of cardiovascular dynamics Am. J. Physiol. 277 H261–7
Fortrat J-O, Nasr O, Duvareille M and Gharib C 1998 Human cardiovascular variability, baroreflex and hormonal adaptations to a blood donation Clin. Sci. 95 269–75
Fougere P F 1985 On the accuracy of spectrum analysis of red noise processes using maximum entropy and periodogram methods: simulation studies and application to geophysical data J. Geophys. Res. 90 4355–66
Frey U, Silverman M, Barabási A and Suki B 1998 Irregularities and power law distributions in the breathing pattern in preterm and term infants J. Appl. Physiol. 86 789–97
Gabor D 1946 Theory of communication J. Inst. Electr. Eng. 93 429–57
Glass L 2001 Synchronization and rhythmic processes in physiology Nature 410 277–84
Goodwin B 1997 Temporal organisation and disorganization in organisms Chronobiol. Int. 14 531–6
Hausdorff J, Lertratanakul A, Cudkowicz M, Peterson A, Kaliton D and Goldberger A 2000 Dynamic markers of altered gait rhythm in amyotrophic lateral sclerosis J. Appl. Physiol. 88 2045–53
Hausdorff J M, Zemany L, Peng C-K and Goldberger A L 1999 Maturation of gait dynamics: stride-to-stride variability and its temporal organization in children J. Appl. Physiol. 86 1040–7
Hayano J, Yamasaki F, Sakata S, Okada A, Mukai S and Fujinami T 1997 Spectral characteristics of ventricular response to atrial fibrillation Am. J. Physiol. 273 H2811–6
Hermán P, Kocsis L and Eke A 2001 Fractal branching pattern in the pial vasculature in the cat J. Cereb. Blood Flow Metab. 21 741–54
Hon E H and Lee S T 1965 Electronic evaluations of the fetal heart rate patterns preceding fetal death: further observations Am. J. Obstet. Gynecol. 87 817–26
Hoshikawa Y and Yamamoto Y 1997 Effects of Stroop color-word conflict test on the autonomic nervous system responses Am. J. Physiol. 272 H1113–21
Hughson R L, Maillet A, Dureau G, Yamamoto Y and Gharib C 1995 Spectral analysis of blood pressure variability in heart transplant patients Hypertension 25 643–50
Huikuri H V, Mäkikallio T H, Airaksinen K E, Seppänen T, Puukka P, Räihä I J and Sourander L B 1998 Power-law relationship of heart rate variability as a predictor of mortality in the elderly Circulation 97 2031–6
Huikuri H V, Mäkikallio T H, Peng C K, Goldberger A L, Hintze U and Moller M 2000 Fractal correlation properties of R-R interval dynamics and mortality in patients with depressed left ventricular function after an acute myocardial infarction Circulation 101 47–53
Huikuri H V, Poutiainen A M, Mäkikallio T H, Koistinen M J, Airaksinen K E, Mitrani R D, Myerburg R J and Castellanos A 1999 Dynamic behavior and autonomic regulation of ectopic atrial pacemakers Circulation 100 1416–22
Hurst H 1951 Long-term storage capacity of reservoirs Trans. Am. Soc. Civ. Eng. 116 770–808
Ivanov P, Amaral L, Goldberger A, Havlin S, Rosenblum M, Struzik Z and Stanley H 1999 Multifractality in human heartbeat dynamics Nature 399 461–5
Ivanov P, Amaral L, Goldberger A and Stanley H 1998 Stochastic feedback and the resolution of biological rhythms Europhys. Lett. 43 363–8
Jones C, Lonergan G and Mainwaring D 1996 Wavelet packet computation of the Hurst exponent J. Phys. A: Gen. Phys. 29 2509–27
Lombardi F, Sandrone G, Mortara A, Torzillo D, La Rovere M T, Signorini M G, Cerutti S and Malliani A 1996 Linear and nonlinear dynamics of heart rate variability after acute myocardial infarction with normal and reduced left ventricular ejection fraction Am. J. Cardiol. 77 1283–8
Lucy S D, Hughson R L, Kowalchuk J M, Paterson D H and Cunningham D A 2000 Body position and cardiac dynamic and chronotropic responses to steady-state isocapnic hypoxaemia in humans Exp. Physiol. 85 227–37
Mäkikallio T H, Høiber S, Køber L, Torp-Pedersen C, Peng C K, Goldberger A L and Huikuri H V 1999 Fractal analysis of heart rate dynamics as a predictor of mortality in patients with depressed left ventricular function after acute myocardial infarction Am. J. Cardiol. 83 836–9

Mäkikallio T H, Ristimäe T, Airaksinen K E, Peng C K, Goldberger A L and Huikuri H V 1998 Heart rate dynamics in patients with stable angina pectoris and utility of fractal and complexity measures Am. J. Cardiol. 81 27–31
Mäkikallio T H, Seppänen T, Airaksinen K E, Koistinen J, Tulppo M P, Peng C K, Goldberger A L and Huikuri H V 1997 Dynamic analysis of heart rate may predict subsequent ventricular tachycardia after myocardial infarction Am. J. Cardiol. 80 779–83
Mandelbrot B 1982 The Fractal Geometry of Nature (New York: W H Freeman)
Mandelbrot B 1985 Self-affine fractals and fractal dimension Phys. Scripta 32 257–60
Mandelbrot B B and Wallis J R 1969 Some long-run properties of geophysical records Water Resources Research 5 321–40
Nonnenmacher T, Losa G and Weibel E 1994 Fractals in Biology and in Medicine (Basel: Birkhäuser)
Peitgen H-O, Jürgens H and Saupe D 1992 Chaos and Fractals: New Frontiers of Science (New York: Springer)
Peng C K, Havlin S, Hausdorff J M, Mietus J E, Stanley H E and Goldberger A L 1995a Fractal mechanisms and heart rate dynamics: long-range correlations and their breakdown with disease J. Electrocardiol. 28 59–65
Peng C-K, Buldyrev S, Havlin S, Simons M, Stanley H and Goldberger A 1994 Mosaic organization of DNA nucleotides Phys. Rev. E 49 1685–9
Peng C-K, Havlin S, Stanley H and Goldberger A 1995b Quantification of scaling exponents and crossover phenomena in nonstationary heartbeat time series Chaos 5 82–7
Peterka R 2000 Postural control model interpretation of stabilogram diffusion analysis Biol. Cybern. 82 335–43
Pikkujämsä S M, Mäkikallio T H, Sourander L B, Räihä I J, Puukka P, Skyttä J, Peng C K, Goldberger A L and Huikuri H V 1999 Cardiac interbeat interval dynamics from childhood to senescence: comparison of conventional and new measures based on fractals and chaos theory Circulation 100 393–9
Raymond G M and Bassingthwaighte J B 1998 Deriving dispersional and scaled windowed variance analyses using the correlation function of discrete fractional Gaussian noise Physica A 265 85–96
Riley M A, Wong S, Mitra S and Turvey M T 1997 Common effects of touch and vision on postural parameters Exp. Brain Res. 117 165–70
Rossitti S and Stephensen H 1994 Temporal heterogeneity of the blood flow velocity at the middle cerebral artery in the normal human characterized by fractal analysis Acta Physiol. Scand. 151 191–8
Sakata S, Hayano J, Mukai S, Okada A and Fujinami T 1999 Aging and spectral characteristics of the nonharmonic component of 24-h heart rate variability Am. J. Physiol. 276 R1724–31
Satoh K, Koh J and Kosaka Y 1999 Effects of nitroglycerin on fractal features of short-term heart rate and blood pressure variability J. Anesth. 13 71–6
Schepers H E, van Beek J H G M and Bassingthwaighte J B 1992 Four methods to estimate the fractal dimension from self-affine signals IEEE Eng. Med. Biol. 11 57–71
Schlesinger M F 1987 Fractal time and 1/f noise in complex systems Ann. N.Y. Acad. Sci. 504 214–28
Serrador J M, Finlayson H C and Hughson R L 1999 Physical activity is a major contributor to the ultra low frequency components of heart rate variability Heart (Br. Card. Soc. Online) 82 e9
Simonsen I and Hansen A 1998 Determination of the Hurst exponent by use of wavelet transforms Phys. Rev. E 58 2779–87
Szeto H H, Cheng P Y, Decena J A, Cheng Y, Wu D L and Dwyer G 1992 Fractal properties in fetal breathing dynamics Am. J. Physiol. 263 R141–7
Taylor F 1994 Principles of Signals and Systems (McGraw-Hill Series in Electrical and Computer Engineering: Communications and Signal Processing) (New York: McGraw-Hill)
Togo F and Yamamoto Y 2000 Decreased fractal component of human heart rate variability during non-REM sleep Am. J. Physiol. (Heart Circ. Physiol.) 280 H17–21
van Beek J, Roger S and Bassingthwaighte J 1989 Regional myocardial flow heterogeneity explained with fractal networks Am. J. Physiol. 257 1670–80
Weitkunat R (ed) 1991 Digital Biosignal Processing (Amsterdam: Elsevier)
West B 1990 Fractal Physiology and Chaos in Medicine (Singapore: World Scientific)
Yamamoto Y and Hughson R L 1991 Coarse-graining spectral analysis: new method for studying heart rate variability J. Appl. Physiol. 71 1143–50
Yamamoto Y and Hughson R L 1993 Extracting fractal components from time series Physica D 68 250–64
Yamamoto Y, Hughson R L and Nakamura Y 1992 Autonomic nervous system responses to exercise in relation to ventilatory threshold Chest 101 206S–10S
Yu Z, Anh V and Wang B 2000 Correlation property of length sequences based on global structure of the complete genome Phys. Rev. E 63 1–8
Zhang R, Zuckerman J and Levine B 2000 Spontaneous fluctuations in cerebral blood flow: insights from extended-duration recordings in human Am. J. Physiol. (Heart Circ. Physiol.) 278 H1848–55
