IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 50, NO. 8, AUGUST 2005


General Smoothing Formulas for Markov-Modulated Poisson Observations

Robert J. Elliott and W. P. Malcolm

Abstract—In this paper, we compute general smoothing dynamics for partially observed dynamical systems generating Poisson observations. We consider two model classes, each a Markov-modulated Poisson process whose stochastic intensity depends upon the state of an unobserved Markov process. In one model class, the hidden state process is a continuously valued Itô process, which gives rise to a continuous sample-path stochastic intensity. In the other model class, the hidden state process is a continuous-time Markov chain, giving rise to a pure jump stochastic intensity. To compute filtered estimates of the state process, we establish dynamics whose solutions are unnormalized marginal probabilities; however, these dynamics include Lebesgue–Stieltjes stochastic integrals. By adapting the transformation techniques introduced by J. M. C. Clark, we compute filter dynamics which do not include these stochastic integrals. To construct smoothers, we exploit a duality between our forward and backward transformed dynamics and thereby completely avoid the technical complexities of backward-evolving stochastic integral equations. The general smoother dynamics we present can readily be applied to the specific smoothing algorithms referred to in the literature as fixed-point smoothing, fixed-lag smoothing, and fixed-interval smoothing. It is shown that there is a clear motivation to compute smoothers via transformation techniques similar to those presented by J. M. C. Clark; that is, our smoothers are easily obtained without recourse to two-sided stochastic integration. A computer simulation is included.

Index Terms—Filtering, martingales, Poisson processes, reference probability, smoothing.

I. INTRODUCTION

CURRENTLY, there is increasing interest in estimation using counting process observations. For example, counting processes arise quite naturally in optical signal processing, from the arrival processes generated by email traffic, and most recently in quantitative finance [31]. The filtering problem for (Poisson) counting process observations was first introduced by Snyder in [26]. Snyder's paper presented a stochastic integral equation whose solution is the normalized conditional probability of the state, given an observation history. The model considered was a continuous sample path

Manuscript received March 21, 2002; revised April 2, 2004 and February 14, 2005. Recommended by Associate Editor C. D. Charalambous. The National ICT Australia is funded by the Australian Government’s Department of Communications, Information Technology, and the Arts and the Australian Research Council through Backing Australia’s Ability and the ICT Center of Excellence Program. R. J. Elliott is with the Haskayne School of Business, the University of Calgary, Calgary, AB T2N 1N4, Canada (e-mail: [email protected]). W. P. Malcolm is with the National ICT Australia (NICTA), Canberra ACT 2601 (e-mail: [email protected]). Digital Object Identifier 10.1109/TAC.2005.852565

stochastic intensity influenced by a diffusion process. To develop his equation, Snyder first appealed to the Bartlett–Moyal theorem [21], computing a stochastic integral equation satisfied by the conditional characteristic function of the state, given the observation history. Taking the inverse Fourier transform of these dynamics, one recovers the so called Snyder equation. In [25], a different approach was taken to compute filters for counting process observations, depending primarily upon the innovations method. One potential numerical problem with the filters presented in [26] and [25] is the appearance of the reciprocal of the estimated intensity in the filter dynamics. This can indeed cause numerical difficulties. For example, a small magnitude for the estimated intensity can cause instability. Further, these dynamics exclude the possibility of the intensity taking the value zero. Moreover, since all smoothers depend upon their corresponding filters, these reciprocal terms appear again in the smoothers developed in both [27] and [25]. For this reason (and others), the smoothers in [27] and [25] are somewhat difficult to implement on a digital computer. To address these technical difficulties, we propose new smoothing schemes which contain no reciprocal terms, have linear dynamics and do not include stochastic integrations. Further, our filters and smoothers are easily implemented on a digital computer. The approach we take is a combination of the method of reference probability and an adaptation of the transformation technique presented in [3]. Consequently the observation processes appear in our new dynamics as random parameters, rather than as stochastic integrators. To compute smoothers, we exploit a duality between forward (in time) dynamics and backward (in time) dynamics, thereby eliminating the need to consider two-sided stochastic calculus. This technique was first presented in [18], then subsequently in [11]. This paper is organized as follows. 
In Section II, we define dynamical models for the Markov state processes and the Poisson observation processes. The method of reference probability is briefly described and Radon–Nikodym derivatives are defined. In Section III, we briefly recall some basic results in filtering with counting process observations. New filters and smoothers are computed in Sections IV and V for, respectively, continuous stochastic sample path intensities and pure jump stochastic sample path intensities. In Section VI, we compute discretization upper limits for filters and smoothers with pure jump stochastic sample path intensities. Finally, some numerical examples are given in Section VII.

II. SIGNAL MODELS AND REFERENCE PROBABILITY

All processes are defined on the probability space $(\Omega, \mathcal{F}, P)$.
A. Continuously Valued State Process Models

Here, we consider a scalar-valued(1) Markov state process $x = \{x_t,\ t \geq 0\}$. The process $x$ is a solution of the stochastic integral equation

$x_t = x_0 + \int_0^t f(x_s)\,ds + W_t.$  (1)

Here, $f$ denotes a drift function such that (1) has a strong solution. The process $W$ is a standard Wiener process.

(1) The contributions of this article are routinely extended to vector-valued processes.

B. Discrete State Process Models

Here, the signal process is a Markov process $X$ taking values in the discrete state-space

$L = \{e_1, e_2, \dots, e_n\}$  (2)

where $e_i$ denotes the $i$th unit column vector in $\mathbb{R}^n$. The dynamics for this Markov process are given by

$X_t = X_0 + \int_0^t A X_s\,ds + M_t$  (3)

where $M$ is a martingale and $A$ is a rate matrix.

C. Observation Processes

We suppose that either the state process $x$ or $X$ is observed through a univariate Poisson process $N$, whose stochastic intensities are defined, respectively, by

$\lambda_t = \lambda(x_t)$  (4)

or

$\lambda_t = \langle \lambda, X_t \rangle$  (5)

where $\lambda(\cdot)$ maps $\mathbb{R}$ to $(0,\infty)$ and $\lambda = (\lambda_1, \dots, \lambda_n)'$. The Doob–Meyer decompositions for $N$ have the form

$N_t = \int_0^t \lambda(x_s)\,ds + V_t$ (continuously valued state)
$N_t = \int_0^t \langle \lambda, X_s \rangle\,ds + V_t$ (finite valued state).  (6)

Here, the compensated processes $V$ are martingales. Our filtrations are the (completed) filtrations generated, respectively, by the state and observation processes

$\mathcal{F}_t = \sigma\{x_s : 0 \leq s \leq t\}$  (7)
$\mathcal{X}_t = \sigma\{X_s : 0 \leq s \leq t\}$  (8)
$\mathcal{Y}_t = \sigma\{N_s : 0 \leq s \leq t\}.$  (9)

Here, $x$ is a continuously valued process and $X$ is a finite state process. Note that, in the sequel, it will be clear by context which class of model is being considered.

D. Reference Probability

Using change of measure techniques is in effect a version of Bayes' theorem. We suppose that under a reference measure $\bar P$, the observation process $N$ is a canonical Poisson process with unit intensity and is independent of the state process. To define the "real world" probability $P$, we set

$\left.\dfrac{dP}{d\bar P}\right|_t = \Lambda_t.$  (10)

Dynamics for the process $\Lambda$, when $x$ satisfies (1), and when $X$ satisfies (3), are, respectively

$\Lambda_t = 1 + \int_0^t \Lambda_{s^-}\big(\lambda(x_s) - 1\big)\,(dN_s - ds)$  (11)

$\Lambda_t = 1 + \int_0^t \Lambda_{s^-}\big(\langle \lambda, X_s \rangle - 1\big)\,(dN_s - ds).$  (12)

Remark II.1: To compute our smoothers we will make use of the factorization $\Lambda_{0,T} = \Lambda_{0,t}\,\Lambda_{t,T}$, where, for example

$\Lambda_{t,T} = 1 + \int_t^T \Lambda_{t,s^-}\big(\lambda(x_s) - 1\big)\,(dN_s - ds).$  (13)

Notation: Suppose $\phi$ is any adapted process and we wish to estimate $E[\phi_t \mid \mathcal{Y}_t]$. Using a form of Bayes' rule [6]

$E[\phi_t \mid \mathcal{Y}_t] = \dfrac{\bar E[\Lambda_t \phi_t \mid \mathcal{Y}_t]}{\bar E[\Lambda_t \mid \mathcal{Y}_t]}.$  (14)

III. EXTANT FILTER DYNAMICS

To provide some historical perspective and to give an example of the technicalities arising in counting process state estimation, we first recall some early fundamental results. We start with the Snyder equation.

Theorem 1 (Snyder, 1972): Suppose a process $x$ satisfies the dynamics (1) and a Poisson process $N$ is observed whose stochastic intensity is given by

$\lambda_t = \lambda(x_t)$  (15)

where $\lambda(\cdot)$ maps the space $\mathbb{R}$ to $(0,\infty)$. The normalized probability of the state process, conditioned upon an observation history, satisfies the stochastic integral equation

$dp(x,t) = L^* p(x,t)\,dt + p(x,t)\,\dfrac{\lambda(x) - \hat\lambda_t}{\hat\lambda_t}\,(dN_t - \hat\lambda_t\,dt).$  (16)

Here, $L^*$ denotes the forward Kolmogorov partial differential operator for the diffusion process $x$ and the quantity $\hat\lambda_t$ is a conditional mean estimate, $\hat\lambda_t = E[\lambda(x_t) \mid \mathcal{Y}_t]$.
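The Markov-modulated observation models above are straightforward to simulate, because the intensity is constant between switches of the hidden chain. The following sketch (the two-state restriction, rates, and variable names are our own assumptions for illustration, not values from the paper) draws exponential holding times for the chain and Poisson counts segment by segment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state example parameters (assumptions, not from the paper).
lam = np.array([10.0, 20.0])   # Poisson intensity in each chain state
switch_rate = 0.5              # rate of switching between the two states

def simulate_mmpp(T):
    """Simulate a two-state Markov chain and the Poisson process it modulates."""
    t, state = 0.0, 0
    seg_states, jumps = [0], []
    while t < T:
        dwell = rng.exponential(1.0 / switch_rate)
        seg_end = min(t + dwell, T)
        # On [t, seg_end) the intensity is constant, so draw arrivals directly.
        n = rng.poisson(lam[state] * (seg_end - t))
        jumps.extend(np.sort(rng.uniform(t, seg_end, n)))
        t, state = seg_end, 1 - state
        seg_states.append(state)
    return np.array(jumps), seg_states

jumps, states = simulate_mmpp(5.0)
```

With intensities of 10 and 20, a horizon of 5 time units produces on the order of 50–100 observation jumps, sorted in time.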


It is clear that the dynamics at (16) have a singularity at $\hat\lambda_t = 0$; further, the difficulty of these nonlinear dynamics is compounded by the inclusion of $\hat\lambda_t$ in the martingale term. In a subsequent paper (see [27]), Snyder developed smoother dynamics for all three classes of smoother, fixed point, fixed lag and fixed interval, based upon the model just described. However, every smoother depends upon its corresponding filter, and in this case the complexities in the dynamics at (16) were inherited by each of the smoothers in [27]. A similar situation occurs if one considers a filter (for the same model) developed via the innovations method. In this scenario, the state estimation filter has dynamics (17). Here, $\hat x_t = E[x_t \mid \mathcal{Y}_t]$. Clearly, these dynamics are complicated and are not amenable to implementation on a digital computer. Further, computing smoothers based on these dynamics compounds the technical difficulties yet again (see [25]). Other filters for counting process observations have appeared in the literature; for example, see [2], [10], [14], [24], [28], and [29]. The reciprocal terms in the dynamics at (16) and (17) can be eliminated by choosing to work with unnormalized probabilities. This approach has been taken, for example, in [14]. However, all counting process filters in the literature (for the models we consider) include stochastic integrals. This means that computing smoothers, where the corresponding filter includes stochastic integration, will require backward stochastic integrals, and the resulting smoothers will necessarily include stochastic integrals. An example of this approach can be found in [24]. The main contribution of this paper is to first develop filters which do not include stochastic integrals, then to use these filters to compute our smoothers. Using this approach, one easily obtains smoothing schemes without recourse to the two-sided calculus, and the resulting smoothing schemes have numerical benefits.

Remark III.1: A convention in the literature for continuous-time filters not including stochastic integrals is to refer to such filters as robust. In this context, the meaning of the term robust filter is a filter which evaluates an expectation whose dependence upon the observation sample path is continuous in some sense.

IV. NEW SMOOTHING FILTERS (CONTINUOUS-STATE PROCESSES)

A. Filter Dynamics

Theorem 2: Suppose the process $x$ satisfies dynamics given by (1). Suppose a Poisson process $N$ is observed whose intensity model has the form $\lambda_t = \lambda(x_t)$, with $\lambda(\cdot) > 0$. Further, suppose there exists an unnormalized density function $q(x,t)$, such that

$\bar E[\Lambda_t\,g(x_t) \mid \mathcal{Y}_t] = \int_{\mathbb{R}} g(x)\,q(x,t)\,dx.$  (18)

Then, the process $q$ has dynamics

$q(x,t) = q(x,0) + \int_0^t L^* q(x,s)\,ds + \int_0^t \big(\lambda(x) - 1\big)\,q(x,s^-)\,(dN_s - ds).$  (19)

Here, $L^*$ denotes the partial differential operator

$L^*\phi(x) = -\dfrac{\partial}{\partial x}\big(f(x)\phi(x)\big) + \dfrac{1}{2}\dfrac{\partial^2 \phi(x)}{\partial x^2}.$  (20)

A proof of Theorem 2 is given in the Appendix.

B. Robust Filters

In this section, we eliminate the stochastic integral in the dynamics (19) by adapting the transformation technique presented in [3]. For what follows, we will need a general form of the Itô rule.

Theorem 3 (Itô–Doleans-Dade–Meyer): Suppose $Z$ is a local semimartingale and $F$ is a twice continuously differentiable function. Then $F(Z)$ is again a semimartingale, with Doob–Meyer decomposition (21). Here, $Z^c$ denotes the continuous component of the process $Z$. A proof of Theorem 3 is given in [6].

We introduce the scalar-valued function

$\beta(x,t) = \lambda(x)^{N_t}\exp\big\{(1 - \lambda(x))\,t\big\}.$  (22)

The process $\beta$ satisfies the stochastic integral equation

$\beta(x,t) = 1 + \int_0^t \beta(x,s^-)\big(\lambda(x) - 1\big)\,(dN_s - ds).$  (23)

Lemma 1: The process $\bar q(x,t) = \beta(x,t)^{-1}q(x,t)$ satisfies the stochastic integral equation (24).

Proof: To establish the dynamics (24), we apply the Itô formula to the processes at (25) and (26).
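The gauge function underlying this transformation can be checked pathwise. Assuming the form $\beta(t) = \lambda^{N_t}\exp\{(1-\lambda)t\}$ (our reading of the scalar-valued function introduced above, for a fixed state value), $\beta$ decays exponentially between observation jumps and is multiplied by $\lambda$ at each jump, which is exactly the piecewise solution of $d\beta = \beta_{-}(\lambda - 1)(dN - dt)$:

```python
import numpy as np

lam = 3.0                       # hypothetical fixed intensity value
jump_times = [0.4, 1.1, 1.7]    # hypothetical observation jump times
T = 2.0

def beta_closed_form(t):
    """Closed form: lam**N_t * exp((1 - lam) * t)."""
    n = sum(1 for s in jump_times if s <= t)
    return lam**n * np.exp((1.0 - lam) * t)

# Reproduce beta by piecewise integration: deterministic exponential flow
# between jumps, multiplication by lam at each jump.
t, beta = 0.0, 1.0
for s in jump_times:
    beta *= np.exp((1.0 - lam) * (s - t))   # flow up to the next jump
    beta *= lam                             # jump update: beta -> lam * beta
    t = s
beta *= np.exp((1.0 - lam) * (T - t))       # flow to the horizon
```

The two computations agree to machine precision, confirming that the observation path enters only through the jump count and the elapsed time.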


Note that the identities (27), (28), and (29) hold. Now, consider the remaining difference term; computing it directly gives (30). Combining the terms at (27)–(30), we see (31).

Consider the nonstochastic parabolic partial differential equation

$\dfrac{\partial \bar q(x,t)}{\partial t} = \beta(x,t)^{-1}\,L^*\big(\beta(x,t)\,\bar q(x,t)\big).$  (32)

Equation (32) has a unique continuous solution which depends upon the observation process only through the function $\beta$.

Theorem 4: The unique solution of the nonlinear stochastic integral equation (33) is given by (34).

Proof: To establish Theorem 4, we apply the product rule to the process $\beta\bar q$. As $\bar q$ involves no stochastic integral, no Itô correction term appears. Noting that the relevant factors are continuous functions of time between jumps, it follows that (35). That is, $\beta\bar q$ is a solution of (33). Solutions of this equation are unique, so the result follows.

C. Smoothing Scheme

In smoothing, we consider any time $0 \leq t \leq T$ and suppose the observations are given up to time $T$. We wish to compute $E[g(x_t) \mid \mathcal{Y}_T]$ for any measurable test function $g$. By the Bayes' theorem

$E[g(x_t) \mid \mathcal{Y}_T] = \dfrac{\bar E[\Lambda_{0,T}\,g(x_t) \mid \mathcal{Y}_T]}{\bar E[\Lambda_{0,T} \mid \mathcal{Y}_T]}.$  (36)

Now, using the factorization of Remark II.1, (37). Consider the expectation of $\Lambda_{t,T}$ conditioned on the observations and the state at time $t$. As $x$ is a Markov process, this equals (38). For $0 \leq t \leq T$, write (39).

Theorem 5: The function (40) is independent of time.

Proof: Computing the expectation directly gives (41) (as $N$ has independent increments under $\bar P$). However, this quantity is independent of $t$, and the result follows.

Recall the quantities defined above, and define (42). Then (43).
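The time-invariance property of Theorem 5 has a familiar discrete-time analogue: in a hidden Markov model, the inner product of the forward (unnormalized) filter and the backward function equals the total observation likelihood at every time index. A small sketch (the transition matrix and per-step observation likelihoods are hypothetical):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # P[i, j] = prob of moving from state i to j
obs_lik = np.array([[0.7, 0.2],
                    [0.3, 0.8],
                    [0.7, 0.2]])    # obs_lik[k, i] = likelihood of observation k in state i
pi0 = np.array([0.5, 0.5])          # initial distribution

K = len(obs_lik)
# Forward pass: alpha_k(i) = P(obs_{0..k}, state_k = i), unnormalized.
alpha = [pi0 * obs_lik[0]]
for k in range(1, K):
    alpha.append((alpha[-1] @ P) * obs_lik[k])
# Backward pass: beta_k(i) = P(obs_{k+1..K-1} | state_k = i), with beta_{K-1} = 1.
beta = [np.ones(2)]
for k in range(K - 2, -1, -1):
    beta.insert(0, P @ (obs_lik[k + 1] * beta[0]))

# The duality: <alpha_k, beta_k> is the same number for every k.
products = [alpha[k] @ beta[k] for k in range(K)]
```

Every entry of `products` equals the total likelihood of the observation sequence, regardless of the index at which the forward and backward quantities are paired.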

Corollary 1: The process $v$ satisfies the backward parabolic equation (44), with the terminal condition $v(x,T) = 1$ for all $x$.

Proof: The nonstochastic parabolic PDE at (32) may be written as (45). Therefore, (46) is a backward parabolic equation for $v$ with coefficients parameterised by the observations. The process $v$ has terminal condition $v(x,T) = 1$ for all $x$, and the result is proved.

Theorem 6: Suppose a Poisson process $N$ has an intensity

$\lambda_t = \lambda(x_t)$  (47)

where the process $x$ satisfies the dynamics given by (1). For any $0 \leq t \leq T$, the conditional smoothed density of $x_t$ given $\mathcal{Y}_T$ is (48).

Proof: For any measurable test function $g$, (49). As in the proof of Theorem 5, (50), by definition of the processes above. Similarly (by Pardoux [24]), (51). Finally, noting the preceding equalities, the result follows.

D. Discrete-Time Smoothers

To compute practical forms of the dynamics given by (32) and (44), we compute time-domain discretizations of these dynamics. Consider a partition of the interval $[0,T]$, with times

$0 = t_0 < t_1 < \cdots < t_K = T.$  (52)

Here, we write $\Delta_k = t_{k+1} - t_k$ for the $k$th grid step. The nonstochastic parabolic PDE at (32) may be written as (53). At two sampling instants $t_k$ and $t_{k+1}$, we make the following approximation: (54). To simplify our notation, write $\phi_k$ for $\phi(t_k)$, for any time-dependent quantity $\phi$. Recalling the definition of $\beta$, we see that the discrete-time dynamics for the forward process are (55). Similarly, one can compute discrete-time dynamics for the backward process $v$; these dynamics are (56). The smoothed estimate is then given by (57). To compute estimated smoothed probabilities on an observation partition, the recursive dynamics (55) and (56) can be used to evaluate the right-hand side of (57).

Remark IV.1: The discrete-time dynamics given at (55) and (56) provide a means by which smoothed estimates may be calculated without stochastic integrations. Using these dynamics, one must also discretize a compact set in $\mathbb{R}$ and apply a numerical technique to approximate the partial derivatives arising from the operators $L$ and $L^*$.
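The spatial discretization asked for in Remark IV.1 can be sketched with central differences. Assuming a drift $f(x) = -x$ (a hypothetical choice, not from the paper), the matrix version of the generator $L g = f g' + \tfrac12 g''$ can be built on a uniform grid; with this stencil, the discrete formal adjoint $L^*$ is (up to boundary rows) simply the matrix transpose, mirroring the integration-by-parts relation $\langle Lg, p\rangle = \langle g, L^*p\rangle$:

```python
import numpy as np

n, h = 201, 0.05                     # grid size and spacing (assumptions)
x = (np.arange(n) - n // 2) * h      # uniform grid centred at 0
f = -x                               # assumed drift f(x) = -x

def generator_matrix():
    """Central-difference matrix for L g = f g' + 0.5 g'' (interior rows only)."""
    L = np.zeros((n, n))
    for i in range(1, n - 1):
        L[i, i - 1] = -f[i] / (2 * h) + 0.5 / h**2
        L[i, i]     = -1.0 / h**2
        L[i, i + 1] =  f[i] / (2 * h) + 0.5 / h**2
    return L

L = generator_matrix()
Lstar = L.T                          # discrete formal adjoint: -(f p)' + 0.5 p''

g = np.exp(-x**2)                    # sample test function
p = np.exp(-(x - 0.5)**2)            # sample density-like function
lhs = (L @ g) @ p                    # <L g, p> on the grid
rhs = g @ (Lstar @ p)                # <g, L* p> on the grid
```

The two grid inner products agree exactly by linear algebra, which is the discrete counterpart of the adjoint relation used throughout Section IV.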

V. NEW SMOOTHING FILTERS (DISCRETE-STATE PROCESSES)

A. Filter Dynamics

Theorem 7: Suppose the process $X$ satisfies dynamics given by (3). Suppose a Poisson process $N$ is observed whose intensity model has the form

$\lambda_t = \langle \lambda, X_t \rangle.$  (58)

The unnormalized probability vector $q_t$, with $\langle q_t, e_i \rangle = \bar E[\Lambda_t \langle X_t, e_i \rangle \mid \mathcal{Y}_t]$, satisfies the stochastic integral equation

$q_t = q_0 + \int_0^t A q_s\,ds + \int_0^t \big(\mathrm{diag}(\lambda) - I\big)\,q_{s^-}\,(dN_s - ds).$  (59)

A proof of Theorem 7 is given in Appendix II. To determine the corresponding normalized filter probability for the dynamics at (59), one computes, for example

$P(X_t = e_i \mid \mathcal{Y}_t) = \dfrac{\langle q_t, e_i \rangle}{\langle q_t, \mathbf{1} \rangle}.$  (60)

B. Robust Filters

Definition 1: Define a matrix-valued stochastic process $\Gamma$, where

$\Gamma_t = \mathrm{diag}\big(\lambda_i^{N_t}\exp\{(1-\lambda_i)t\}\big), \quad i = 1,\dots,n$  (61)

with $\Gamma_0 = I$. It was shown in [17] that the transformed process $\bar q_t = \Gamma_t^{-1} q_t$ satisfies the linear ordinary differential equation

$\dfrac{d\bar q_t}{dt} = \Gamma_t^{-1} A\,\Gamma_t\,\bar q_t.$  (62)

The importance of this result is that filtered estimates of the state process can be obtained without recourse to stochastic integration with respect to the Poisson process $N$. This can also be an advantage numerically (see [19]).

Lemma 2: The quantity (63) defines a locally Skorokhod-continuous version of the expectation at (59). Lemma 2 is established in [19].

C. Smoothing Scheme

For smoothed state estimates, we wish to evaluate the expectation $E[\langle X_t, e_i \rangle \mid \mathcal{Y}_T]$, where $0 \leq t \leq T$. By Bayes' rule, (64). Consider the numerator of (64): (65). The inner expectation in the last line is (66), because $X$ is a Markov process under $\bar P$. In particular, consider the process $v$, where (67). As a consequence, from (65), we wish to determine (68). Further, the normalized smoothed estimate of $\langle X_t, e_i \rangle$ is then (69). The process $v$ incorporates the extra information obtained from observations between $t$ and $T$. Now, from (65) and (69), (70). The calculation at (70) shows that these processes and, hence, their inner product, are independent of time, so (71).
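Under our reading of Definition 1, with $\Gamma_t = \mathrm{diag}(\lambda_i^{N_t} e^{(1-\lambda_i)t})$, the transformed vector evolves by the ordinary differential equation $\bar q' = \Gamma^{-1}A\Gamma\,\bar q$, in which the observations enter only through the jump count $N_t$. A forward-Euler sketch (all numerical values below are assumptions for illustration):

```python
import numpy as np

lam = np.array([2.0, 5.0])          # hypothetical intensities
A = np.array([[-1.0, 1.0],
              [1.0, -1.0]])         # hypothetical rate matrix, columns sum to zero
jump_times = [0.3, 0.5, 0.9]        # hypothetical observed Poisson jump times
T, dt = 1.0, 1e-4

def gamma(t, n):
    """Transformation matrix: diag(lam_i**N_t * exp((1 - lam_i) * t))."""
    return np.diag(lam**n * np.exp((1.0 - lam) * t))

qbar = np.array([0.5, 0.5])
t, n = 0.0, 0
events = iter(jump_times + [np.inf])
next_jump = next(events)
while t < T:
    if t >= next_jump:
        n += 1                       # the observation enters only via the count n
        next_jump = next(events)
    G = gamma(t, n)
    # Ordinary Euler step -- no stochastic integral anywhere.
    qbar = qbar + dt * (np.linalg.inv(G) @ A @ G @ qbar)
    t += dt

q = gamma(T, n) @ qbar               # undo the transformation
posterior = q / q.sum()              # normalized filtered probabilities
```

Because the step matrix has nonnegative off-diagonal contributions and the diagonal stays positive at this step size, the recursion preserves positivity, and normalization recovers a valid probability vector.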

Theorem 8: Suppose the process $X$ satisfies the dynamics given by (3). Suppose the Poisson process $N$ has an intensity

$\lambda_t = \langle \lambda, X_t \rangle.$  (72)

For any $0 \leq t \leq T$, the conditional probability of $X_t = e_i$, given the observations $\mathcal{Y}_T$, is given by (73). Here, the forward process satisfies the (forward-evolving) linear ordinary differential equation (74) and the backward process satisfies the (backward-evolving) linear ordinary differential equation (75).

Proof: To prove Theorem 8, we wish to establish dynamics for the backward process. What we must do is find a process such that the following duality holds for all $0 \leq t \leq T$: (76). From (76) and the fact that the process is time invariant [see (71)], we see that (77). Therefore, since this holds for every $t$, it follows that (78). Further, the backward process satisfies the terminal condition $v_T = \mathbf{1}$, and the result follows.

D. Discrete-Time Smoother

To obtain an approximation for the smoothed probabilities, we discretize the dynamics for the forward and backward processes over an increasing partition of $[0,T]$, as before. Write (79), (80). Writing (75) as (81), we can approximate (82). Recalling the definition of the transformation $\Gamma$, discrete-time dynamics for $v$ are recovered by multiplying the previous equation (on the left) by the appropriate matrix, resulting in (83). Similarly, one can compute discrete-time dynamics for the forward filter process $q$: (84). Recalling (73) in Theorem 8, we see that the estimated smoother probabilities are computed directly from the two recursions (84) and (83). We stress again that the observations now appear as parameters in the matrices of (83) and (84), rather than as integrators in martingale terms of equations such as (116).

VI. DISCRETIZATION LIMITS

In this section, we demonstrate the numerical benefits of filter and smoother dynamics which do not include stochastic integrations. We start with a filter for a Poisson intensity model whose values depend upon the state of a continuous-time Markov chain. To denote the mesh of the time partition, we write

$\Delta = \max_k\,(t_{k+1} - t_k).$  (85)

Due to the nature of Markov-modulated Poisson processes, it is rarely the case that one can consider a regular partition in time when discretizing filters, as one must preserve the counting process property; that is, at most one jump event can occur in any subinterval within a partition. Because of this constraint, it is quite natural to consider irregular time partitions, so it is also natural to ask how large any time step might be. However, some care is needed in discretizing continuous-time stochastic integral equations. In particular, it is known that continuous-time filters can be discretized in such a way as to produce meaningless negative probabilities; see, for example, [15, p. 448].

Definition 2: A discrete-time recursion for an unnormalized probability vector $q_k$ is said to be stable on a partition if, for each $k$ and each $i$, the following inequality holds:

$\langle q_k, e_i \rangle \geq 0.$  (86)
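Stability in the sense of Definition 2 can be checked mechanically. Assuming an Euler-type inter-jump recursion of the form $q_{k+1} = (I + \Delta(A - \mathrm{diag}(\lambda) + I))\,q_k$ (our reading of the discretized robust filter, not the authors' exact recursion), nonnegativity of $q$ is preserved exactly when every entry of the step matrix is nonnegative, which yields a deterministic bound on the grid step:

```python
import numpy as np

lam = np.array([10.0, 20.0])        # example intensities from Section VII
A = np.array([[-1.0, 1.0],
              [1.0, -1.0]])         # hypothetical rate matrix

def step_matrix(delta):
    """Assumed Euler step matrix between observation jumps."""
    return np.eye(2) + delta * (A - np.diag(lam) + np.eye(2))

# Entrywise nonnegativity of the step matrix holds iff
# delta <= 1 / max_i (lam_i - 1 - A_ii), a deterministic bound.
delta_max = 1.0 / np.max(lam - 1.0 - np.diag(A))
```

With these numbers the bound is $\Delta_{\max} = 0.05$: at the bound every entry of the step matrix is nonnegative, while any larger step drives a diagonal entry negative and so can produce negative "probabilities".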


Theorem 9: The recursion for the $q$-process given at (84), when defined on a partition, is stable in the sense of Definition 2, provided a maximum grid step $\Delta$ is chosen to satisfy the upper bound (87).

Remark VI.1: Note that the upper bound given by (87) is deterministic; that is, it does not depend upon any observation sample path.

Proof: Consider the $i$th component of the vector $q_k$. Without loss of generality we take $\langle q_k, e_i \rangle \geq 0$ for each $i$. Recalling the dynamics at (84), we see that (88). The stability condition given in Definition 2 requires that the left-hand side of (88) remain nonnegative, that is, (89). Simplifying this inequality, we get (90). Since the off-diagonal elements of the matrix $A$ are always nonnegative, the term concerning these elements in (90) is always nonnegative, that is, (91). So, to ensure the inequality at (90) is satisfied, we need only choose $\Delta$ such that the remaining quantity is nonnegative, that is, (92). The corresponding global upper limit for the maximum grid step $\Delta$ is, therefore, (93).

Corollary 2: The recursion for the $v$-process given at (83), when defined on a partition, is stable in the sense of Definition 2, provided a maximum grid step is chosen to satisfy the upper bound given at (87).

The proof of Corollary 2 is much the same as that for Theorem 9 and so is omitted. Given that both the dynamics at (84) and (83) meet the stability criterion for the same upper bound, it is immediate that the estimated smoother probabilities also remain nonnegative, provided this bound is observed.

VII. EXAMPLE

The smoothers presented in this paper are general and can be readily configured for standard special cases, such as fixed-point, fixed-lag and fixed-interval smoothers. To demonstrate the benefit of smoothing over filtering, we consider a pure jump stochastic Poisson intensity model and a fixed-interval smoother. The individual Poisson intensities are $\lambda_1 = 10$ and $\lambda_2 = 20$, and the rate matrix $A$ has the form (94).

In Fig. 1, we show three subfigures. In the first subfigure, a realization of the hidden Markov process is shown. It is this process that determines the Poisson intensities, where the chain in state 0 results in a Poisson intensity of 10, and the chain in state 1 results in a Poisson intensity of 20. The remaining two subfigures in Fig. 1 show the robust filter's estimated probabilities. In Fig. 2, we show the corresponding fixed-interval smoother for the same example.

VIII. CONCLUSION

In this paper, new filters and smoothers have been presented for systems with Markov-modulated Poisson process observations. These filters do not include stochastic integrations. The techniques used were the method of reference probability and an adaptation of the transformation techniques presented in [3]. It has been shown that, by using this approach, smoothers are easily computed without recourse to backward-evolving stochastic integrals. This is achieved by exploiting a duality between forward-evolving and backward-evolving nonstochastic dynamics.
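As an addendum to the example of Section VII, the full fixed-interval pipeline can be prototyped end to end. All numerical choices below (rate matrix entries, horizon, step size, and the Euler-type forward and backward recursions) are our own assumptions in the spirit of the paper, not the authors' exact scheme:

```python
import numpy as np

rng = np.random.default_rng(42)

lam = np.array([10.0, 20.0])        # intensities from Section VII
A = np.array([[-0.5, 0.5],
              [0.5, -0.5]])         # hypothetical rate matrix
T, dt = 4.0, 1e-3
K = int(T / dt)

# Simulate the chain and the modulated counting process on a fine grid.
state = np.zeros(K, dtype=int)
dN = np.zeros(K, dtype=int)
s = 0
for k in range(K):
    state[k] = s
    dN[k] = rng.random() < lam[s] * dt     # at most one event per slot
    if rng.random() < 0.5 * dt:            # switch with rate 0.5
        s = 1 - s

# Per-slot observation weight (gauge-style factor; assumed form).
G = lambda dn: lam**dn * np.exp((1.0 - lam) * dt)

# Forward unnormalized filter and backward dual, Euler-type recursions.
q = np.zeros((K, 2)); v = np.zeros((K, 2))
q[0] = [0.5, 0.5]
for k in range(K - 1):
    q[k + 1] = G(dN[k]) * (q[k] + dt * (A @ q[k]))
    q[k + 1] /= q[k + 1].sum()             # rescale to avoid underflow
v[K - 1] = [1.0, 1.0]
for k in range(K - 2, -1, -1):
    w = G(dN[k + 1]) * v[k + 1]
    v[k] = w + dt * (A.T @ w)
    v[k] /= v[k].sum()

# Fixed-interval smoothed probabilities: componentwise product, normalized.
smoothed = q * v
smoothed /= smoothed.sum(axis=1, keepdims=True)
```

The smoothed trajectory uses observations on both sides of each time point, which is what sharpens the state-probability estimates relative to the filter alone.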

Fig. 1. Filter-based estimated state probabilities.

Fig. 2. Smoother-based estimated state probabilities.

APPENDIX I
PROOF OF THEOREM 2

There are two parts to the proof of this theorem. In the first part (part a), we establish dynamics for a measure-valued process for a test function depending upon the state process $x$. In the second part of the proof (part b), we compute the density process corresponding to this measure-valued process.

A. Part a

Definition 3: Let $g$ be an arbitrary function of the state variable such that the relevant expectations exist. The unnormalized conditional measure of $g(x_t)$ given $\mathcal{Y}_t$, under the reference measure $\bar P$, is defined as

$\alpha_t(g) = \bar E[\Lambda_t\,g(x_t) \mid \mathcal{Y}_t].$  (95)

We first compute the semimartingale form of $g(x_t)$. The differential generator for the Itô diffusion (1) is

$L = f(x)\dfrac{\partial}{\partial x} + \dfrac{1}{2}\dfrac{\partial^2}{\partial x^2}$  (96)

where, for simplicity, we take $x$ to be a scalar-valued process. Using the Itô rule for $g(x_t)$ gives (97). Given the differential rule for products of semimartingales [2], and (97), the semimartingale form of $\Lambda_t g(x_t)$ is (98). From these calculations we have the semimartingale form of $\Lambda_t g(x_t)$, so all that is left to do, following [30], is to evaluate the five conditional expectations on the right-hand side of (99). Individually, these expectations are (100), (101), (102), and (103).

To explain the results of these expectations, it is sufficient to consider one example. In (103), the stochastic integral on the left-hand side can be approximated in the usual manner by a Riemann sum. Noting that the integrand is known at the left endpoint of each subinterval (the corresponding $\sigma$-algebra contains all the available information about the integrand), the expectation can be taken inside the integral: (104). This expectation is performed under the reference measure $\bar P$. Now, $N$ has independent increments under this measure, so the $\sigma$-algebras generated up to time $s$, for $s \leq t$, tell us nothing about future increments of $N$. Hence, we may equally condition on the larger $\sigma$-algebra, and by definition (105). For the existence of the result of (103), it is assumed that (106). Similar arguments explain the remaining expectations. Combining the expectations (100)–(103) gives the stochastic integral equation (107).

B. Part b

In part (a), we computed a stochastic integral equation for a measure-valued process, namely $\alpha_t(\cdot)$. Suppose the measure is given by a density $q$ so that

$\alpha_t(g) = \int_{\mathbb{R}} g(x)\,q(x,t)\,dx.$  (108)

For the implementation of filtering equations, it proves more convenient to work with stochastic differential equations for the corresponding density processes. Since (107) has been derived, one can write down the corresponding equation for the density process: (109). Here, $L^*$ denotes the formal adjoint of the operator $L$ (the form of $L$ is given in (96)). Further, since $g$ is an arbitrary test function, it follows from (109) that the unnormalized density process is given by the stochastic integral equation (110).

APPENDIX II
PROOF OF THEOREM 7

We wish to estimate $X_t$ given the observations $\mathcal{Y}_t$ of $N$. By Bayes' rule, (111). Note that $\langle X_t, \mathbf{1} \rangle = 1$, and so (112). That is, if we write

$q_t = \bar E[\Lambda_t X_t \mid \mathcal{Y}_t]$  (113)

then (114). To compute the expectation at (113), we first apply the product rule to determine the decomposition for the process $\Lambda_t X_t$: (115). By conditioning both sides of (115) on $\mathcal{Y}_t$ under the reference probability $\bar P$, it then follows that the process $q$ has the dynamics (116).

ACKNOWLEDGMENT

The second author would like to acknowledge the support of the National ICT Australia and of Dr. D. J. Daley for many helpful technical discussions concerning doubly stochastic Poisson processes.

REFERENCES

[1] P. Billingsley, Convergence of Probability Measures, 2nd ed. New York: Wiley, 1999.
[2] P. Bremaud, Point Processes and Queues: Martingale Dynamics, ser. Springer Series in Statistics. New York: Springer-Verlag, 1981.
[3] J. M. C. Clark, "The design of robust approximations to the stochastic differential equations for nonlinear filtering," in Communications Systems and Random Process Theory, J. K. Skwirzynski, Ed. Amsterdam, The Netherlands: Sijthoff and Noordhoff, 1978, pp. 721–734.


[4] D. Clements and B. D. O. Anderson, "A nonlinear fixed-lag smoother for finite-state Markov processes," IEEE Trans. Inf. Theory, vol. 21, no. 4, pp. 446–452, Jul. 1975.
[5] M. H. A. Davis, "On a multiplicative functional transformation arising in nonlinear filtering theory," Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, vol. 54, pp. 125–139, 1980.
[6] R. J. Elliott, Stochastic Calculus and Its Applications. New York: Springer-Verlag, 1982.
[7] ——, "New finite dimensional filters and smoothers for noisily observed Markov chains," IEEE Trans. Inf. Theory, vol. 39, no. 1, pp. 265–271, Jan. 1993.
[8] R. J. Elliott, L. Aggoun, and J. B. Moore, Hidden Markov Models: Estimation and Control, ser. Applications of Mathematics. New York: Springer-Verlag, 1995.
[9] G. Golub and C. van Loan, Matrix Computations. Baltimore, MD: Johns Hopkins Univ. Press, 1996.
[10] R. J. Elliott, "Filtering and control for point process observations," in Recent Advances in Stochastic Analysis, J. Baras and V. Mirelli, Eds. New York: Springer-Verlag, 1990, pp. 1–26.
[11] R. J. Elliott and W. P. Malcolm, "Robust smoother dynamics for Poisson processes driven by an Itô diffusion," in Proc. IEEE Conf. Decision and Control, vol. 1, Orlando, FL, Dec. 2001, pp. 376–381.
[12] J. Jacod and A. N. Shiryaev, Limit Theorems for Stochastic Processes, ser. Grundlehren der Mathematischen Wissenschaften. Berlin, Germany: Springer-Verlag, 1997.
[13] M. R. James, V. Krishnamurthy, and F. Le Gland, "Time discretization of continuous-time filters and smoothers for HMM parameter estimation," IEEE Trans. Inf. Theory, vol. 42, no. 2, pp. 596–605, Mar. 1996.
[14] W. Kliemann, W. Koch, and F. Marchetti, "On the unnormalized solution of the filtering problem with counting process observations," IEEE Trans. Inf. Theory, vol. 36, no. 6, pp. 1415–1425, Nov. 1990.
[15] P. E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations, ser. Applications of Mathematics. New York: Springer-Verlag, 1995.
[16] T. Lindvall, "Weak convergence of probability measures and random functions in the function space D[0, ∞)," J. Appl. Probab., vol. 10, pp. 109–121, 1973.
[17] W. P. Malcolm, Robust Filtering and Estimation With Poisson Observations, Ph.D. dissertation, Australian National Univ., Canberra, Australia, 1999.
[18] W. P. Malcolm and R. J. Elliott, "A general smoothing equation for Poisson observations," in Proc. IEEE Conf. Decision and Control, Phoenix, AZ, Dec. 1999, pp. 4106–4110.
[19] W. P. Malcolm, R. J. Elliott, and J. van der Hoek, "On the numerical stability of time-discretised state estimation via Clark transformations," in Proc. IEEE Conf. Decision and Control, Maui, HI, Dec. 2003, pp. 1406–1412.
[20] C. Moler and C. Van Loan, "Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later," SIAM Rev., vol. 45, no. 1, pp. 1–39, 2003.
[21] J. Moyal, "Stochastic processes and statistical physics," in Proc. Royal Statistical Soc., 1949, pp. 150–210.
[22] E. Pardoux, "Équations du filtrage non linéaire, de la prédiction et du lissage," Stochastics, vol. 6, pp. 193–231, 1982.
[23] E. Pardoux and P. Protter, "A two-sided stochastic integral and its calculus," Probab. Theory Related Fields, vol. 76, pp. 15–49, 1987.
[24] E. Pardoux, "Filtering of a diffusion process with Poisson-type observation," ser. Lecture Notes in Control and Information Sciences, 1977, pp. 510–518.
[25] M. H. A. Davis, A. Segall, and T. Kailath, "Nonlinear filtering with counting observations," IEEE Trans. Inf. Theory, vol. 21, no. 2, pp. 143–149, Mar. 1975.
[26] D. L. Snyder, "Filtering and detection for doubly stochastic Poisson processes," IEEE Trans. Inf. Theory, vol. IT-18, no. 1, pp. 91–102, Jan. 1972.
[27] ——, "Smoothing for doubly stochastic Poisson processes," IEEE Trans. Inf. Theory, vol. IT-18, no. 5, pp. 558–562, Sep. 1972.
[28] J. H. van Schuppen, "Filtering, prediction and smoothing for counting process observations, a martingale approach," SIAM J. Appl. Math., vol. 32, no. 2, pp. 552–570, 1977.
[29] D. R. Shin and E. I. Verriest, "The Zakai-type formulas for counting process observations," presented at the Conf. Information Sciences and Systems, Princeton, NJ, 1990.
[30] E. Wong and B. Hajek, Stochastic Processes in Engineering Systems. New York: Springer-Verlag, 1985.
[31] Y. Zeng, "A partially observed model for micromovement of asset prices with Bayes estimation via filtering," Math. Finance, vol. 13, no. 3, pp. 411–444, Jul. 2003.


Robert Elliott received the B.S. and M.S. degrees from Oxford University, Oxford, U.K., in 1961 and 1965, respectively, and the Ph.D. and D.Sc. degrees from Cambridge University, Cambridge, U.K., in 1965 and 1983, respectively. He has held positions at the University of Newcastle, U.K., Yale University, New Haven, CT, Oxford University, U.K., Warwick University, U.K., the University of Hull, U.K., the University of Alberta, Canada, and visiting positions in Toronto, Canada, Northwestern University, Evanston, IL, the University of Kentucky, Lexington, Brown University, Providence, RI, the Université de Paris VI and the Université de Paris IX, Paris, France, the Technical University of Denmark, Lyngby, the Hong Kong University of Science and Technology, the Australian National University, Canberra, and the University of Adelaide, Australia. Currently, he is the RBC Financial Group Professor of Finance at the University of Calgary, Calgary, AB, Canada, where he is also an Adjunct Professor in both the Department of Mathematics and the Department of Electrical Engineering. He has authored nine books and over 300 papers. His book with P. E. Kopp, Mathematics of Financial Markets (New York: Springer-Verlag, 1999), has been reprinted three times. The Hungarian edition was published in 2000 and a second edition appeared in November 2004. Springer-Verlag has contracted to publish his book Binomial Methods in Finance, written with J. van der Hoek. He has also worked in signal processing and has published the books Hidden Markov Models: Estimation and Control (with L. Aggoun and J. Moore) (New York: Springer-Verlag, 1995, reprinted 1997) and Measure Theory and Filtering (with L. Aggoun) (Cambridge, U.K.: Cambridge Univ. Press, 2004). He consults for banks and energy companies in Calgary.


W. P. Malcolm received the B.S. degree in applied physics from the Royal Melbourne Institute of Technology, Melbourne, Australia, in 1989, the M.S. degree in applied mathematics from the Flinders University of South Australia, in 1995, and the Ph.D. degree in electrical engineering from the Australian National University, Canberra, in 2000. The title of his Ph.D. dissertation was "Robust Filtering and Estimation with Poisson Observations," and it was supervised by Prof. R. J. Elliott, Dr. D. J. Daley, and Prof. M. R. James. From July 2003 until July 2004, he worked as a postdoctoral fellow with Prof. Robert Elliott at the Haskayne School of Business, the University of Calgary, Calgary, AB, Canada. Since August 2004, he has been with the National ICT Australia, Canberra, Australia. His research interests include filtering, estimation, stochastic control, and measures of risk in decision making with uncertainty; in particular, filtering and stochastic control with point process observations and marked point process observations.