STATISTICS, OPTIMIZATION AND INFORMATION COMPUTING
Stat., Optim. Inf. Comput., Vol. 3, September 2015, pp 259–275.
Published online in International Academic Press (www.IAPress.org)

Interpolation Problem for Stationary Sequences with Missing Observations

Mikhail Moklyachuk*, Maria Sidei
Department of Probability Theory, Statistics and Actuarial Mathematics, Taras Shevchenko National University of Kyiv, Ukraine

Received: 4 July 2015; Accepted: 16 July 2015
Editor: David G. Yu

Abstract

The problem of the mean-square optimal estimation of the linear functional

\[ A_s\xi = \sum_{l=0}^{s-1}\,\sum_{j=M_l}^{M_l+N_{l+1}} a(j)\xi(j), \qquad M_l = \sum_{k=0}^{l}(N_k+K_k), \quad N_0 = K_0 = 0, \]

which depends on the unknown values of a stochastic stationary sequence ξ(j), j ∈ Z, from observations of the sequence at points of time j ∈ Z\S, where \( S = \bigcup_{l=0}^{s-1}\{M_l, M_l+1, \dots, M_l+N_{l+1}\} \), is considered. Formulas for calculating the mean-square error and the spectral characteristic of the optimal linear estimate of the functional are derived under the condition of spectral certainty, where the spectral density of the sequence ξ(j) is exactly known. The minimax (robust) method of estimation is applied in the case where the spectral density is not known exactly, but sets of admissible spectral densities are given. Formulas that determine the least favourable spectral densities and the minimax spectral characteristics are derived for some special sets of admissible densities.

Keywords: Stationary sequence, mean square error, minimax-robust estimate, least favourable spectral density, minimax spectral characteristic

AMS 2010 subject classifications. Primary: 60G10, 60G25, 60G35; Secondary: 62M20, 93E10, 93E11
DOI: 10.19139/soic.v3i3.149

1. Introduction

The problem of estimation of the unknown values of stochastic processes is of constant interest in the theory of stochastic processes. The formulation of the interpolation, extrapolation and filtering problems for stationary stochastic sequences with known spectral densities, and their reduction to the corresponding problems of the theory of functions, belongs to A. N. Kolmogorov [18]. Effective methods for solving the estimation problems for stationary stochastic sequences and processes were developed by N. Wiener [37] and A. M. Yaglom [38, 39]. Further results are presented in the books by Yu. A. Rozanov [34] and E. J. Hannan [13]. The crucial assumption of most of the methods developed for estimating the unobserved values of stochastic processes is that the spectral densities of the processes involved are exactly known. In practice, however, complete information on the spectral densities is unavailable in most cases. In this situation one finds a parametric or nonparametric estimate of the unknown spectral density and then applies one of the traditional estimation methods, treating the selected density as the true one. This procedure can result in a significant increase of the estimation error, as K. S. Vastola and H. V. Poor [36] have demonstrated with the help of some examples. To avoid this effect one can search for estimates

* Correspondence to: Mikhail Moklyachuk (Email: [email protected]). Department of Probability Theory, Statistics and Actuarial Mathematics, Taras Shevchenko National University of Kyiv, Volodymyrska 64 Str., Kyiv 01601, Ukraine.

ISSN 2310-5070 (online), ISSN 2311-004X (print)
Copyright © 2015 International Academic Press


which are optimal for all densities from a certain class of admissible spectral densities. These estimates are called minimax since they minimize the maximum value of the error. The paper by Ulf Grenander [12] was the first one where this approach to the extrapolation problem for stationary processes was proposed. Several models of spectral uncertainty and minimax-robust methods of data processing can be found in the survey paper by S. A. Kassam and H. V. Poor [17]. J. Franke [8] and J. Franke and H. V. Poor [9] investigated the minimax extrapolation and filtering problems for stationary sequences with the help of convex optimization methods. This approach makes it possible to find equations that determine the least favourable spectral densities for different classes of densities. In the papers by M. P. Moklyachuk [26]-[29] the problems of extrapolation, interpolation and filtering for functionals which depend on the unknown values of stationary processes and sequences are investigated. The estimation problems for functionals which depend on the unknown values of multivariate stationary stochastic processes are the topic of the book by M. Moklyachuk and O. Masytka [30]. I. I. Dubovets'ka, O. Yu. Masyutka and M. P. Moklyachuk [3], I. I. Dubovets'ka and M. P. Moklyachuk [4]-[7], and I. I. Golichenko and M. P. Moklyachuk [11] investigate the interpolation, extrapolation and filtering problems for periodically correlated stochastic sequences. The papers by M. M. Luz and M. P. Moklyachuk [20]-[24] deal with the estimation problems for functionals which depend on the unknown values of stochastic sequences with stationary increments. Prediction of stationary processes with missing observations is investigated in the papers by P. Bondon [1, 2] and by Y. Kasahara, M. Pourahmadi and A. Inoue [16, 31].
In this article we consider the problem of the mean-square optimal estimation of the linear functional

\[ A_s\xi = \sum_{l=0}^{s-1}\,\sum_{j=M_l}^{M_l+N_{l+1}} a(j)\xi(j), \qquad M_l = \sum_{k=0}^{l}(N_k+K_k), \quad N_0 = K_0 = 0, \]

which depends on the unknown values of a stochastic stationary sequence ξ(j), j ∈ Z, from observations of the sequence at points of time j ∈ Z\S, where \( S = \bigcup_{l=0}^{s-1}\{M_l, M_l+1, \dots, M_l+N_{l+1}\} \). The problem is investigated in the case of spectral certainty, where the spectral density is exactly known, and in the case of spectral uncertainty, where the spectral density is unknown, but a class of admissible spectral densities is given.

2. The classical Hilbert space projection method of linear interpolation

Let ξ(j), j ∈ Z, be a (wide sense) stationary stochastic sequence. We consider ξ(j) as elements of the Hilbert space H = L2(Ω, F, P) of complex-valued random variables with zero first moment, Eξ = 0, finite second moment, E|ξ|² < ∞, and the inner product \( (\xi,\eta) = E\xi\overline{\eta} \). The correlation function \( R(k) = (\xi(j+k),\xi(j)) = E\xi(j+k)\overline{\xi(j)} \) of the stationary stochastic sequence ξ(j), j ∈ Z, admits the spectral decomposition [10]

\[ R(k) = \int_{-\pi}^{\pi} e^{ik\lambda}\, F(d\lambda), \]

where F(dλ) is the spectral measure of the sequence. We consider stationary stochastic sequences with absolutely continuous spectral measures and correlation functions of the form

\[ R(k) = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{ik\lambda} f(\lambda)\,d\lambda, \]

where f(λ) is the spectral density function of the sequence ξ(j), which is assumed to satisfy the minimality condition

\[ \int_{-\pi}^{\pi} f^{-1}(\lambda)\,d\lambda < \infty. \tag{1} \]

This condition is necessary and sufficient for error-free interpolation of the unknown values of the sequence to be impossible [34].
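The Fourier coefficients of the function f^{-1}(λ) play the central role in all the formulas below. The following sketch (our own illustration, not code from the paper; the density and the grid size are assumptions) approximates these coefficients numerically for the AR(1)-type density f(λ) = 1/|1 − αe^{-iλ}|² used in the examples, for which, with real α, r_0 = 1 + α², r_{±1} = −α and all other coefficients vanish.

```python
import numpy as np

def inv_density_fourier_coeff(m, alpha, n_grid=4096):
    # r_m = (1/2pi) * integral over [-pi, pi) of f^{-1}(lam) * exp(-i*m*lam) d lam,
    # approximated by the average over a uniform grid (exact for trigonometric
    # polynomials of low degree, up to floating-point error).
    lam = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    f_inv = np.abs(1.0 - alpha * np.exp(-1j * lam)) ** 2   # f^{-1} = |1 - a e^{-i lam}|^2
    return np.mean(f_inv * np.exp(-1j * m * lam))

alpha = 0.4
print(round(inv_density_fourier_coeff(0, alpha).real, 6))   # 1.16 = 1 + alpha^2
print(round(inv_density_fourier_coeff(1, alpha).real, 6))   # -0.4 = -alpha
print(round(abs(inv_density_fourier_coeff(3, alpha)), 8))   # 0.0
```

The same numerical recipe can be used for any density whose inverse is integrable, i.e. whenever the minimality condition (1) holds.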


The stationary stochastic sequence ξ(j), j ∈ Z, admits the spectral decomposition [10, 15]

\[ \xi(j) = \int_{-\pi}^{\pi} e^{ij\lambda}\, Z(d\lambda), \tag{2} \]

where Z(Δ) is the orthogonal stochastic measure of the sequence such that

\[ E Z(\Delta_1)\overline{Z(\Delta_2)} = F(\Delta_1\cap\Delta_2) = \frac{1}{2\pi}\int_{\Delta_1\cap\Delta_2} f(\lambda)\,d\lambda. \]

Consider the problem of the mean-square optimal estimation of the linear functional

\[ A_s\xi = \sum_{l=0}^{s-1}\,\sum_{j=M_l}^{M_l+N_{l+1}} a(j)\xi(j), \qquad M_l = \sum_{k=0}^{l}(N_k+K_k), \quad N_0 = K_0 = 0, \]

which depends on the unknown values of a stochastic stationary sequence ξ(j), j ∈ Z, from observations of the sequence at points of time j ∈ Z\S, where \( S = \bigcup_{l=0}^{s-1}\{M_l, M_l+1, \dots, M_l+N_{l+1}\} \).

It follows from the spectral decomposition (2) of the sequence ξ(j) that we can represent the functional A_sξ in the form

\[ A_s\xi = \int_{-\pi}^{\pi} A_s(e^{i\lambda})\, Z(d\lambda), \tag{3} \]

where

\[ A_s(e^{i\lambda}) = \sum_{l=0}^{s-1}\,\sum_{j=M_l}^{M_l+N_{l+1}} a(j) e^{ij\lambda}. \]

Denote by H^s(ξ) the subspace of the Hilbert space H = L2(Ω, F, P) generated by the elements {ξ(j) : j ∈ Z\S}. Let L2(f) be the Hilbert space of complex-valued functions that are square-integrable with respect to the measure whose density is f(λ). Denote by L2^s(f) the subspace of L2(f) generated by the functions {e^{ijλ}, j ∈ Z\S}. The mean-square optimal linear estimate Â_sξ of the functional A_sξ from observations of the sequence ξ(j) at points of time j ∈ Z\S is an element of H^s(ξ). It can be represented in the form

\[ \hat{A}_s\xi = \int_{-\pi}^{\pi} h(e^{i\lambda})\, Z(d\lambda), \tag{4} \]

where h(e^{iλ}) ∈ L2^s(f) is the spectral characteristic of the estimate Â_sξ. The mean square error Δ(h; f) of the estimate Â_sξ is given by the formula

\[ \Delta(h;f) = E\bigl|A_s\xi - \hat{A}_s\xi\bigr|^2 = \frac{1}{2\pi}\int_{-\pi}^{\pi} \bigl|A_s(e^{i\lambda}) - h(e^{i\lambda})\bigr|^2 f(\lambda)\,d\lambda. \]

The Hilbert space projection method proposed by A. N. Kolmogorov [18] makes it possible to find the spectral characteristic h(e^{iλ}) and the mean square error Δ(h; f) of the optimal linear estimate of the functional A_sξ in the case where the spectral density f(λ) of the sequence is exactly known and the minimality condition (1) is satisfied. The spectral characteristic can be found from the following conditions:

1) h(e^{iλ}) ∈ L2^s(f);
2) A_s(e^{iλ}) − h(e^{iλ}) ⊥ L2^s(f).


It follows from the second condition that for any η ∈ H^s(ξ) the following equation should be satisfied:

\[ \bigl(A_s\xi - \hat{A}_s\xi,\ \eta\bigr) = E\bigl(A_s\xi - \hat{A}_s\xi\bigr)\overline{\eta} = 0. \]

The last relation is equivalent to the equations

\[ E\bigl(A_s\xi - \hat{A}_s\xi\bigr)\overline{\xi(k)} = 0, \quad k \in \mathbb{Z}\setminus S. \]

Using the representations (3), (4) and the definition of the inner product in H, we get

\[ E\left[\int_{-\pi}^{\pi}\bigl(A_s(e^{i\lambda}) - h(e^{i\lambda})\bigr)\, Z(d\lambda)\cdot\overline{\int_{-\pi}^{\pi} e^{ik\lambda}\, Z(d\lambda)}\right] = \frac{1}{2\pi}\int_{-\pi}^{\pi}\bigl(A_s(e^{i\lambda}) - h(e^{i\lambda})\bigr) f(\lambda)\, e^{-ik\lambda}\,d\lambda = 0, \quad k \in \mathbb{Z}\setminus S. \]

It follows from this condition that the function (A_s(e^{iλ}) − h(e^{iλ})) f(λ) is of the form

\[ \bigl(A_s(e^{i\lambda}) - h(e^{i\lambda})\bigr) f(\lambda) = C_s(e^{i\lambda}), \qquad C_s(e^{i\lambda}) = \sum_{l=0}^{s-1}\,\sum_{j=M_l}^{M_l+N_{l+1}} c(j) e^{ij\lambda}, \]

where c(j), j ∈ S, are unknown coefficients to be determined. From the last relation we deduce that the spectral characteristic h(e^{iλ}) of the optimal linear estimate of the functional A_sξ is of the form

\[ h(e^{i\lambda}) = A_s(e^{i\lambda}) - C_s(e^{i\lambda})\, f^{-1}(\lambda). \tag{5} \]

To find equations determining the unknown coefficients c(j), j ∈ S, we use the decomposition of the function f^{-1}(λ) into the Fourier series

\[ f^{-1}(\lambda) = \sum_{m=-\infty}^{\infty} r_m e^{im\lambda}, \tag{6} \]

where r_m are the Fourier coefficients of the function f^{-1}(λ). Inserting (6) into (5), we obtain the following representation of the spectral characteristic:

\[ h(e^{i\lambda}) = \sum_{l=0}^{s-1}\,\sum_{j=M_l}^{M_l+N_{l+1}} a(j) e^{ij\lambda} - \left(\sum_{l=0}^{s-1}\,\sum_{j=M_l}^{M_l+N_{l+1}} c(j) e^{ij\lambda}\right)\left(\sum_{m=-\infty}^{\infty} r_m e^{im\lambda}\right). \tag{7} \]

It follows from the first condition, h(e^{iλ}) ∈ L2^s(f), that the Fourier coefficients of the function h(e^{iλ}) are equal to zero for j ∈ S, namely

\[ \int_{-\pi}^{\pi} h(e^{i\lambda})\, e^{-ij\lambda}\,d\lambda = 0, \quad j \in S. \]


Using the last relations and (7), we get the following system of equations that determines the unknown coefficients c(j), j ∈ S:

\[
\begin{aligned}
a(M_{k-1}) - \sum_{l=0}^{s-1}\,\sum_{j=M_l}^{M_l+N_{l+1}} c(j)\, r_{M_{k-1}-j} &= 0;\\
a(M_{k-1}+1) - \sum_{l=0}^{s-1}\,\sum_{j=M_l}^{M_l+N_{l+1}} c(j)\, r_{M_{k-1}+1-j} &= 0;\\
&\;\vdots\\
a(M_{k-1}+N_k) - \sum_{l=0}^{s-1}\,\sum_{j=M_l}^{M_l+N_{l+1}} c(j)\, r_{M_{k-1}+N_k-j} &= 0,
\end{aligned} \tag{8}
\]

where k = 1, ..., s. Denote \( \vec{a}_s = (\vec{a}_1, \vec{a}_2, \dots, \vec{a}_s) \), \( \vec{a}_k = (a(M_{k-1}), \dots, a(M_{k-1}+N_k)) \), k = 1, ..., s, and q = N_1 + N_2 + ... + N_s + s. Let B_s be the q × q matrix

\[ B_s = \begin{pmatrix} B_{11} & B_{12} & \dots & B_{1s}\\ B_{21} & B_{22} & \dots & B_{2s}\\ \vdots & \vdots & \ddots & \vdots\\ B_{s1} & B_{s2} & \dots & B_{ss} \end{pmatrix}, \]

where B_{mn} are (N_m+1) × (N_n+1) matrices whose elements are Fourier coefficients of the function f^{-1}(λ):

\[ B_{mn}(k,j) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f^{-1}(\lambda)\, e^{-i(k-j)\lambda}\,d\lambda = r_{k-j}, \]

k = M_{m-1}, ..., M_{m-1}+N_m, j = M_{n-1}, ..., M_{n-1}+N_n, m, n = 1, ..., s.

Making use of the introduced notation, we can write the system (8) in the form of the equation

\[ \vec{a}_s = B_s \vec{c}_s, \tag{9} \]

where \( \vec{c}_s = (\vec{c}_1, \vec{c}_2, \dots, \vec{c}_s) \), \( \vec{c}_k = (c(M_{k-1}), \dots, c(M_{k-1}+N_k)) \), k = 1, ..., s, is the vector constructed from the unknown coefficients c(j), j ∈ S. Since the matrix B_s is invertible [31], we get the formula

\[ \vec{c}_s = B_s^{-1} \vec{a}_s. \tag{10} \]

Hence, the unknown coefficients c(j), j ∈ S, are calculated by the formula

\[ c(j) = \bigl(B_s^{-1}\vec{a}_s\bigr)_j, \]

where \( (B_s^{-1}\vec{a}_s)_j \) is the j-th component of the vector \( B_s^{-1}\vec{a}_s \), and the formula for calculating the spectral characteristic of the estimate Â_sξ takes the form

\[ h(e^{i\lambda}) = \sum_{l=0}^{s-1}\,\sum_{j=M_l}^{M_l+N_{l+1}} a(j) e^{ij\lambda} - \left(\sum_{l=0}^{s-1}\,\sum_{j=M_l}^{M_l+N_{l+1}} \bigl(B_s^{-1}\vec{a}_s\bigr)_j e^{ij\lambda}\right)\left(\sum_{m=-\infty}^{\infty} r_m e^{im\lambda}\right). \tag{11} \]


The mean square error of the estimate of the functional can be calculated by the formula

\[
\begin{aligned}
\Delta(h;f) &= \frac{1}{2\pi}\int_{-\pi}^{\pi} \bigl|C_s(e^{i\lambda})\bigr|^2 f^{-1}(\lambda)\,d\lambda\\
&= \frac{1}{2\pi}\int_{-\pi}^{\pi} \left(\sum_{l=0}^{s-1}\,\sum_{k=M_l}^{M_l+N_{l+1}} c(k) e^{ik\lambda}\right)\left(\sum_{l=0}^{s-1}\,\sum_{j=M_l}^{M_l+N_{l+1}} \overline{c(j)}\, e^{-ij\lambda}\right)\left(\sum_{m=-\infty}^{\infty} r_m e^{im\lambda}\right) d\lambda\\
&= \bigl\langle B_s\vec{c}_s,\ \vec{c}_s\bigr\rangle = \bigl\langle B_s^{-1}\vec{a}_s,\ \vec{a}_s\bigr\rangle,
\end{aligned} \tag{12}
\]

where ⟨·, ·⟩ is the inner product in C^q. Let us summarize our results and present them in the form of a theorem.

Theorem 2.1
Let ξ(j) be a stationary stochastic sequence with a spectral density f(λ) that satisfies the minimality condition (1). The mean square error Δ(h; f) and the spectral characteristic h(e^{iλ}) of the optimal linear estimate Â_sξ of the functional A_sξ from observations of the sequence ξ(j) at points of time j ∈ Z\S, where \( S = \bigcup_{l=0}^{s-1}\{M_l, \dots, M_l+N_{l+1}\} \), can be calculated by formulas (12) and (11), respectively.
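Theorem 2.1 reduces the interpolation problem to a finite linear system. A minimal numerical sketch (ours, not the paper's; the helper name and the test density are assumptions) builds B_s from the coefficients r_m, solves equation (10), and evaluates the error (12):

```python
import numpy as np

def interpolation_error(S, a, r):
    """S: missing time points; a: weights a(j), j in S;
    r: dict m -> r_m, the Fourier coefficients of f^{-1}(lambda)."""
    B = np.array([[r.get(k - j, 0.0) for j in S] for k in S])  # B[k, j] = r_{k-j}
    c = np.linalg.solve(B, a)            # formula (10): c = B_s^{-1} a
    return c, float(np.dot(c, a))        # formula (12): Delta = <B_s^{-1} a, a>

# AR(1)-type density: r_0 = 1 + alpha^2, r_{+-1} = -alpha, all others zero.
alpha = 0.5
r = {0: 1 + alpha**2, 1: -alpha, -1: -alpha}

# One missing value (S = {0}, a(0) = 1): Delta = 1/r_0 = 1/(1+alpha^2).
_, delta = interpolation_error([0], np.array([1.0]), r)
print(round(delta, 6))   # 0.8
```

For a single missing observation the value 1/r_0 = 2π/∫f^{-1}(λ)dλ is the classical minimal interpolation error, which gives a quick sanity check of the construction.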

Example 1. Consider the problem of linear interpolation of the functional

\[ A_2\xi = a(0)\xi(0) + a(1)\xi(1) + a(5)\xi(5), \]

which depends on the unknown values ξ(0), ξ(1), ξ(5) of the stochastic sequence ξ(j) from observations at points j ∈ Z\S, where S = {0, 1} ∪ {5}. In this case the spectral characteristic (7) of the estimate Â_2ξ can be calculated by the formula

\[ h(e^{i\lambda}) = \bigl(a(0) + a(1)e^{i\lambda} + a(5)e^{5i\lambda}\bigr) - \bigl(c(0) + c(1)e^{i\lambda} + c(5)e^{5i\lambda}\bigr)\, f^{-1}(\lambda), \tag{13} \]

where f(λ) is the known spectral density, the function f^{-1}(λ) admits the decomposition \( f^{-1}(\lambda) = \sum_{m=-\infty}^{\infty} r_m e^{im\lambda} \), and the coefficients c(0), c(1), c(5) are determined by the system of equations

\[
\begin{aligned}
a(0) &= c(0)\, r_0 + c(1)\, r_{-1} + c(5)\, r_{-5},\\
a(1) &= c(0)\, r_1 + c(1)\, r_0 + c(5)\, r_{-4},\\
a(5) &= c(0)\, r_5 + c(1)\, r_4 + c(5)\, r_0.
\end{aligned}
\]

The matrix B_2 is of the form

\[ B_2 = \begin{pmatrix} B_{11} & B_{12}\\ B_{21} & B_{22} \end{pmatrix}, \qquad B_{11} = \begin{pmatrix} r_0 & r_{-1}\\ r_1 & r_0 \end{pmatrix}, \quad B_{12} = \begin{pmatrix} r_{-5}\\ r_{-4} \end{pmatrix}, \quad B_{21} = \begin{pmatrix} r_5 & r_4 \end{pmatrix}, \quad B_{22} = \begin{pmatrix} r_0 \end{pmatrix}. \]

Let \( \vec{a}_2 = (\vec{a}_1, \vec{a}_2) \), where \( \vec{a}_1 = (a(0), a(1)) \), \( \vec{a}_2 = (a(5)) \), and let \( \vec{c}_2 = (\vec{c}_1, \vec{c}_2) \), where \( \vec{c}_1 = (c(0), c(1)) \), \( \vec{c}_2 = (c(5)) \). In this case equations (9) and (10) can be rewritten as \( \vec{a}_2 = B_2\vec{c}_2 \), \( \vec{c}_2 = B_2^{-1}\vec{a}_2 \). Denote

\[ D = \det(B_2) = r_0^3 - r_0 r_{-4} r_4 - r_1 r_{-1} r_0 + r_1 r_{-5} r_4 + r_5 r_{-1} r_{-4} - r_5 r_{-5} r_0. \]

We get the following formulas for calculating the coefficients c(0), c(1), c(5):

\[ c(0) = \bigl[(r_0^2 - r_{-4} r_4)\, a(0) - (r_{-1} r_0 - r_{-5} r_4)\, a(1) + (r_{-1} r_4 - r_{-5} r_0)\, a(5)\bigr] D^{-1}, \]


\[ c(1) = \bigl[(-r_1 r_0 + r_{-4} r_5)\, a(0) + (r_0^2 - r_5 r_{-5})\, a(1) - (r_0 r_{-4} - r_{-5} r_1)\, a(5)\bigr] D^{-1}, \]

\[ c(5) = \bigl[(r_1 r_4 - r_0 r_5)\, a(0) - (r_0 r_4 - r_5 r_{-1})\, a(1) + (r_0^2 - r_{-1} r_1)\, a(5)\bigr] D^{-1}. \]

Thus, the unknown coefficients c(0), c(1), c(5) in (13) are determined. The mean square error of the estimate is calculated by the formula

\[
\begin{aligned}
\Delta(f) = \langle B_2^{-1}\vec{a}_2, \vec{a}_2\rangle = D^{-1}\bigl[ & (a(0))^2 r_0^2 - (a(0))^2 r_{-4} r_4 - a(0)a(1) r_{-1} r_0 + a(0)a(1) r_{-5} r_4\\
& + a(0)a(5) r_{-1} r_{-4} - a(0)a(5) r_{-5} r_0 - a(0)a(1) r_0 r_1 + a(0)a(1) r_{-4} r_5 + (a(1))^2 r_0^2\\
& - (a(1))^2 r_5 r_{-5} - a(1)a(5) r_0 r_{-4} + a(1)a(5) r_{-5} r_1 + a(0)a(5) r_1 r_4 - a(0)a(5) r_0 r_5\\
& - a(1)a(5) r_0 r_4 + a(1)a(5) r_{-1} r_5 + (a(5))^2 r_0^2 - (a(5))^2 r_1 r_{-1} \bigr].
\end{aligned}
\]

Consider this problem for the spectral density

\[ f(\lambda) = \frac{1}{|1-\alpha e^{-i\lambda}|^2}, \quad |\alpha| < 1. \]

In this case we have

\[ f^{-1}(\lambda) = \bigl|1-\alpha e^{-i\lambda}\bigr|^2 = 1 + |\alpha|^2 - \bar{\alpha}\, e^{i\lambda} - \alpha\, e^{-i\lambda}, \]

and the Fourier coefficients of the function f^{-1}(λ) are

\[ r_0 = 1 + |\alpha|^2, \quad r_1 = -\bar{\alpha}, \quad r_{-1} = -\alpha. \]

The spectral characteristic is calculated by the formula

\[ h(e^{i\lambda}) = \bigl(a(0) + a(1)e^{i\lambda} + a(5)e^{5i\lambda}\bigr) - \bigl(c(0) + c(1)e^{i\lambda} + c(5)e^{5i\lambda}\bigr)\bigl(1 + |\alpha|^2 - \bar{\alpha} e^{i\lambda} - \alpha e^{-i\lambda}\bigr), \]

where the coefficients c(0), c(1), c(5) are calculated by the formulas

\[
\begin{aligned}
c(0) &= \bigl[(1+|\alpha|^2)^2\, a(0) + \alpha(1+|\alpha|^2)\, a(1)\bigr] D^{-1},\\
c(1) &= \bigl[\bar{\alpha}(1+|\alpha|^2)\, a(0) + (1+|\alpha|^2)^2\, a(1)\bigr] D^{-1},\\
c(5) &= \bigl[\bigl((1+|\alpha|^2)^2 - |\alpha|^2\bigr)\, a(5)\bigr] D^{-1},
\end{aligned}
\]

with \( D = (1+|\alpha|^2)(1+|\alpha|^2+|\alpha|^4) \). Inserting the values of c(0), c(1), c(5) into the formula for the spectral characteristic, we obtain

\[
\begin{aligned}
h(e^{i\lambda}) = \Bigl[&\bigl(a(0)\alpha(1+|\alpha|^2)^2 + a(1)\alpha^2(1+|\alpha|^2)\bigr) e^{-i\lambda} + \bigl(a(0)\bar{\alpha}^2(1+|\alpha|^2) + a(1)\bar{\alpha}(1+|\alpha|^2)^2\bigr) e^{2i\lambda}\\
&+ a(5)\alpha\bigl(1+|\alpha|^2+|\alpha|^4\bigr) e^{4i\lambda} + a(5)\bar{\alpha}\bigl(1+|\alpha|^2+|\alpha|^4\bigr) e^{6i\lambda}\Bigr] \bigl((1+|\alpha|^2)(1+|\alpha|^2+|\alpha|^4)\bigr)^{-1}.
\end{aligned}
\]


The mean square error is of the form

\[ \Delta(f) = D^{-1}\bigl[(a(0))^2(1+|\alpha|^2)^2 + a(0)a(1)\alpha(1+|\alpha|^2) + a(0)a(1)(1+|\alpha|^2)\bar{\alpha} + (a(1))^2(1+|\alpha|^2)^2 + (a(5))^2(1+|\alpha|^2)^2 - (a(5))^2|\alpha|^2\bigr]. \]

a) Let a(0) = 1, a(1) = 0, a(5) = 1. We get the following formula for the spectral characteristic:

\[ h(e^{i\lambda}) = (1 + e^{5i\lambda}) - \bigl(c(0) + c(1)e^{i\lambda} + c(5)e^{5i\lambda}\bigr)\bigl(1+|\alpha|^2-\bar{\alpha}e^{i\lambda}-\alpha e^{-i\lambda}\bigr), \]

with the coefficients

\[ c(0) = \frac{1+|\alpha|^2}{1+|\alpha|^2+|\alpha|^4}, \qquad c(1) = \frac{\bar{\alpha}}{1+|\alpha|^2+|\alpha|^4}, \qquad c(5) = \frac{1}{1+|\alpha|^2}. \]

Thus, the spectral characteristic of the estimate Â_2ξ is of the form

\[ h(e^{i\lambda}) = \Bigl[\alpha(1+|\alpha|^2)^2 e^{-i\lambda} + \bar{\alpha}^2(1+|\alpha|^2) e^{2i\lambda} + \alpha\bigl(1+|\alpha|^2+|\alpha|^4\bigr) e^{4i\lambda} + \bar{\alpha}\bigl(1+|\alpha|^2+|\alpha|^4\bigr) e^{6i\lambda}\Bigr]\bigl((1+|\alpha|^2)(1+|\alpha|^2+|\alpha|^4)\bigr)^{-1}. \]

The mean square error of the estimate Â_2ξ is calculated by the formula

\[ \Delta_1 = \Delta(f) = \bigl(2(1+|\alpha|^2)^2 - |\alpha|^2\bigr)\bigl((1+|\alpha|^2)(1+|\alpha|^2+|\alpha|^4)\bigr)^{-1}. \]

b) Let a(0) = 1, a(1) = 0, a(5) = 0. We get

\[ h(e^{i\lambda}) = 1 - \bigl(c(0) + c(1)e^{i\lambda}\bigr)\bigl(1+|\alpha|^2-\bar{\alpha}e^{i\lambda}-\alpha e^{-i\lambda}\bigr), \]

where the coefficients c(0), c(1) are calculated by the formulas

\[ c(0) = \frac{1+|\alpha|^2}{1+|\alpha|^2+|\alpha|^4}, \qquad c(1) = \frac{\bar{\alpha}}{1+|\alpha|^2+|\alpha|^4}, \]

and c(5) = 0. The spectral characteristic of the estimate Â_2ξ is of the form

\[ h(e^{i\lambda}) = \bigl(\alpha(1+|\alpha|^2)\, e^{-i\lambda} + \bar{\alpha}^2\, e^{2i\lambda}\bigr)\bigl(1+|\alpha|^2+|\alpha|^4\bigr)^{-1}. \]

The mean square error is calculated by the formula

\[ \Delta_2 = \Delta(f) = \bigl(1+|\alpha|^2\bigr)\bigl(1+|\alpha|^2+|\alpha|^4\bigr)^{-1}. \]

It is obvious that Δ₁ > Δ₂.
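The closed-form expressions of Example 1 can be cross-checked numerically. The sketch below (our own check, not part of the paper; the value of α is arbitrary) solves the system B_2 c = a directly for S = {0, 1, 5} and compares the resulting error with the formulas for Δ₁ and Δ₂:

```python
import numpy as np

alpha = 0.3
S = [0, 1, 5]
r = {0: 1 + alpha**2, 1: -alpha, -1: -alpha}   # AR(1) coefficients of f^{-1}
B = np.array([[r.get(k - j, 0.0) for j in S] for k in S])
E = 1 + alpha**2 + alpha**4

# Case a): a(0) = 1, a(1) = 0, a(5) = 1.
a = np.array([1.0, 0.0, 1.0])
delta1 = float(np.dot(np.linalg.solve(B, a), a))
closed1 = (2 * (1 + alpha**2) ** 2 - alpha**2) / ((1 + alpha**2) * E)

# Case b): a(0) = 1, a(1) = 0, a(5) = 0.
a = np.array([1.0, 0.0, 0.0])
delta2 = float(np.dot(np.linalg.solve(B, a), a))
closed2 = (1 + alpha**2) / E

print(abs(delta1 - closed1) < 1e-12, abs(delta2 - closed2) < 1e-12, delta1 > delta2)
# True True True
```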


Example 2. Consider the problem of linear interpolation of the functional

\[ A_3\xi = a(-n)\xi(-n) + a(0)\xi(0) + a(m)\xi(m), \]

which depends on the unknown values ξ(−n), ξ(0), ξ(m) of the stochastic sequence ξ(j) from observations of the sequence at points j ∈ Z\S, where S = {−n} ∪ {0} ∪ {m}. The spectral characteristic of the estimate Â_3ξ is of the form

\[ h(e^{i\lambda}) = \bigl(a(-n)e^{-ni\lambda} + a(0) + a(m)e^{mi\lambda}\bigr) - \bigl(c(-n)e^{-ni\lambda} + c(0) + c(m)e^{mi\lambda}\bigr)\, f^{-1}(\lambda), \]

where \( f^{-1}(\lambda) = \sum_{m=-\infty}^{\infty} r_m e^{im\lambda} \), and the coefficients c(−n), c(0), c(m) are determined by the system of equations

\[
\begin{aligned}
a(-n) &= c(-n)\, r_0 + c(0)\, r_{-n} + c(m)\, r_{-n-m},\\
a(0) &= c(-n)\, r_n + c(0)\, r_0 + c(m)\, r_{-m},\\
a(m) &= c(-n)\, r_{n+m} + c(0)\, r_m + c(m)\, r_0.
\end{aligned}
\]

The matrix B_3 is of the form

\[ B_3 = \begin{pmatrix} B_{11} & B_{12} & B_{13}\\ B_{21} & B_{22} & B_{23}\\ B_{31} & B_{32} & B_{33} \end{pmatrix}, \]

with B_{11} = (r_0), B_{12} = (r_{−n}), B_{13} = (r_{−n−m}), B_{21} = (r_n), B_{22} = (r_0), B_{23} = (r_{−m}), B_{31} = (r_{n+m}), B_{32} = (r_m), B_{33} = (r_0).

Let \( \vec{a}_3 = (\vec{a}_1, \vec{a}_2, \vec{a}_3) \), where \( \vec{a}_1 = (a(-n)) \), \( \vec{a}_2 = (a(0)) \), \( \vec{a}_3 = (a(m)) \), and let \( \vec{c}_3 = (\vec{c}_1, \vec{c}_2, \vec{c}_3) \), where \( \vec{c}_1 = (c(-n)) \), \( \vec{c}_2 = (c(0)) \), \( \vec{c}_3 = (c(m)) \). In this case equations (9) and (10) can be written as \( \vec{a}_3 = B_3\vec{c}_3 \), \( \vec{c}_3 = B_3^{-1}\vec{a}_3 \). Denote

\[ D = \det(B_3) = r_0^3 - r_0 r_{-m} r_m - r_n r_{-n} r_0 + r_n r_{-n-m} r_m + r_{n+m} r_{-n} r_{-m} - r_{n+m} r_{-n-m} r_0. \]

Then

\[
\begin{aligned}
c(-n) &= \bigl[(r_0^2 - r_{-m} r_m)\, a(-n) - (r_{-n} r_0 - r_{-n-m} r_m)\, a(0) + (r_{-n} r_m - r_{-n-m} r_0)\, a(m)\bigr]/D,\\
c(0) &= \bigl[(-r_n r_0 + r_{-m} r_{n+m})\, a(-n) + (r_0^2 - r_{n+m} r_{-n-m})\, a(0) - (r_0 r_{-m} - r_{-n-m} r_n)\, a(m)\bigr]/D,\\
c(m) &= \bigl[(r_n r_m - r_0 r_{n+m})\, a(-n) - (r_0 r_m - r_{n+m} r_{-n})\, a(0) + (r_0^2 - r_{-n} r_n)\, a(m)\bigr]/D.
\end{aligned}
\]

The mean square error is calculated by the formula

\[
\begin{aligned}
\Delta(f) = \langle B_3^{-1}\vec{a}_3, \vec{a}_3\rangle = D^{-1}\bigl[ &(a(-n))^2 r_0^2 - (a(-n))^2 r_{-m} r_m - a(-n)a(0) r_{-n} r_0\\
&+ a(-n)a(0) r_{-n-m} r_m + a(-n)a(m) r_{-n} r_{-m} - a(-n)a(m) r_{-n-m} r_0 - a(-n)a(0) r_0 r_n\\
&+ a(-n)a(0) r_{-m} r_{n+m} + (a(0))^2 r_0^2 - (a(0))^2 r_{n+m} r_{-n-m} - a(0)a(m) r_0 r_{-m}\\
&+ a(0)a(m) r_{-n-m} r_n + a(-n)a(m) r_n r_m - a(-n)a(m) r_0 r_{n+m} - a(0)a(m) r_0 r_m\\
&+ a(0)a(m) r_{-n} r_{n+m} + (a(m))^2 r_0^2 - (a(m))^2 r_n r_{-n}\bigr].
\end{aligned}
\]


Consider the problem of estimation of the functional A_3ξ = ξ(−n) + ξ(0) + ξ(m) in the case where the spectral density of the stochastic sequence is of the form

\[ f(\lambda) = \frac{1}{|1-\alpha e^{-i\lambda}|^2}, \quad |\alpha| < 1. \]

In this case \( f^{-1}(\lambda) = 1 + |\alpha|^2 - \bar{\alpha} e^{i\lambda} - \alpha e^{-i\lambda} \), and all Fourier coefficients of the function f^{-1}(λ), except r_0, r_1, r_{−1}, are equal to zero. Consider the following four cases: 1) n ≠ 1, m ≠ 1; 2) n ≠ 1, m = 1; 3) n = 1, m ≠ 1; 4) n = 1, m = 1. The spectral characteristic and the mean square error of the estimate Â_3ξ of the functional A_3ξ are calculated by the following formulas.

In the first case

\[ h(e^{i\lambda}) = \bigl(\alpha e^{(-n-1)i\lambda} + \bar{\alpha} e^{(-n+1)i\lambda} + \alpha e^{-i\lambda} + \bar{\alpha} e^{i\lambda} + \alpha e^{(m-1)i\lambda} + \bar{\alpha} e^{(m+1)i\lambda}\bigr)\bigl(1+|\alpha|^2\bigr)^{-1}, \qquad \Delta(f) = \frac{3}{1+|\alpha|^2}. \]

In the second case

\[ h(e^{i\lambda}) = \Bigl[\bigl(\alpha(1+|\alpha|^2)+\alpha^2\bigr)\bigl(1+|\alpha|^2\bigr) e^{-i\lambda} + \bigl(\bar{\alpha}^2+\bar{\alpha}(1+|\alpha|^2)\bigr)\bigl(1+|\alpha|^2\bigr) e^{2i\lambda} + \bar{\alpha}\bigl(1+|\alpha|^2+|\alpha|^4\bigr) e^{(-n+1)i\lambda} + \alpha\bigl(1+|\alpha|^2+|\alpha|^4\bigr) e^{(-n-1)i\lambda}\Bigr]\bigl((1+|\alpha|^2)(1+|\alpha|^2+|\alpha|^4)\bigr)^{-1}, \]

\[ \Delta(f) = \bigl(3(1+|\alpha|^2)^2 - |\alpha|^2 + (1+|\alpha|^2)(\alpha+\bar{\alpha})\bigr)\bigl((1+|\alpha|^2)(1+|\alpha|^2+|\alpha|^4)\bigr)^{-1}. \]

In the third case

\[ h(e^{i\lambda}) = \Bigl[\bigl(\bar{\alpha}^2+\bar{\alpha}(1+|\alpha|^2)\bigr)\bigl(1+|\alpha|^2\bigr) e^{i\lambda} + \bigl(\alpha^2+\alpha(1+|\alpha|^2)\bigr)\bigl(1+|\alpha|^2\bigr) e^{-2i\lambda} + \bar{\alpha}\bigl(1+|\alpha|^2+|\alpha|^4\bigr) e^{(m+1)i\lambda} + \alpha\bigl(1+|\alpha|^2+|\alpha|^4\bigr) e^{(m-1)i\lambda}\Bigr]\bigl((1+|\alpha|^2)(1+|\alpha|^2+|\alpha|^4)\bigr)^{-1}, \]

\[ \Delta(f) = \bigl(3(1+|\alpha|^2)^2 - |\alpha|^2 + (1+|\alpha|^2)(\alpha+\bar{\alpha})\bigr)\bigl((1+|\alpha|^2)(1+|\alpha|^2+|\alpha|^4)\bigr)^{-1}. \]

In the fourth case

\[ h(e^{i\lambda}) = \Bigl[\bigl(\bar{\alpha}^2 + \bar{\alpha}(1+|\alpha|^2) + (1+|\alpha|^2+|\alpha|^4)\bigr)\bar{\alpha}\, e^{2i\lambda} + \bigl(\alpha^2 + \alpha(1+|\alpha|^2) + (1+|\alpha|^2+|\alpha|^4)\bigr)\alpha\, e^{-2i\lambda}\Bigr]\bigl(1+|\alpha|^2+|\alpha|^4+|\alpha|^6\bigr)^{-1}, \]

\[ \Delta(f) = \bigl(3(1+|\alpha|^2)^2 - 2|\alpha|^2 + 2(1+|\alpha|^2)(\alpha+\bar{\alpha}) + (\alpha^2+\bar{\alpha}^2)\bigr)\bigl((1+|\alpha|^2)(1+|\alpha|^4)\bigr)^{-1}. \]
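These case formulas can be verified by solving the 3×3 system directly. The following check (an illustration of ours, not the paper's code; the chosen n, m and real α are arbitrary) confirms the mean square errors of the first and fourth cases:

```python
import numpy as np

alpha = 0.5
r = {0: 1 + alpha**2, 1: -alpha, -1: -alpha}

def delta(S):
    # Solve B_3 c = a with B[k, j] = r_{k-j} and a(-n) = a(0) = a(m) = 1,
    # then return Delta = <B_3^{-1} a, a>.
    B = np.array([[r.get(k - j, 0.0) for j in S] for k in S])
    a = np.ones(len(S))
    return float(np.dot(np.linalg.solve(B, a), a))

# Case 1 (n != 1, m != 1), e.g. S = {-3, 0, 2}: Delta = 3/(1 + alpha^2).
print(round(delta([-3, 0, 2]), 6))   # 2.4

# Case 4 (n = m = 1), S = {-1, 0, 1}: compare with the formula above (real alpha,
# so alpha + conj(alpha) = 2*alpha and alpha^2 + conj(alpha)^2 = 2*alpha^2).
closed4 = ((3 * (1 + alpha**2) ** 2 - 2 * alpha**2
            + 2 * (1 + alpha**2) * (2 * alpha) + 2 * alpha**2)
           / ((1 + alpha**2) * (1 + alpha**4)))
print(abs(delta([-1, 0, 1]) - closed4) < 1e-12)   # True
```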

3. Minimax-robust method of interpolation

The traditional methods of estimation of the functional A_sξ which depends on the unknown values of a stationary stochastic sequence ξ(j) can be applied in the case where the spectral density f(λ) of the considered stochastic sequence is exactly known. In practice, however, we do not have complete information on the spectral density of the sequence. For this reason we apply the minimax (robust) method of estimation of the functional A_sξ; that is, we find an estimate that minimizes the maximum of the mean square errors over all spectral densities from a given class D of admissible spectral densities.

Definition 3.1. For a given class of spectral densities D, a spectral density f_0(λ) ∈ D is called least favourable in D for the optimal linear estimation of the functional A_sξ if the following relation holds true:

\[ \Delta(f_0) = \Delta\bigl(h(f_0); f_0\bigr) = \max_{f\in D} \Delta\bigl(h(f); f\bigr). \]

Definition 3.2. For a given class of spectral densities D, the spectral characteristic h⁰(e^{iλ}) of the optimal linear estimate of the functional A_sξ is called minimax-robust if

\[ h^0(e^{i\lambda}) \in H_D = \bigcap_{f\in D} L_2^s(f), \qquad \min_{h\in H_D}\,\max_{f\in D}\, \Delta(h; f) = \sup_{f\in D} \Delta\bigl(h^0; f\bigr). \]

It follows from the introduced definitions and the obtained formulas that the following statement holds true.

Lemma 3.1
The spectral density f_0(λ) ∈ D is least favourable in the class of admissible spectral densities D for the optimal linear estimation of the functional A_sξ if the Fourier coefficients of the function f_0^{-1}(λ) define a matrix B_s⁰ that is a solution to the optimization problem

\[ \max_{f\in D} \bigl\langle B_s^{-1}\vec{a}_s, \vec{a}_s\bigr\rangle = \bigl\langle (B_s^0)^{-1}\vec{a}_s, \vec{a}_s\bigr\rangle. \tag{14} \]

The minimax spectral characteristic h⁰ = h(f_0) can be calculated by formula (11) if h(f_0) ∈ H_D.

The least favourable spectral density f_0 and the minimax spectral characteristic h⁰ form a saddle point of the function Δ(h; f) on the set H_D × D. The saddle point inequalities

\[ \Delta(h; f_0) \ge \Delta\bigl(h^0; f_0\bigr) \ge \Delta\bigl(h^0; f\bigr) \quad \forall f \in D,\ \forall h \in H_D \]

hold true if h⁰ = h(f_0) and h(f_0) ∈ H_D, where f_0 is a solution to the constrained optimization problem

\[ \tilde{\Delta}(f) = -\Delta\bigl(h^0; f\bigr) = -\frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{\bigl|C_s^0(e^{i\lambda})\bigr|^2}{f_0^2(\lambda)}\, f(\lambda)\,d\lambda \to \inf, \quad f(\lambda)\in D, \tag{15} \]


where

\[ C_s^0(e^{i\lambda}) = \sum_{l=0}^{s-1}\,\sum_{j=M_l}^{M_l+N_{l+1}} \bigl((B_s^0)^{-1}\vec{a}_s\bigr)_j\, e^{ij\lambda}. \]

The constrained optimization problem (15) is equivalent to the unconstrained optimization problem

\[ \Delta_D(f) = \tilde{\Delta}(f) + \delta(f\,|\,D) \to \inf, \]

where δ(f | D) is the indicator function of the set D. A solution f_0 to this problem is characterized by the condition 0 ∈ ∂Δ_D(f_0), where ∂Δ_D(f_0) is the subdifferential of the convex functional Δ_D(f) at the point f_0. This condition makes it possible to find the least favourable spectral densities in some special classes of spectral densities D [14], [32], [33].

Note that the form of the functional Δ(h⁰; f) is convenient for applying the method of Lagrange multipliers to find a solution to the problem (15). Making use of the method of Lagrange multipliers and the form of subdifferentials of indicator functions, we describe relations that determine least favourable spectral densities in some special classes of spectral densities (see the books [11, 29, 30] for additional details).

4. Least favourable spectral densities in the class D_0^-

Consider the problem of the optimal estimation of the functional A_sξ which depends on the unknown values of a stationary stochastic sequence ξ(j) in the case where the spectral density belongs to the class

\[ D_0^- = \left\{ f(\lambda)\ \middle|\ \frac{1}{2\pi}\int_{-\pi}^{\pi} f^{-1}(\lambda)\,d\lambda \ge P \right\}. \]

Let the sequence a(k), k ∈ S, that determines the functional A_sξ be strictly positive. To find solutions to the constrained optimization problem (15) we use the method of Lagrange multipliers. With the help of this method we get the equation

\[ \frac{1}{2\pi}\int_{-\pi}^{\pi}\left[\frac{\bigl|C_s^0(e^{i\lambda})\bigr|^2}{f_0^2(\lambda)} - p_0^2\,\frac{1}{f_0^2(\lambda)}\right]\rho(f(\lambda))\,d\lambda = 0, \]

where p_0² is a constant (the Lagrange multiplier) and ρ(f(λ)) is a variation of the function f(λ). From a generalization of the Lagrange lemma we get that the Fourier coefficients of the function f_0^{-1} satisfy the equation

\[ \left|\sum_{l=0}^{s-1}\,\sum_{k=M_l}^{M_l+N_{l+1}} c(k) e^{ik\lambda}\right|^2 = p_0^2, \tag{16} \]

where c(k), k ∈ S, are the components of the vector \( \vec{c}_s \) that satisfies the equation \( B_s^0\vec{c}_s = \vec{a}_s \); the matrix B_s⁰ consists of the matrices B⁰_{mn}(k, j), each of which is determined by the Fourier coefficients of the function f_0^{-1}(λ):

\[ B_{mn}^0(k,j) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f_0^{-1}(\lambda)\, e^{-i(k-j)\lambda}\,d\lambda = r_{k-j}^0, \]

k = M_{m-1}, ..., M_{m-1}+N_m, j = M_{n-1}, ..., M_{n-1}+N_n, m, n = 1, ..., s.


The Fourier coefficients r_k = r_{−k}, k ∈ S, satisfy both equation (16) and the equation \( B_s^0\vec{c}_s = \vec{a}_s \). These coefficients can be found from the equation \( B_s^0\vec{p}_s^{\,0} = \vec{a}_s \), where \( \vec{p}_s^{\,0} = (p_0, 0, \dots, 0) \). The last relation can be presented in the form of the system of equations

\[ r_k\, p_0 = a(k), \quad k \in S. \]

From the first equation of the system (for k = 0) we find the unknown value p_0 = a(0)(r_0)^{-1}. It follows from the extremum condition (14) and the restriction on the spectral densities from the class D_0^- that the Fourier coefficient \( r_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi} f_0^{-1}(\lambda)\,d\lambda = P \). Thus, r_k = P a(k) a^{-1}(0). Let

\[ r_k = r_{-k} = \begin{cases} P\,a(k)\,a^{-1}(0) & \text{if } k\in S;\\ 0 & \text{if } k\in\{0,\dots,M_{s-1}+N_s\}\setminus S. \end{cases} \]

We can represent the function f_0^{-1}(λ) in the form

\[ f_0^{-1}(\lambda) = \sum_{k=-(M_{s-1}+N_s)}^{M_{s-1}+N_s} r_k\, e^{ik\lambda}. \]

Since the sequence a(k), k ∈ S, is strictly positive, the sequence r_k, k = 0, 1, ..., M_{s-1}+N_s, is also strictly positive, and the function f_0^{-1}(λ) is positive, so it can be represented in the form [19]

\[ f_0^{-1}(\lambda) = \left|\sum_{k=0}^{M_{s-1}+N_s} \gamma_k\, e^{-ik\lambda}\right|^2, \quad \lambda\in[-\pi,\pi], \]

where γ_k = 0 for k ∈ {0, ..., M_{s-1}+N_s}\S. Hence, f_0(λ) is the spectral density of the autoregressive stochastic sequence of order M_{s-1}+N_s generated by the equation

\[ \sum_{k=0}^{M_{s-1}+N_s} \gamma_k\, \xi(n-k) = \sum_{l=0}^{s-1}\,\sum_{k=M_l}^{M_l+N_{l+1}} \gamma_k\, \xi(n-k) = \epsilon_n, \tag{17} \]

where ε_n is a “white noise” sequence. The minimax spectral characteristic h(f_0) of the optimal linear estimate of the functional A_sξ can be calculated by formula (5), where

\[ C_s(e^{i\lambda}) = \sum_{l=0}^{s-1}\,\sum_{k=M_l}^{M_l+N_{l+1}} c(k) e^{ik\lambda} = p_0 = P^{-1} a(0), \]

namely

\[ h(f_0) = \sum_{l=0}^{s-1}\,\sum_{k=M_l}^{M_l+N_{l+1}} a(k) e^{ik\lambda} - P^{-1}a(0) \sum_{k=-(M_{s-1}+N_s)}^{M_{s-1}+N_s} r_k\, e^{ik\lambda} = -\left(\sum_{k=1}^{N_1} a(k)\, e^{-ik\lambda} + \sum_{l=1}^{s-1}\,\sum_{k=M_l}^{M_l+N_{l+1}} a(k)\, e^{-ik\lambda}\right). \tag{18} \]

Summing up our reasoning, we come to the conclusion that the following theorem holds true.

Theorem 4.1
The least favourable spectral density in the class D_0^- for the optimal linear estimation of the functional A_sξ determined by a strictly positive sequence a(k), k ∈ S, is the spectral density of the autoregressive sequence (17), whose Fourier coefficients are r_k = r_{−k} = P a(k) a^{-1}(0), k ∈ S. The minimax spectral characteristic h(f_0) is given by formula (18).


Example 3. Consider the problem of the optimal linear estimation of the functional A_2ξ = 2ξ(0) + ξ(2), which depends on the unknown values ξ(0), ξ(2) of the stationary stochastic sequence {ξ(j) : j ∈ Z} from observations of the sequence at points of time Z\{0, 2}. The system of equations that determines the Fourier coefficients of the least favourable spectral density in the class D_0^- is of the form

\[ r_0\, p_0 = 2, \qquad r_2\, p_0 = 1. \]

We have r_0 = P, r_2 = r_{−2} = P/2. The least favourable spectral density is of the form

\[ f_0(\lambda) = \frac{1}{\bigl|x + y\, e^{2i\lambda}\bigr|^2}, \qquad x = \pm\sqrt{P/2}, \quad y = \pm\sqrt{P/2}. \]

The minimax spectral characteristic can be calculated by formula (18):

\[ h(f_0) = -e^{-2i\lambda}. \]
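A quick numerical confirmation of Example 3 (our sketch, not from the paper; the value of P is arbitrary): with r_0 = P and r_2 = P/2 the vector (2/P, 0) solves the system B_2 p = a, and the value of the problem is ⟨B_2^{-1} a, a⟩ = 4/P.

```python
import numpy as np

P = 2.0
S = [0, 2]
a = np.array([2.0, 1.0])                      # a(0) = 2, a(2) = 1
r = {0: P, 2: P / 2, -2: P / 2}               # least favourable coefficients
B = np.array([[r.get(k - j, 0.0) for j in S] for k in S])

c = np.linalg.solve(B, a)
print(np.allclose(c, [2.0 / P, 0.0]))         # True: c = (p_0, 0)
print(round(float(np.dot(c, a)), 6))          # 2.0 = 4/P
```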

5. Least favourable spectral densities in the class D_W

Consider the problem of the optimal estimation of the functional A_sξ which depends on the unknown values of a stationary stochastic sequence ξ(j) in the case where the spectral density of the sequence belongs to the set of spectral densities with restrictions on the moments of the function f^{-1}(λ). Let

\[ D_W = \left\{ f(\lambda)\ \middle|\ \frac{1}{2\pi}\int_{-\pi}^{\pi} f^{-1}(\lambda)\cos(w\lambda)\,d\lambda = r_w,\ w = 0, 1, \dots, W \right\}, \]

where r_w, w = 0, 1, ..., W, is a strictly positive sequence. There is an infinite number of functions in the class D_W [19], and the function

\[ f^{-1}(\lambda) = \sum_{w=-W}^{W} r_{|w|}\, e^{iw\lambda} > 0, \quad \lambda\in[-\pi,\pi], \]

determines one of them. To find solutions to the constrained optimization problem (15) for the set D_W of admissible spectral densities, we use the method of Lagrange multipliers and the equation

\[ \left|\sum_{l=0}^{s-1}\,\sum_{k=M_l}^{M_l+N_{l+1}} c(k) e^{ik\lambda}\right|^2 = \sum_{w=0}^{W} \alpha_w \cos(w\lambda) = \left|\sum_{w=0}^{W} p(w)\, e^{iw\lambda}\right|^2, \tag{19} \]

where α_w, w = 0, 1, ..., W, are the Lagrange multipliers and c(k), k ∈ S, are solutions to the equation \( B_s^0\vec{c}_s = \vec{a}_s \). Consider two cases: W ≥ M_{s-1}+N_s and W < M_{s-1}+N_s.

Let W ≥ M_{s-1}+N_s. In this case the given Fourier coefficients r_w define the matrix B_s⁰ and the optimization problem (14) is degenerate. Let p(M_{s-1}+N_s+1) = ... = p(W) = 0 and p(j) = 0, j ∉ S. The components p(j), j ∈ S, of the vector \( \vec{p}_s \) can be found from the equation \( B_s^0\vec{p}_s = \vec{a}_s \). Hence, relation (19) holds true. Thus every density f(λ) ∈ D_W is least favorable, and, in particular, the density of the autoregressive stochastic sequence

\[ f_0(\lambda) = 1\Bigm/\sum_{w=-W}^{W} r_{|w|}\, e^{iw\lambda} = 1\Bigm/\left|\sum_{k=0}^{W} \gamma_k\, e^{ik\lambda}\right|^2 \tag{20} \]

is least favorable, too.


Let W < Ms−1 + Ns . Then the matrix Bs is determined by the known rw , w ∈ S ∩ {0, . . . , W }, and the unknown rw , w ∈ S\ {0, . . . , W }, Fourier coefficients of the function f −1 (λ). The unknown coefficients p(k), k ∈ S ∩ {0, . . . , W }, and rw , w ∈ S\ {0, . . . , W }, can be found from the equation Bs ⃗p0s = ⃗as , with ⃗p0s = (p(0), . . . , p(W1 ), 0, . . . , 0), where W1 is determined from the relation {0, . . . , W1 } = {0, . . . , W } ∩ S. The last equation can be presented as the system of equations r0 p0 + r1 p1 + . . . + r−W1 pW1 = a(0); r1 p0 + r0 p0 + . . . + r−W1 +1 pW1 = a(1); ... rW1 p0 + rW1 −1 p0 + . . . + r0 pW1 = a(W1 ); ... rMs−1 +Ns p0 + r1 p1 + . . . + r0 pW1 = a(Ms−1 + Ns ).

From the first $W_1+1$ equations we can find the unknown coefficients $p(k)$, and from the remaining equations we find the Fourier coefficients $r_w$, $w \in S \setminus \{0,\dots,W\}$. If the sequence $r_w$, $w \in S$, constructed from the strictly positive sequence $r_w$, $w \in S \cap \{0,\dots,W\}$, and the calculated coefficients $r_w$, $w \in S \setminus \{0,\dots,W\}$, is also strictly positive, then the least favourable spectral density $f_0(\lambda)$ is determined by the Fourier coefficients $r_w$, $w \in S$, of the function $f_0^{-1}(\lambda)$:
$$f_0(\lambda) = 1\bigg/ \sum_{k=0}^{M_{s-1}+N_s} \left( r_k\, e^{ik\lambda} + r_{-k}\, e^{-ik\lambda} \right) = 1\bigg/ \left| \sum_{k=0}^{M_{s-1}+N_s} \gamma_k\, e^{ik\lambda} \right|^2. \qquad (21)$$
Let us summarize our results and present them in the form of a theorem.

Theorem 5.1
The least favourable spectral density in the class $D_W$ for the optimal linear estimate of the functional $A_s\xi$ in the case where $W \ge M_{s-1}+N_s$ is the spectral density (20) of the autoregressive stochastic sequence of order $W$ determined by the coefficients $r_w$, $w = 0,1,\dots,W$. In the case where $W < M_{s-1}+N_s$, if the solutions $r_w$, $w \in S \setminus \{0,\dots,W\}$, of the equation $B_s \vec{p}_s^{\,0} = \vec{a}_s$ together with the coefficients $r_w$, $w \in S \cap \{0,\dots,W\}$, form a strictly positive sequence, then the least favourable spectral density in $D_W$ is the density (21) of the autoregressive stochastic sequence of order $M_{s-1}+N_s$. The minimax spectral characteristic of the estimate is calculated by formula (11).
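The two-step procedure of Theorem 5.1 in the case $W < M_{s-1}+N_s$ (solve the leading Toeplitz block for the $p(k)$, then read the remaining Fourier coefficients off the lower rows) can be sketched numerically. All numbers below are assumed toy values, not data from the paper:

```python
import numpy as np

# Toy illustration of the case W < M_{s-1} + N_s (all numbers assumed):
# here M_{s-1} + N_s = 3 and W_1 = 2, so r_0, r_1, r_2 are given while
# p(0), p(1), p(2) and r_3 are unknown.
r_known = np.array([2.0, 0.5, 0.2])   # given moments r_0, r_1, r_2
a = np.array([1.0, 0.8, 0.5, 0.3])    # coefficients a(0), ..., a(3)

# Step 1: solve the leading (W_1+1) x (W_1+1) Toeplitz system B p = a[:3].
B = np.array([[r_known[abs(i - j)] for j in range(3)] for i in range(3)])
p = np.linalg.solve(B, a[:3])

# Step 2: row j = 3 of B_s p_s^0 = a_s reads
# r_3 p(0) + r_2 p(1) + r_1 p(2) = a(3), which yields the unknown r_3.
r3 = (a[3] - r_known[2] * p[1] - r_known[1] * p[2]) / p[0]

# The full 4x4 Toeplitz system is now satisfied by (p(0), p(1), p(2), 0).
r_full = np.append(r_known, r3)
B_full = np.array([[r_full[abs(i - j)] for j in range(4)] for i in range(4)])
```

If the resulting sequence $r_0,\dots,r_3$ is strictly positive, the density (21) built from it is least favourable; otherwise the hypothesis of the theorem fails for this data.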

6. Least favourable spectral densities in the class $D_v^u$

Consider the problem of the optimal estimation of the functional $A_s\xi$ which depends on the unknown values of a stationary stochastic sequence $\xi(j)$ in the case where the spectral density of the sequence belongs to the set of spectral densities
$$D_v^u = \left\{ f(\lambda)\ \middle|\ 0 \le v(\lambda) \le f(\lambda) \le u(\lambda),\ \frac{1}{2\pi}\int_{-\pi}^{\pi} f^{-1}(\lambda)\,d\lambda = P \right\},$$

where $v(\lambda)$, $u(\lambda)$ are given bounded spectral densities. Let the sequence $a(j)$, $j \in S$, that determines the functional $A_s\xi$ be strictly positive. To find solutions to the constrained optimization problem (15) for the set $D_v^u$ of admissible spectral densities we use the condition $0 \in \partial\Delta_D(f_0)$. It follows from this condition for $D = D_v^u$ that the Fourier coefficients of the function $f_0^{-1}$ satisfy both the equation $B_s^0 \vec{c}_s = \vec{a}_s$ and the equation
$$\left| \sum_{l=0}^{s-1} \sum_{k=M_l}^{M_l+N_{l+1}} \left( \left(B_s^0\right)^{-1} \vec{a}_s \right)_k e^{ik\lambda} \right|^2 = \psi_1(\lambda) + \psi_2(\lambda) + p^{-2},$$

INTERPOLATION PROBLEM WITH MISSING OBSERVATIONS

where $\psi_1(\lambda) \ge 0$ and $\psi_1(\lambda) = 0$ if $f_0(\lambda) \ge v(\lambda)$; $\psi_2(\lambda) \le 0$ and $\psi_2(\lambda) = 0$ if $f_0(\lambda) \le u(\lambda)$. Therefore, in the case where $v(\lambda) \le f_0(\lambda) \le u(\lambda)$, the function $f_0^{-1}(\lambda)$ is of the form
$$f_0^{-1}(\lambda) = \sum_{k=0}^{M_{s-1}+N_s} \left( r_k\, e^{ik\lambda} + r_{-k}\, e^{-ik\lambda} \right) = \left| \sum_{k=0}^{M_{s-1}+N_s} \gamma_k\, e^{ik\lambda} \right|^2,$$
where $r_k = r_{-k} = P\,a(k)\,a^{-1}(0)$. The least favourable density in the class $D_v^u$ is the density of the autoregressive stochastic sequence of order $M_{s-1}+N_s$ if the following inequality holds true:
$$v^{-1}(\lambda) \ge \sum_{k=0}^{M_{s-1}+N_s} \left( r_k\, e^{ik\lambda} + r_{-k}\, e^{-ik\lambda} \right) = \left| \sum_{k=0}^{M_{s-1}+N_s} \gamma_k\, e^{ik\lambda} \right|^2 \ge u^{-1}(\lambda), \quad \lambda \in [-\pi,\pi]. \qquad (22)$$
In the general case the least favourable density is of the form
$$f_0(\lambda) = \max\left\{ v(\lambda),\ \min\left\{ u(\lambda),\ p_0 \left| \sum_{l=0}^{s-1} \sum_{k=M_l}^{M_l+N_{l+1}} \left( \left(B_s^0\right)^{-1} \vec{a}_s \right)_k e^{ik\lambda} \right|^2 \right\} \right\}. \qquad (23)$$
The following theorem holds true.

Theorem 6.1
If the sequence $a(j)$, $j \in S$, is strictly positive and the coefficients $r_k = r_{-k} = P\,a(k)\,a^{-1}(0)$, $k \in S$, satisfy the inequality (22), then the least favourable spectral density in the class $D_v^u$ for the optimal linear estimate of the functional $A_s\xi$ is the density (17) of the autoregressive stochastic sequence of order $M_{s-1}+N_s$. The minimax spectral characteristic $h(f_0)$ of the estimate can be calculated by formula (18). If the inequality (22) is not satisfied, then the least favourable spectral density in $D_v^u$ is determined by relation (23) and the extremum condition (14). The minimax spectral characteristic of the estimate is calculated by formula (11).
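In the general case, formula (23) is a pointwise clipping of a candidate density between the bounds $v$ and $u$. A minimal sketch, with hypothetical coefficients standing in for $\big((B_s^0)^{-1}\vec{a}_s\big)_k$ and hypothetical bounds:

```python
import numpy as np

# Sketch of (23): clip the candidate p0 * |sum_k c_k e^{ik lam}|^2 between
# v(lam) and u(lam). The coefficients c_k stand in for ((B_s^0)^{-1} a_s)_k;
# all values below are hypothetical.
def least_favourable(lam, c, p0, v, u):
    poly = sum(ck * np.exp(1j * k * lam) for k, ck in c.items())
    candidate = p0 * np.abs(poly) ** 2
    return np.clip(candidate, v(lam), u(lam))

lam = np.linspace(-np.pi, np.pi, 201)
f0 = least_favourable(lam, {0: 1.0, 1: 0.5}, p0=0.5,
                      v=lambda s: 0.3 + 0 * s, u=lambda s: 1.5 + 0 * s)
```

The `np.clip` call realizes the nested max/min of (23) elementwise, so the returned density never leaves the band $[v(\lambda), u(\lambda)]$.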

7. Conclusions

In this article we describe methods of solution of the problem of the mean-square optimal linear estimation of the functional
$$A_s\xi = \sum_{l=0}^{s-1} \sum_{j=M_l}^{M_l+N_{l+1}} a(j)\,\xi(j), \quad M_l = \sum_{k=0}^{l} (N_k+K_k), \quad N_0 = K_0 = 0,$$
which depends on the unknown values of the stationary stochastic sequence $\xi(j)$. Estimates are based on observations of the sequence $\xi(j)$ at points $j \in \mathbb{Z} \setminus S$, where $S = \bigcup_{l=0}^{s-1} \{M_l, M_l+1, \dots, M_l+N_{l+1}\}$. We provide formulas for calculating the values of the mean-square error and the spectral characteristic of the optimal linear estimate of the functional in the case where the spectral density of the sequence $\xi(j)$ is exactly known. In the case where the spectral density is unknown, but a set of admissible spectral densities is given, the minimax approach is applied. We obtain formulas that determine the least favourable spectral densities and the minimax spectral characteristics of the optimal linear estimates of the functional $A_s\xi$ for concrete classes of admissible spectral densities. It is shown that the spectral densities of autoregressive stochastic sequences are the least favourable in certain classes of spectral densities.

REFERENCES

1. P. Bondon, Influence of missing values on the prediction of a stationary time series, Journal of Time Series Analysis, vol. 26, no. 4, pp. 519–525, 2005.
2. P. Bondon, Prediction with incomplete past of a stationary process, Stochastic Processes and their Applications, vol. 98, pp. 67–76, 2002.
3. I. I. Dubovets'ka, O. Yu. Masyutka, and M. P. Moklyachuk, Interpolation of periodically correlated stochastic sequences, Theory of Probability and Mathematical Statistics, vol. 84, pp. 43–56, 2012.
4. I. I. Dubovets'ka, and M. P. Moklyachuk, Filtration of linear functionals of periodically correlated sequences, Theory of Probability and Mathematical Statistics, vol. 86, pp. 51–64, 2013.
5. I. I. Dubovets'ka, and M. P. Moklyachuk, Extrapolation of periodically correlated processes from observations with noise, Theory of Probability and Mathematical Statistics, vol. 88, pp. 43–55, 2013.
6. I. I. Dubovets'ka, and M. P. Moklyachuk, Minimax estimation problem for periodically correlated stochastic processes, Journal of Mathematics and System Science, vol. 3, no. 1, pp. 26–30, 2013.
7. I. I. Dubovets'ka, and M. P. Moklyachuk, On minimax estimation problems for periodically correlated stochastic processes, Contemporary Mathematics and Statistics, vol. 2, no. 1, pp. 123–150, 2014.
8. J. Franke, Minimax robust prediction of discrete time series, Z. Wahrscheinlichkeitstheor. Verw. Gebiete, vol. 68, pp. 337–364, 1985.
9. J. Franke and H. V. Poor, Minimax-robust filtering and finite-length robust predictors, Robust and Nonlinear Time Series Analysis, Lecture Notes in Statistics, Springer-Verlag, vol. 26, pp. 87–126, 1984.
10. I. I. Gikhman and A. V. Skorokhod, The theory of stochastic processes. I., Berlin: Springer, 2004.
11. I. I. Golichenko and M. P. Moklyachuk, Estimates of functionals of periodically correlated processes, Kyiv: NVP "Interservis", 2014.
12. U. Grenander, A prediction problem in game theory, Arkiv för Matematik, vol. 3, pp. 371–379, 1957.
13. E. J. Hannan, Multiple time series, Wiley Series in Probability and Mathematical Statistics, New York: John Wiley & Sons, 1970.
14. A. D. Ioffe, and V. M. Tihomirov, Theory of extremal problems, Studies in Mathematics and its Applications, Vol. 6, Amsterdam: North-Holland Publishing Company, 1979.
15. K. Karhunen, Über lineare Methoden in der Wahrscheinlichkeitsrechnung, Annales Academiae Scientiarum Fennicae, Ser. A I, vol. 37, 1947.
16. Y. Kasahara, M. Pourahmadi, and A. Inoue, Duals of random vectors and processes with applications to prediction problems with missing values, Statistics & Probability Letters, vol. 79, no. 14, pp. 1637–1646, 2009.
17. S. A. Kassam and H. V. Poor, Robust techniques for signal processing: A survey, Proceedings of the IEEE, vol. 73, no. 3, pp. 433–481, 1985.
18. A. N. Kolmogorov, Selected works by A. N. Kolmogorov. Vol. II: Probability theory and mathematical statistics, Ed. by A. N. Shiryayev, Mathematics and Its Applications, Soviet Series, 26, Dordrecht: Kluwer Academic Publishers, 1992.
19. M. G. Krein and A. A. Nudelman, The Markov moment problem and extremal problems, Translations of Mathematical Monographs, Vol. 50, Providence, RI: American Mathematical Society, 1977.
20. M. M. Luz and M. P. Moklyachuk, Interpolation of functionals of stochastic sequences with stationary increments, Theory of Probability and Mathematical Statistics, vol. 87, pp. 117–133, 2012.
21. M. M. Luz, and M. P. Moklyachuk, Minimax-robust filtering problem for stochastic sequence with stationary increments, Theory of Probability and Mathematical Statistics, vol. 89, pp. 127–142, 2014.
22. M. Luz, and M. Moklyachuk, Minimax-robust filtering problem for stochastic sequences with stationary increments and cointegrated sequences, Statistics, Optimization & Information Computing, vol. 2, no. 3, pp. 176–199, 2014.
23. M. Luz, and M. Moklyachuk, Minimax interpolation problem for random processes with stationary increments, Statistics, Optimization & Information Computing, vol. 3, no. 1, pp. 30–41, 2015.
24. M. Luz, and M. Moklyachuk, Minimax-robust prediction problem for stochastic sequences with stationary increments and cointegrated sequences, Statistics, Optimization & Information Computing, vol. 3, no. 2, pp. 160–188, 2015.
25. M. P. Moklyachuk, Minimax extrapolation and autoregressive-moving average processes, Theory of Probability and Mathematical Statistics, vol. 41, pp. 77–84, 1990.
26. M. P. Moklyachuk, Stochastic autoregressive sequences and minimax interpolation, Theory of Probability and Mathematical Statistics, vol. 48, pp. 95–103, 1994.
27. M. P. Moklyachuk, Robust procedures in time series analysis, Theory of Stochastic Processes, vol. 6, no. 3–4, pp. 127–147, 2000.
28. M. P. Moklyachuk, Game theory and convex optimization methods in robust estimation problems, Theory of Stochastic Processes, vol. 7, no. 1–2, pp. 253–264, 2001.
29. M. P. Moklyachuk, Robust estimations of functionals of stochastic processes, Kyiv: Kyiv University, 2008.
30. M. Moklyachuk and O. Masyutka, Minimax-robust estimation technique for stationary stochastic processes, LAP LAMBERT Academic Publishing, 2012.
31. M. Pourahmadi, A. Inoue, and Y. Kasahara, A prediction problem in L2(w), Proceedings of the American Mathematical Society, vol. 135, no. 4, pp. 1233–1239, 2007.
32. B. N. Pshenichnyj, Necessary conditions of an extremum, Pure and Applied Mathematics, 4, New York: Marcel Dekker, 1971.
33. R. T. Rockafellar, Convex analysis, Princeton University Press, 1997.
34. Yu. A. Rozanov, Stationary stochastic processes, San Francisco: Holden-Day, 1967.
35. H. Salehi, Algorithms for linear interpolator and interpolation error for minimal stationary stochastic processes, The Annals of Probability, vol. 7, no. 5, pp. 840–846, 1979.
36. K. S. Vastola and H. V. Poor, An analysis of the effects of spectral uncertainty on Wiener filtering, Automatica, vol. 28, pp. 289–293, 1983.
37. N. Wiener, Extrapolation, interpolation and smoothing of stationary time series. With engineering applications, Cambridge, MA: The M.I.T. Press, 1966.
38. A. M. Yaglom, Correlation theory of stationary and related random functions. Vol. 1: Basic results, Springer Series in Statistics, New York: Springer-Verlag, 1987.
39. A. M. Yaglom, Correlation theory of stationary and related random functions. Vol. 2: Supplementary notes and references, Springer Series in Statistics, New York: Springer-Verlag, 1987.
