ISSN 1063-4541, Vestnik St. Petersburg University. Mathematics, 2013, Vol. 46, No. 3, pp. 118–125. © Allerton Press, Inc., 2013. Original Russian Text © N.O. Amelina, 2013, published in Vestnik Sankt-Peterburgskogo Universiteta. Seriya 1. Matematika, Mekhanika, Astronomiya, 2013, No. 3, pp. 12–20.
MATHEMATICS
Local Voting Protocol for Decentralized Load Balancing of Network with Switched Topology and Noise in Measurements

N. O. Amelina
St. Petersburg State University
e-mail: [email protected]
Received March 28, 2013
Abstract—The applicability of the local voting protocol to the nonstationary problem of load balancing in a decentralized network with switched topology and measurement noise is investigated. The theoretical results obtained are illustrated by simulation models. The system is considered both with and without redistribution of tasks. It is shown that an adaptive multi-agent strategy with redistribution of tasks among neighbors handles the distribution of tasks better than a strategy in which tasks are sent to random nodes and are not redistributed after arrival.

Keywords: local voting protocol, decentralized load balancing, switched topology, noise, stochastic network.

DOI: 10.3103/S1063454113030035
1. INTRODUCTION

Nowadays, distributed network systems with parallel calculations are used increasingly often. It is important to solve efficiently the problem of distributing packages of tasks among the nodes in such systems. Similar problems arise in production, transport, sensor, and other types of networks. Tasks may be executed by the node (agent) that receives them or redistributed among the nodes (agents). The problem of load balancing in a network implies that equal loads are maintained for all nodes (agents) of the network over time.

The number of research papers dedicated to load balancing in networks has increased recently, which indicates the scientific relevance of the problem. The majority of these papers are in the field of computer science and system programming. Generally, possible noise and failures of links between the nodes are not considered in these papers. This assumption may be quite realistic for the case of one computer with multiple cores. However, the role of noise and link failures is more significant when network systems are considered.

The local voting protocol based on a stochastic approximation algorithm was validated in [1] for a network of agents with switched topology and measurement noise in the case of linear dynamics. Stochastic gradient-like algorithms were also previously used in similar problems [2–5]. Stochastic approximation algorithms with decreasing step-size are inapplicable under external disturbances and when agents are nonstationary (due to the arrival of new tasks, changes in productivity, etc.). The efficiency of stochastic approximation algorithms with constant step-size in the context of nonstationary quality functionals, namely, average risk, is investigated in [6–9]. Their applicability to load balancing of the nodes of a centralized computation network, given that current noisy information on queue length and node productivity is available, was investigated in [10, 11].

Based on the generalized results of earlier papers, the authors considered network systems for parallel computations, in which tasks of the same type are sent to various nodes and redistributed among them according to the local voting protocol with constant step-size, under various assumptions [12–17]. The load-balancing problem was restated in terms of achieving consensus in a dynamic network. An analysis of the studied stochastic dynamic system was performed using the Derevitskii–Fradkov–Ljung (DFL) scheme [18–20] via reduction to the corresponding averaged continuous model. In this case, the time of
reaching approximate consensus was determined by the topology of the obtained averaged continuous model. The obtained theoretical results were applied to investigating the applicability of local voting protocols for load balancing in a decentralized network with switched topology, measurement noise, and delays in a stationary case in which all tasks arrived at the system at the initial instant [14]. The present paper focuses on the applicability of the local voting protocol for load balancing in a decentralized network with switched topology and measurement noise in a nonstationary case.

2. STATEMENT OF THE PROBLEM

We consider a network composed of $n$ agents that execute tasks of the same type received in parallel. The tasks arrive at the system at different discrete instants $t = 0, 1, \ldots, T$ and at various nodes. The redistribution of tasks among agents based on feedback is allowed in the network. Let $i = 1, \ldots, n$ be the number of an agent and $N = \{1, \ldots, n\}$ the set of agents in the network. At any time $t$, the state of agent $i \in N$ is described by two characteristics:

• $q_t^i$, the queue length of the atomic elementary tasks of node $i$ at time $t$;
• $p_t^i$, the productivity of node $i$ at time $t$.

Here and below, the superscript $i$ indicates the corresponding number of the node. Under quite common assumptions, the dynamic model of the system may be considered to be described by the following equations:

$q_{t+1}^i = q_t^i - p_t^i + z_t^i + u_t^i, \quad i \in N, \quad t = 0, 1, \ldots, T$,   (1)

where $u_t^i \in \mathbb{R}$ are the controls (tasks redirected to node $i$ at time $t$) that may (and should) be selected and $z_t^i$ is the size of the new task received by node $i$ at time $t$. Equal loads should be maintained for all nodes of the network.

The following notation and terminology of graph theory will be used below. A dynamic network of $n$ agents is a set of dynamic systems (agents) that interact according to the graph of information connections. The graph $(N, E)$ is defined by a set of nodes $N$ and a set of edges $E$. We assign weights $a^{i,j} > 0$ to all edges $(j, i) \in E$. The graph may be represented by an adjacency matrix (or connectivity matrix) $A = [a^{i,j}]$ with weights $a^{i,j} > 0$ if $(j, i) \in E$ and $a^{i,j} = 0$ otherwise. The case $a^{i,i} = 0$ will be considered. We denote the graph represented by the adjacency matrix $A$ by $\mathcal{G}_A$. We define the weighted in-degree of node $i$ as the sum of the $i$th row of the matrix $A$: $d^i(A) = \sum_{j=1}^{n} a^{i,j}$. Based on the in-degrees, we define the diagonal matrix of weighted in-degrees of the nodes of the graph, $D(A) = \mathrm{diag}\{d^i(A)\}$, and the Laplacian of the graph, $\mathcal{L}(A) = D(A) - A$. It should be noted that the row sums of the Laplacian are zero. We denote the maximum in-degree of the graph $\mathcal{G}_A$ by $d_{\max}(A)$.

We assume that the structure of links of the dynamic network is modeled by a sequence of directed graphs $\{(N, E_t)\}_{t \ge 0}$, where $E_t \subseteq E$ changes over time. We denote the corresponding adjacency matrices by $A_t$; $N_t^i = \{j : a_t^{i,j} > 0\}$ is the set of neighbors of node $i \in N$ at time $t$, and $E_{\max} = \{(j, i) : \sup_{t \ge 0} a_t^{i,j} > 0\}$ is the maximum set of communication links.

We assume that the following condition is satisfied:

A1: $p_t^i \ge p_{\min} > 0$, $\forall i \in N$, $t = 0, 1, \ldots$.

If we take $x_t^i = q_t^i / p_t^i$ as the state of node $i$ of the dynamic network at time $t = 0, 1, \ldots, T$, then the control goal, i.e., achieving consensus in the network, corresponds to the optimal task distribution among the nodes [14]. In the introduced notation, the dynamics equations of each agent may be rewritten as

$x_{t+1}^i = x_t^i + \tilde{u}_t^i + f_t^i, \quad i \in N$,   (2)

where $\tilde{u}_t^i = u_t^i / p_t^i$ are the normalized controls and $f_t^i = -1 + z_t^i / p_t^i$ are the disturbances.

To maintain equal loads for all nodes of the network (in order to increase the total capacity of the system and reduce the implementation time of tasks), it is natural to use a protocol of task redistribution over time.
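For readers who prefer code, the following short Python sketch (an informal illustration, not part of the original paper) implements the queue dynamics (1), the normalized state $x_t^i = q_t^i / p_t^i$ from (2), and the graph quantities $D(A)$, $\mathcal{L}(A)$, and $d_{\max}(A)$ introduced above; all function names are ad hoc.

```python
import numpy as np

def laplacian(A):
    """Graph Laplacian L(A) = D(A) - A for an adjacency matrix A with zero
    diagonal; by construction the row sums of the result are zero."""
    return np.diag(A.sum(axis=1)) - A

def max_in_degree(A):
    """Maximum weighted in-degree d_max(A) = max_i sum_j a^{i,j}."""
    return A.sum(axis=1).max()

def queue_step(q, p, z, u):
    """One step of dynamics (1): q_{t+1}^i = q_t^i - p_t^i + z_t^i + u_t^i."""
    return q - p + z + u

def normalized_state(q, p):
    """Normalized node states x_t^i = q_t^i / p_t^i used in (2)."""
    return q / p
```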
We assume that, to form its control strategy $u_t^i$, each node $i \in N$ uses noisy measurements of its own state,

$y_t^{i,i} = x_t^i + w_t^{i,i}$,   (3)

and, if the set $N_t^i$ is not empty, noisy measurements of the states of its neighbors,

$y_t^{i,j} = x_t^j + w_t^{i,j}, \quad j \in N_t^i$,   (4)

where $w_t^{i,i}$ and $w_t^{i,j}$ denote measurement noise.
3. PROTOCOL OF TASK REDISTRIBUTION

The properties of the control strategy called the "local voting protocol" for the problem of load balancing in a network were studied in [13, 16]. In this algorithm, the control of each node is defined by a weighted sum of differences between the data on the node's own state and the data on the states of its neighbors:

$u_t^i = \gamma \sum_{j \in \bar{N}_t^i} b_t^{i,j} (y_t^{i,j} - y_t^{i,i})$,   (5)

where $\gamma > 0$ is the step-size of the control protocol, $\bar{N}_t^i \subset N_t^i$, and $b_t^{i,j} > 0$ for all $j \in \bar{N}_t^i$. We set $b_t^{i,j} = 0$ for all other pairs $(i, j)$ and define the matrices $B_t = [b_t^{i,j}]$. It should be noted that this protocol differs from the frequently encountered one in which the step-size parameter of the control protocol $\gamma$ is selected differently for various $i$ (e.g., $\gamma^i = 1/d^i(B_t)$ [19]).

The dynamics of the closed-loop system with protocol (5) are

$x_{t+1}^i = x_t^i + \gamma \sum_{j \in \bar{N}_t^i} b_t^{i,j} (y_t^{i,j} - y_t^{i,i}) + f_t^i, \quad i \in N$.   (6)

We write expression (6) in vector-matrix form as

$x_{t+1} = x_t - \gamma \mathcal{L}(B_t) x_t + \gamma w_t + f_t$;   (7)

here $x_t$, $w_t$, and $f_t$ denote the vectors composed of the components $x_t^1, \ldots, x_t^n$; $\sum_{j \in \bar{N}_t^1} b_t^{1,j} (w_t^{1,j} - w_t^{1,1}), \ldots, \sum_{j \in \bar{N}_t^n} b_t^{n,j} (w_t^{n,j} - w_t^{n,n})$; and $f_t^1, \ldots, f_t^n$, respectively.
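A minimal sketch of one step of the closed-loop dynamics (6) under the local voting protocol (5), using the noisy observations (3) and (4), may be helpful here. It is written under illustrative assumptions (Gaussian measurement noise, a fixed directed ring for $B_t$, ad hoc parameter values) that are not prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_voting_step(x, B, gamma, f, sigma_w):
    """One step of the closed-loop system (6).

    x       -- current normalized states x_t^i, shape (n,)
    B       -- weight matrix B_t = [b_t^{i,j}] (zero where node i does not poll j)
    gamma   -- step-size of the control protocol
    f       -- disturbances f_t^i, shape (n,)
    sigma_w -- standard deviation of the measurement noise (illustrative choice)
    """
    n = x.size
    # noisy observations (3)-(4): y_t^{i,j} = x_t^j + w_t^{i,j}, y_t^{i,i} = x_t^i + w_t^{i,i}
    Y = x[None, :] + sigma_w * rng.standard_normal((n, n))
    y_self = x + sigma_w * rng.standard_normal(n)
    # local voting protocol (5): u_t^i = gamma * sum_j b_t^{i,j} (y_t^{i,j} - y_t^{i,i})
    u = gamma * (B * (Y - y_self[:, None])).sum(axis=1)
    return x + u + f

# toy usage: five nodes on a directed ring with unit weights
n = 5
B = np.roll(np.eye(n), 1, axis=1)            # node i polls node i+1 (mod n)
x = rng.uniform(0.0, 10.0, size=n)
for _ in range(50):
    f = rng.exponential(1.0, size=n) - 1.0   # stand-in for f_t^i = -1 + z_t^i / p_t^i
    x = local_voting_step(x, B, gamma=0.3, f=f, sigma_w=0.1)
```

For this ring, every row of $B$ sums to one, so $d_{\max}(B) = 1$ and the step-size $\gamma = 0.3$ satisfies condition (8) discussed in the next section.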
4. ASSUMPTIONS ON STOCHASTIC PROPERTIES

Assume that $(\Omega, \mathcal{F}, P)$ is the underlying probability space and that the following conditions are satisfied.

A2: (a) For all $i \in N$ and $j \in N_t^i \cup \{i\}$, the measurement noises $w_t^{i,j}$ are centered, independent random variables distributed under the same law with bounded variance: $E(w_t^{i,j})^2 \le \sigma_w^2$. Here and below, $E$ denotes the mathematical expectation.

(b) For all $i \in N$ and $j \in N_{\max}^i$, the appearances of the variable edges $(j, i)$ in the graph $\mathcal{G}_{A_t}$ are independent random events (i.e., the matrices $A_t$ are independent and identically distributed random matrices). For all $i \in N$ and $j \in N_t^i$, the weights $b_t^{i,j}$ in the control protocol are independent random variables with expectations $E b_t^{i,j} = b^{i,j}$ and bounded variances $E(b_t^{i,j} - b^{i,j})^2 \le \sigma_b^2$.

(c) For all $i \in N$ and $t = 0, 1, \ldots$, the values $f_t^i$ in (2) are random, independent, and identically distributed with expectation $E f_t^i = \bar{f}$ and variance $E(f_t^i - \bar{f})^2 = \sigma_f^2$.

In addition, all these random variables are mutually independent.

A3: The graph $\mathcal{G}_{A_{\max}}$ determined by the adjacency matrix $A_{\max}$ with elements

$a_{\max}^{i,j} = b^{i,j}, \quad i, j \in N$,

has a spanning tree, and $a_{\max}^{i,j} > 0$ for any edge $(j, i) \in E_{\max}$.

A4: The step-size parameter of the control protocol $\gamma > 0$ satisfies the conditions

$\gamma \le \dfrac{1}{d_{\max}(A_{\max})}$   (8)

and

$\lambda_{\max}(Q)\,\gamma \le \mathrm{Re}(\lambda_2(A_{\max}))$,   (9)

where $\mathrm{Re}(\lambda_2(A_{\max}))$ is the real part of the second eigenvalue (in terms of absolute value) of the matrix $A_{\max}$ and $\lambda_{\max}(Q)$ is the maximum eigenvalue of the matrix $Q = E(\mathcal{L}(A_{\max}) - \mathcal{L}(B_t))^{\mathrm T}(\mathcal{L}(A_{\max}) - \mathcal{L}(B_t))$.

It should be noted that $\mathcal{L}(A_{\max}) = E\mathcal{L}(B_t)$ holds by definition and due to property A2(b). In addition, $0 < \mathrm{Re}(\lambda_2(A_{\max})) < 1$ when condition A3 is satisfied [12].
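Conditions (8) and (9) can be verified numerically for a given weight model. The sketch below is only an illustration and rests on two explicit assumptions: $\lambda_2(A_{\max})$ is read as the second eigenvalue, ordered by absolute value, of the Laplacian $\mathcal{L}(A_{\max})$ (a common convention in the consensus literature), and $Q$ is estimated by Monte Carlo over samples of $B_t$ supplied by a user-defined `sample_Bt()`; neither the sampler nor the function names come from the paper.

```python
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

def lambda2_real(A_max):
    """Real part of the second eigenvalue (ordered by absolute value) of
    L(A_max) -- an assumed reading of lambda_2(A_max) in condition (9)."""
    eig = np.linalg.eigvals(laplacian(A_max))
    return eig[np.argsort(np.abs(eig))][1].real

def estimate_Q(A_max, sample_Bt, n_samples=2000):
    """Monte Carlo estimate of Q = E (L(A_max)-L(B_t))^T (L(A_max)-L(B_t))."""
    LA = laplacian(A_max)
    Q = np.zeros(LA.shape)
    for _ in range(n_samples):
        d = LA - laplacian(sample_Bt())
        Q += d.T @ d
    return Q / n_samples

def satisfies_A4(gamma, A_max, sample_Bt):
    """Check the step-size conditions (8) and (9) for a candidate gamma."""
    d_max = A_max.sum(axis=1).max()
    lam_max_Q = np.linalg.eigvalsh(estimate_Q(A_max, sample_Bt)).max()
    return gamma <= 1.0 / d_max and lam_max_Q * gamma <= lambda2_real(A_max)
```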
5. THE MAIN RESULTS

Definition. The nodes reach asymptotic mean-square ε-consensus if $E|x_0^i|^2 < \infty$, $i \in N$, and there is a sequence $\{x_t^*\}$ such that $\lim_{t \to \infty} E|x_t^i - x_t^*|^2 \le \varepsilon$ for all $i \in N$.

Here and below, $\|\cdot\|$ denotes the Euclidean norm of a vector. Let $x_0^*$ be the average value of the initial data,

$x_0^* = \dfrac{1}{n}\sum_{i=1}^{n} x_0^i$,

and let $\{x_t^*\}$ be the trajectory of the averaged system

$x_{t+1}^* = x_t^* + \bar{f}$,   (10)

where $\bar{f}$ is the average value from condition A2(c).

Theorem 1. If conditions A1–A4 are satisfied, then the following estimate holds for the trajectories of systems (7) and (10):

$E\|x_{t+1} - x_{t+1}^*\mathbf{1}\|^2 \le \dfrac{\Delta}{\rho} + (1 - \rho)^t \Big( \|x_0 - x_0^*\mathbf{1}\|^2 - \dfrac{\Delta}{\rho} \Big)$,   (11)

where $\mathbf{1}$ is the vector of ones,

$\rho = \gamma\,\mathrm{Re}(\lambda_2(A_{\max})) - \gamma^2 \lambda_{\max}(Q)$,

$\Delta = 2\sigma_w^2 \gamma^2 \Big( n^2 \sigma_b^2 + \sum_{i=1}^{n} \sum_{j=1}^{n} (b^{i,j})^2 \Big) + n\sigma_f^2$,

i.e., asymptotic mean-square ε-consensus is achieved in system (7) for $E|x_0^i|^2 < \infty$, $i \in N$, with

$\varepsilon = \dfrac{\Delta}{\rho}$.

Proof. Based on these definitions, we obtain

$x_{t+1}^*\mathbf{1} = x_t^*\mathbf{1} + \bar{f}\,\mathbf{1}$,   (12)

and, for the difference between the trajectories of systems (7) and (10),

$D_{t+1} = x_{t+1} - x_{t+1}^*\mathbf{1} = x_t - \gamma \mathcal{L}(B_t)x_t + \gamma w_t + f_t - x_t^*\mathbf{1} - \bar{f}\,\mathbf{1} = D_t - \gamma \mathcal{L}(B_t)D_t + \gamma w_t + f_t - \bar{f}\,\mathbf{1}$,

since the vector $\mathbf{1}$ is the eigenvector of the Laplacian matrix $\mathcal{L}(B_t)$ corresponding to the zero eigenvalue: $\mathcal{L}(B_t)\mathbf{1} = 0$. Then, after adding and subtracting $\gamma \mathcal{L}(A_{\max})D_t$, we obtain

$D_{t+1} = (I - \gamma \mathcal{L}(A_{\max}))D_t + \gamma(\mathcal{L}(A_{\max}) - \mathcal{L}(B_t))D_t + \gamma w_t + f_t - \bar{f}\,\mathbf{1}$,

where $I$ is the identity matrix of the corresponding dimension.
i, j
i, j
Let Ᏺt be the σalgebra of probabilistic events generated by random elements x 0 , w 0 , w 1 , …, w t – 1 , i
i
i
i, j
i
i, j
i, j
f 0 , f 1 , …, f t – 1 , b 0 , b 1 , …, b t , i, j ∈ N, A0, …, At. We consider the conditional expectation of the squared norm Dt + 1 as follows: E Ᏺt D t + 1
2
T
= ( I – γ ᏸ ( A max ) )D t + 2D t ( I – γ ᏸ ( A max ) ) ( γ ( ᏸ ( A max )D t – ᏸ ( B t ) )D t
2
T
T
+ 2D t ( I – γ ᏸ ( B t ) ) ( γE Ᏺt w t + E Ᏺt ( f t – f 1 ) ) + 2γE Ᏺt w t ( f t – f 1 ) T
T
T
(13) 2
+ γ D t ( ᏸ ( A max ) – ᏸ ( B t ) ) ( ᏸ ( A max ) – ᏸ ( B t ) ) + γ E Ᏺt w t + E Ᏺt f t – f 1 . 2
T
2
2
Since condition A2c is fulfilled and ft does not depend on σalgebra Ᏺt, we have 2
2
= E ( f t – f 1 ) = 0,
E Ᏺt f t – f 1
2
E Ᏺt f t – f 1
2
= E ft – f 1
= nσ f .
(14)
i, j
Since condition A2a is fulfilled and w t , i, j ∈ N do not depend on ft, σalgebra Ᏺt are mutually indepen dent, and we obtain
∑b
E Ᏺt
i, j i, i t ( wt
i, j
1
∑b
i, j i, i t ( wt
– w t ) = 0,
i, j
i, j i, i t E ( wt
– w t )E ( f t – f 1 ) = 0
(15)
1
j ∈ Nt
E Ᏺt
i, j i, i t E ( wt
∑b
– wt ) =
j ∈ Nt i, j
∑b
– wt ) ( ft – f 1 ) =
1
i, j
(16)
1
j ∈ Nt
j ∈ Nt
and ⎛ E Ᏺt ⎜ ⎝
∑
j∈
i, j i, i bt ( wt
–
i, j ⎞ w t )⎟
2
⎠
1 Nt
∑ (b
=
j∈
i, j 2 i, i 2 t ) ( E ( wt )
i, j 2
2
+ E ( w t ) ) = 2σ w
1 Nt
∑ (b
j∈
i, j 2 t ) .
(17)
i Nt
Taking into account the derived relations (14)–(17) and with b t indicating the vector composed of components
∑
1, j 2
j∈
1 Nt
E Ᏺt D t + 1
( b t ) , …, 2
∑
n, j 2
j∈
n Nt
( b t ) , we deduce the following from (13): 2
T
= ( I – γ ᏸ ( A max ) )D t + 2D t ( I – γ ᏸ ( A max ) ) ( γ ( ᏸ ( A max )D t – ᏸ ( B t ) )D t T
T
+ γ D t ( ᏸ ( A max ) – ᏸ ( B t ) ) ( ᏸ ( A max ) – ᏸ ( B t ) ) + 2σ w γ b t + nσ f . 2
T
2 2
2
˜
Let Ᏺ t be the σalgebra of probability of events, which occurred before the instant t, generated by all i
i, j
i, j
i, j
i
i
i, j
i
i, j
i, j
random elements x 0 , w 0 , w 1 , …, w t – 1 , f 0 , f 1 , …, f t – 1 , b 0 , b 1 , …, b t – 1 , i, j ∈ N, A0, …, At – 1; we con sider the conditional expectations of both parts of the latter relation. Due to the stochastic properties of
˜
the uncertainties of A2b and the independence of Bt and b t from σalgebra Ᏺ t , we obtain E Ᏺ˜ t D t + 1
2
2
= ( I – γ ᏸ ( A max ) )D t + γ D t QD t + 2σ w γ E Ᏺ˜ t b t + nσ f 2
2
≤ ( 1 – γRe ( λ 2 ( A max ) ) + γ λ max ( Q ) ) D t +
2
= ( 1 – ρ ) Dt +
2 2⎛ 2 2 2σ w γ ⎜ n σ b
⎝
T
2 2
2 2⎛ 2 2 2σ w γ ⎜ n σ b
⎝
n
+
2
n
∑ ∑ (b
i = 1j = 1
i, j 2⎞
2
) ⎟ + nσ f ⎠
2 2 i, j 2⎞ ( b ) ⎟ + nσ f = ( 1 – ρ ) D t + Δ. ⎠ i = 1j = 1 n
+
2
n
∑∑
When we turn to unconditional expectations, we obtain the following estimates: 2
2
E D t + 1 ≤ ( 1 – ρ )E D t + Δ. Inequality (11), which is the first part of conclusion of theorem 1, follows from these estimates based on lemma 1 (see section 2 in [21]). VESTNIK ST. PETERSBURG UNIVERSITY. MATHEMATICS
The second conclusion, on the asymptotic mean-square ε-consensus, is derived from inequality (11) as $t \to \infty$, since it follows from condition A4 that $|1 - \rho| < 1$ and, therefore, the second term in (11) tends to zero exponentially.
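Before turning to the simulations, the bound of Theorem 1 is easy to evaluate numerically. The sketch below (not a computation from the paper; every concrete number is a placeholder) returns $\rho$, $\Delta$, and the asymptotic consensus level $\varepsilon = \Delta/\rho$ for user-supplied values of $\gamma$, $\mathrm{Re}(\lambda_2(A_{\max}))$, $\lambda_{\max}(Q)$, the mean weights $b^{i,j}$, and the variances.

```python
import numpy as np

def consensus_bound(gamma, re_lambda2, lambda_max_Q, b_mean,
                    sigma_w2, sigma_b2, sigma_f2):
    """rho, Delta and epsilon = Delta / rho from Theorem 1.

    b_mean   -- matrix of mean weights b^{i,j}, shape (n, n)
    sigma_w2 -- bound sigma_w^2 on the measurement-noise variance
    sigma_b2 -- bound sigma_b^2 on the weight variance
    sigma_f2 -- variance sigma_f^2 of the disturbances f_t^i
    """
    n = b_mean.shape[0]
    rho = gamma * re_lambda2 - gamma**2 * lambda_max_Q
    delta = (2.0 * sigma_w2 * gamma**2 * (n**2 * sigma_b2 + (b_mean**2).sum())
             + n * sigma_f2)
    return rho, delta, delta / rho

# placeholder parameters, chosen only to show the call
b_mean = 0.5 * (np.ones((10, 10)) - np.eye(10))
rho, delta, eps = consensus_bound(gamma=0.1, re_lambda2=0.8, lambda_max_Q=2.0,
                                  b_mean=b_mean, sigma_w2=0.01,
                                  sigma_b2=0.01, sigma_f2=0.25)
```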
6. SIMULATION MODELING

As an example, we consider an open queueing network of 1024 agents whose nodes are simulated as servers of a queueing system. It is assumed that the average time between events in the input flow is exponentially distributed with parameter $d_{in} = 1/3000$ and that the normalized completion times are exponentially distributed with parameter $d_p = 1$ (the normalized completion time is the time required to execute a task on a node with productivity $p = 1$). The number of arriving tasks is $10^6$. The node that receives each subsequent task is selected at random according to a uniform distribution over the 1024 nodes. The agents are connected in a circle; in addition, $n$ random links are established between the agents at each iteration and are rearranged over time. An example of such a network is presented in Fig. 1.

Fig. 1. Example of network.

We consider the case in which tasks are received at different instants within the interval 1–2000. Typical simulation results, in which tasks are sent to random nodes at the moment of arrival, are shown in Figs. 2 and 3. Solid lines correspond to the implementation with redistribution of tasks based on the local voting protocol, and dashed lines correspond to the case without redistribution. It can be seen that the adaptive multi-agent strategy with redistribution of tasks among linked neighbors handles the distribution of tasks significantly better than the strategy without redistribution.

Fig. 2. Dependence of the quantity of tasks in queue on time.

Fig. 3. Average deviations of load from the average level in the network.
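A compressed sketch of an experiment of this kind is given below: random arrivals assigned to uniformly random nodes, a ring topology with extra random links re-drawn at every step, and redistribution by the local voting protocol. Only the ring-plus-random-links structure and the unit productivities mirror the text; the network size, arrival rate, noise level, step-size, and horizon are scaled-down placeholder values (the paper uses 1024 nodes and $10^6$ tasks), so the sketch shows the shape of the experiment rather than reproducing it.

```python
import numpy as np

rng = np.random.default_rng(1)

def ring_plus_random_links(n, extra):
    """Directed ring adjacency with `extra` random links, re-drawn on each call."""
    A = np.roll(np.eye(n), 1, axis=1)
    A[rng.integers(0, n, size=extra), rng.integers(0, n, size=extra)] = 1.0
    np.fill_diagonal(A, 0.0)
    return A

def simulate(n=64, T=2000, arrival_rate=60.0, gamma=0.1,
             sigma_w=0.5, redistribute=True):
    """Average absolute deviation of the load from its mean at each step."""
    q = np.zeros(n)                      # queue lengths q_t^i
    p = np.ones(n)                       # unit productivities (d_p = 1)
    deviations = []
    for _ in range(T):
        # new tasks arrive at uniformly random nodes
        k = rng.poisson(arrival_rate)
        z = np.bincount(rng.integers(0, n, size=k), minlength=n).astype(float)
        u = np.zeros(n)
        if redistribute:
            B = ring_plus_random_links(n, extra=n)
            x = q / p
            Y = x[None, :] + sigma_w * rng.standard_normal((n, n))   # (4)
            y_self = x + sigma_w * rng.standard_normal(n)            # (3)
            # local voting protocol (5), rescaled to task units by p
            u = gamma * (B * (Y - y_self[:, None])).sum(axis=1) * p
        q = np.maximum(q - p + z + u, 0.0)   # dynamics (1), queues kept nonnegative
        deviations.append(np.abs(q / p - (q / p).mean()).mean())
    return np.array(deviations)

dev_with = simulate(redistribute=True)
dev_without = simulate(redistribute=False)
```

Comparing the two returned curves gives a rough, scaled-down analogue of the comparison in Fig. 3.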
CONCLUSIONS

The problem of load balancing in a multi-agent system under stochastic uncertainties was considered in this paper. A robust local voting protocol was proposed for solving the problem, conditions for achieving approximate load balance in the network were established, and an estimate of the asymptotic level of the protocol's suboptimality was obtained. Simulation modeling of the algorithm's application to a computation network was performed.

ACKNOWLEDGMENTS

This work was supported by the Russian Foundation for Basic Research (project nos. 11-08-01218 and 13-07-00250), the Cadres Federal Target Program (government contracts nos. 16.740.11.0042 and 14.740.11.0942), and the SPRINT Laboratory of St. Petersburg State University and Intel Corp.

REFERENCES

1. M. Huang, "Stochastic approximation for consensus with general time-varying weight matrices," in Proc. 49th IEEE Conf. on Decision and Control, Atlanta, 2010 (IEEE, 2010), pp. 7449–7454.
2. J. N. Tsitsiklis, D. P. Bertsekas, and M. Athans, "Distributed asynchronous deterministic and stochastic gradient optimization algorithms," IEEE Trans. Autom. Control 31 (9), 803–812 (1986).
3. M. Huang and J. H. Manton, "Coordination and consensus of networked agents with noisy measurements: Stochastic algorithms and asymptotic behavior," SIAM J. Control Optim. 48 (1), 134–161 (2009).
4. S. Kar and J. M. F. Moura, "Distributed consensus algorithms in sensor networks with imperfect communication: Link failures and channel noise," IEEE Trans. Signal Process. 57 (1), 355–369 (2009).
5. T. Li and J.-F. Zhang, "Mean square average-consensus under measurement noises and fixed topologies," Automatica 45 (8), 1929–1936 (2009).
6. V. S. Borkar, Stochastic Approximation: A Dynamical Systems Viewpoint (Cambridge Univ. Press, New York, 2008).
7. A. T. Vakhitov, O. N. Granichin, and L. S. Gurevich, "Algorithm for stochastic approximation with trial input perturbation in the nonstationary problem of optimization," Autom. Remote Control 70 (11), 1827–1835 (2009).
8. O. Granichin, L. Gurevich, and A. Vakhitov, "Discrete-time minimum tracking based on stochastic approximation algorithm with randomized differences," in Proc. 48th IEEE Conf. on Decision and Control, Shanghai, 2009 (IEEE, 2009), pp. 5763–5767.
9. O. Granichin, A. Vakhitov, and V. Vlasov, "Adaptive control of SISO plant with time-varying coefficients based on random test perturbation," in Proc. Amer. Control Conf., Baltimore, MD, 2010, pp. 4004–4009.
10. O. N. Granichin, "Stochastic optimization and system programming," Stokhast. Optim. Inf. 6, 3–44 (2010).
11. A. T. Vakhitov, O. N. Granichin, and M. A. Pan'shenskov, "Methods of data transfer speed estimation in the Data Grid based on linear regression," Neirokomp. Razrab. Primen., No. 11, 45–52 (2009).
12. N. O. Amelina, "Multi-agent technology, adaptation, self-organization, and consensus," Stokhast. Optim. Inf., No. 7, 149–185 (2011).
13. N. O. Amelina, "Scheduling networks with variable topology in the presence of noise and delays in measurements," Vestn. St. Petersb. Univ. Math. 45 (2), 56–60 (2012).
14. N. O. Amelina and A. L. Fradkov, "Approximate consensus in a stochastic dynamical network with incomplete information and delayed measurements," Autom. Remote Control 73 (11), 1765–1783 (2012).
15. N. Amelina, A. Fradkov, and K. Amelin, "Approximate consensus in multi-agent stochastic systems with switched topology and noise," in Proc. IEEE MSC 2012, Dubrovnik, Croatia, 2012, pp. 445–450.
16. K. S. Amelin, N. O. Amelina, O. N. Granichin, and A. V. Koryavko, "Local voting algorithm for consensus problem in decentralized network of intelligent agents," Neirokomp. Razrab. Primen., No. 11, 39–47 (2012).
17. K. Amelin, N. Amelina, O. Granichin, and O. Granichina, "Multi-agent stochastic systems with switched topology and noise," in Proc. 13th ACIS Int. Conf. on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2012), Kyoto, 2012, pp. 438–443.
18. D. P. Derevitskii and A. L. Fradkov, "Two models for analyzing the dynamics of adaptation algorithms," Avtom. Telemekh., No. 1, 67–75 (1974).
19. L. Wang, Z. Liu, and L. Guo, "Robust consensus of multi-agent systems with noise," in Proc. 26th Chinese Control Conf., Zhangjiajie, Hunan, 2007, pp. 737–741.
20. A. L. Fradkov, "Continuous-time averaged models of discrete-time stochastic systems: Survey and open problems," in Proc. 50th IEEE Conf. on Decision and Control and European Control Conf. (CDC-ECC), 2011, pp. 2076–2081.
21. B. T. Polyak, Introduction to Optimization (Nauka, Moscow, 1983; New York, 1987).