IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 20, NO. 6, JUNE 2009


Recurrent Neural Networks Training With Stable Bounding Ellipsoid Algorithm

Wen Yu, Senior Member, IEEE, and José de Jesús Rubio

Abstract—Bounding ellipsoid (BE) algorithms offer an attractive alternative to traditional training algorithms for neural networks, for example, backpropagation and least squares methods. The benefits include high computational efficiency and fast convergence speed. In this paper, we propose an ellipsoid propagation algorithm to train the weights of recurrent neural networks for nonlinear system identification. Both the hidden layer and the output layer can be updated. The stability of the BE algorithm is proven.

Index Terms—Bounding ellipsoid (BE), identification, recurrent neural networks.

Manuscript received December 03, 2007; revised July 17, 2008 and October 21, 2008; accepted December 18, 2008. First published May 15, 2009; current version published June 03, 2009. W. Yu is with the Departamento de Control Automático, CINVESTAV-IPN, México D.F. 07360, México (e-mail: [email protected]). J. de Jesús Rubio is with the Sección de Estudios de Posgrado e Investigación, Instituto Politécnico Nacional-ESIME Azcapotzalco, Col. Sta. Catarina, México D.F. 07320, México. Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TNN.2009.2015079

I. INTRODUCTION

Recent results show that neural network techniques are effective for identifying a broad class of complex nonlinear systems when complete model information cannot be obtained. Neural networks can be classified as feedforward and recurrent ones [8]. Feedforward networks, for example multilayer perceptrons, are implemented to approximate nonlinear functions. The main drawback of these neural networks is that the weight updating does not utilize information on the local data structure, and the function approximation is sensitive to the training data [17]. Since recurrent networks incorporate feedback, they have powerful representation capabilities and can successfully overcome the disadvantages of feedforward networks [13].

Even though backpropagation has been widely used as a practical training method for neural networks, it has limitations such as slow convergence, local minima, and sensitivity to noise. In order to overcome these problems, many methods for neural identification, filtering, and training have been proposed, for example, Levenberg–Marquardt and momentum algorithms [15], the extended Kalman filter [23], and least squares approaches [17], which can speed up backpropagation training. Most of them use static structures. There are some special restrictions for recurrent structures. In [2], the output layer must be linear and the hidden-layer weights are chosen randomly. The extended Kalman filter with a decoupling structure has fast convergence speed [22]; however, the computational complexity in each iteration is increased. The decoupled Kalman filter with a diagonal matrix [19] is similar to the gradient algorithm, so the convergence speed cannot be increased. A main drawback of Kalman filter training is that the theoretical analysis requires the uncertainty of the neural modeling to be a Gaussian process.

In 1979, Khachiyan indicated how an ellipsoid method for linear programming can be implemented in polynomial time [1]. This result caused great excitement and stimulated a flood of technical papers. The ellipsoid technique is a helpful tool in state estimation of dynamic systems with bounded disturbances [5]. There are many potential applications to problems outside the domain of linear programming. Weyer and Campi [27] obtained confidence ellipsoids which are valid for a finite number of data points, whereas Ros et al. [20] presented an ellipsoid propagation such that the new ellipsoid satisfies an affine relation with another ellipsoid. In [3], the ellipsoid algorithm is used as an optimization technique that takes into account the constraints on cluster coefficients. Lorenz and Boyd [14] described in detail several methods that can be used to derive an appropriate uncertainty ellipsoid for the array response. In [16], the asymptotic behavior of ellipsoid estimates is considered for linear discrete-time systems.

There are few applications of ellipsoid methods to neural networks. In [4], unsupervised and supervised learning laws in the form of ellipsoids are used to find and tune the fuzzy function rules. In [12], an ellipsoid type of activation function is proposed for feedforward neural networks. In [10], multiweight optimization for bounding ellipsoid (BE) algorithms is introduced. In [6], a simple adaptive algorithm is proposed that estimates the magnitude of noise; it is based on two operations of ellipsoid calculus, summation and intersection, which correspond to the prediction and correction phases of the recursive state estimation problem, respectively. In [21], we used the BE algorithm to train recurrent neural networks, but that training algorithm does not have a standard recurrent form, so a theoretical analysis could not be carried out. In this paper, we modify the above algorithm and analyze the stability of nonlinear system identification. To the best of our knowledge, neural network training and stability analysis with the ellipsoid or BE algorithm has not yet been established in the literature, and this is the first paper to successfully apply the BE algorithm to the stable training of recurrent neural networks.

In this paper, the BE algorithm is modified to train the weights of a recurrent neural network for nonlinear system identification. Both the hidden layer and the output layer can be updated. A stability analysis of the identification error with the BE algorithm is given by a Lyapunov-like technique.


II. RECURRENT NEURAL NETWORKS TRAINING WITH BE ALGORITHM

Consider the following discrete-time nonlinear system:

(1)

where x(k) is the state vector, u(k) is the input vector whose upper bound is known, and f(·) is an unknown smooth nonlinear vector-valued function. We use the following series-parallel [15] recurrent neural network to identify the nonlinear plant (1):

(2)

where the vector on the left-hand side represents the state of the neural network, the matrix A is a stable matrix, the weights in the output layer are V(k), the weights in the hidden layer are W(k), σ(·) is a vector function, and φ(·) is a diagonal matrix

(3)

where σ(·) and φ(·) are sigmoid functions. The expressions on the right-hand side of (2) can also be evaluated at the network state instead of the plant state, in which case the model is called a parallel model [15]. By our previous results in [28], training the parallel model gives results similar to those of the series-parallel model (2).

According to the Stone–Weierstrass theorem [13], the unknown nonlinear system (1) can be written in the following form:

(4)

where the last term represents the unmodeled dynamics. By [13], we know that this term can be made arbitrarily small simply by selecting an appropriate number of hidden neurons. Because the sigmoid functions σ(·) and φ(·) are differentiable, based on [18, Lemma 12.5], we conclude that

(5)

where the leading factor is a known initial constant weight, the next factor is the derivative of the nonlinear activation function with respect to its argument, and the last term is the remainder of the Lipschitz form (5). Also

(6)

where the corresponding factor is the derivative of the nonlinear activation function with respect to its argument and the last term is the remainder of the Lipschitz form (6). From [18, Lemma 12.5], the remainders in (5) and (6) are bounded when the functions σ and φ are bounded. So we have

(7)

where the leading factor is a known initial constant weight. Similarly

(8)

where the activation is treated as a diagonal matrix. When substituting (7) and (8) into the plant (4), we have the following single-output form:

(9)

where the output is the ith element of the state vector. The unmodeled dynamics are defined as

(10)

the parameter vector is θ(k) and the data (regressor) vector is z(k). The output of the recurrent neural network (2) is

(11)

We define the training error as

(12)

The identification error between the plant (1) and the neural network (2) is

(13)
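Since the displayed form of (2) depends on details of the architecture, the following minimal Python sketch only illustrates how a series-parallel identifier of this kind operates: the network is driven by the measured plant state, and the identification error (13) compares the plant state with the network state. The dimensions, the matrices A, V, W, and the numerical values are placeholders, not the ones used in (2).

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Illustrative dimensions and matrices; the actual sizes and weights of (2) are assumptions here.
rng = np.random.default_rng(0)
n, m = 2, 3
A = 0.2 * np.eye(n)                        # stable matrix (eigenvalues inside the unit circle)
V = 0.1 * rng.standard_normal((n, m))      # output-layer weights
W = 0.1 * rng.standard_normal((m, n))      # hidden-layer weights

def identifier_step(x_hat, x_plant):
    """Series-parallel step: the network is driven by the measured plant state x(k)."""
    return A @ x_hat + V @ sigmoid(W @ x_plant)

x_plant = np.array([0.5, -0.3])            # measured plant state x(k)
x_hat = np.zeros(n)                        # network state
x_hat_next = identifier_step(x_hat, x_plant)

x_plant_next = np.array([0.45, -0.2])      # x(k+1) as it would come from the plant (1)
delta = x_plant_next - x_hat_next          # identification error, as in (13)
print("identification error:", delta)
```

In a parallel model, the network would instead be driven by its own state x_hat rather than the measured plant state, which is the distinction drawn after (2).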


Now we use the BE algorithm to train the recurrent neural network (2) such that the identification error Δ(k) is bounded.

Definition 1: A real n-dimensional ellipsoid set E, centered on c, can be described as

$$E = \{x \in \mathbb{R}^{n} : (x-c)^{T} P^{-1} (x-c) \le 1\}$$

where P is a positive-definite symmetric matrix. The volume of E is defined as in [5] and [20]

$$\operatorname{vol}(E) = \alpha_{n} \sqrt{\det P}$$

where α_n is a constant that represents the volume of the unit ball in R^n. The orientation (direction of the axes) of the ellipsoid is determined by the eigenvectors of P, and the lengths of the semimajor axes of E are determined by the eigenvalues of P. A 2-D ellipsoid is shown in Fig. 1.

Fig. 1. An ellipsoid.

Definition 2: The ellipsoid intersection of two ellipsoid sets E1 and E2, with positive-definite symmetric shape matrices, is another ellipsoid set [25]. The normal (set-theoretic) intersection of the two ellipsoid sets is not an ellipsoid set in general; the ellipsoid intersection contains the normal intersection of the ellipsoid sets [25]. Fig. 2 shows this idea.

Fig. 2. Ellipsoid intersection of two ellipsoid sets.

There exists a minimal volume ellipsoid corresponding to the intersection, called the optimal bounding ellipsoid (OBE); see [11], [20], and [25]. In this paper, we will not try to find the OBE, but we will design an algorithm such that the volume of the new ellipsoid intersection will not increase.

Now, using the ellipsoid definition for the neural identification, we define the parameter error ellipsoid as

(14)

where the center is the unknown optimal weight in (9) that minimizes the modeling error. In this paper, we use the following two assumptions.

A1. It is assumed that the modeling error belongs to an ellipsoid set

(15)

where the bounds in (15) are known positive constants. Assumption A1 requires that the modeling error be bounded. In this paper, we discuss open-loop identification, and we assume that the plant (1) is bounded-input–bounded-output (BIBO) stable, i.e., x(k) and u(k) in (1) are bounded. Since x(k) and u(k) are bounded, all of the data in z(k) are bounded, so the modeling error is bounded.

By Definition 1, the sets in (15) have a common center. Finding their exact intersection is an intractable task, since the amount of information in (15) grows linearly with k. Moreover, evaluating a value of the weight in (14) involves the solution of kth-order inequalities for (15).

A2. It is assumed that the initial weight errors are inside an ellipsoid

(16)

where the centers are the unknown optimal weights. Assumption A2 requires that the initial weights of the neural networks be bounded. It can be satisfied by choosing suitable initial weights and a suitable initial shape matrix. From the definition of the parameter error ellipsoid in (14), the common center of these sets is the optimal weight. By (14) and (15), the ellipsoid intersection satisfies

(17)

Thus, the problem of identification is to find a minimum set which satisfies (14). We will construct a recursive identification algorithm such that the updated parameter error set is a BE set whenever the current one is a BE set. The next theorem shows the propagation process of these ellipsoids.
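To make Definition 1 concrete, the following sketch checks membership in an ellipsoid set and computes its volume from the shape matrix. The convention E = {x : (x − c)ᵀP⁻¹(x − c) ≤ 1} and the particular numbers are assumptions for illustration only.

```python
import numpy as np
from math import gamma, pi

def in_ellipsoid(x, c, P):
    """Membership test for E = {x : (x - c)^T P^{-1} (x - c) <= 1} (assumed convention)."""
    d = x - c
    return float(d @ np.linalg.solve(P, d)) <= 1.0

def ellipsoid_volume(P):
    """vol(E) = alpha_n * sqrt(det P), where alpha_n is the unit-ball volume in R^n."""
    n = P.shape[0]
    alpha_n = pi ** (n / 2) / gamma(n / 2 + 1)
    return alpha_n * np.sqrt(np.linalg.det(P))

# A 2-D example: the semi-axes are the square roots of the eigenvalues of P.
P = np.diag([4.0, 1.0])
c = np.zeros(2)
print(in_ellipsoid(np.array([1.5, 0.5]), c, P))   # True: inside
print(in_ellipsoid(np.array([3.0, 0.0]), c, P))   # False: outside
print(ellipsoid_volume(P))                        # pi * 2 * 1 (semi-axes 2 and 1)
```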


Theorem 1: If the parameter error set in (14) is an ellipsoid set and we use the following recursive algorithm to update the weights and the shape matrix:

(18)

where the gain matrix in (18) is a given diagonal positive-definite matrix and the scalar in (18) is a positive constant selected to satisfy the stated condition, then the updated parameter error set is an ellipsoid set and satisfies

(19)

where the training error e(k) is given in (12).

Proof: First, we apply the matrix inversion lemma [7] to calculate the inverse of the updated shape matrix from (18); for matrices of compatible size this gives the intermediate relations (20) and (21). Next, we evaluate the quadratic form of the updated parameter error using the weight update in (18), which yields (22). Substituting (21) into (22) gives (23). Using the training error e(k) as in (12), the second term of (23) can be rewritten, and by the intersection property (17) of the ellipsoid sets, (23) becomes (24), which establishes (19). Finally, since (24) and (25) hold, the quadratic form of the updated parameter error remains no larger than one, so the updated parameter error set is an ellipsoid set.
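The first step of the proof relies on the matrix inversion lemma from [7] (the Sherman–Morrison–Woodbury identity). As a quick numerical check of that identity in its rank-one form, with no assumptions about the particular matrices appearing in (18):

```python
import numpy as np

# Matrix inversion lemma (Sherman-Morrison, rank-one case):
# (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u)
A = np.diag([2.0, 3.0, 5.0])           # any invertible matrix; diagonal for clarity
u = np.array([1.0, 0.5, -0.2])
v = np.array([0.3, -1.0, 0.4])

Ainv = np.linalg.inv(A)
lemma = Ainv - np.outer(Ainv @ u, v @ Ainv) / (1.0 + v @ Ainv @ u)
direct = np.linalg.inv(A + np.outer(u, v))

print(np.allclose(lemma, direct))      # True: both sides agree
```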


Remark 1: The algorithm (18) has a scalar form, in which the update is computed separately for each subsystem. This method can decrease the computational burden when we estimate the weights of the recurrent neural network. A similar idea can be found in [8] and [19]. For each element of the weight matrices in (18), we have

(26)

If the gain in (26) does not change with k, it becomes the backpropagation algorithm [8]. The time-varying gain in the BE algorithm may speed up the training process. The BE algorithm (18) has a structure similar to the extended Kalman filter training algorithm [22], [23], [26]

(27)

where the covariance of the process noise can be chosen as a small positive multiple of the identity. When the process noise is removed, (27) becomes the least squares algorithm [7], and with a particular choice of the covariances, (27) coincides with the BE algorithm (18). But there is a big difference: the BE algorithm is for the deterministic case and the extended Kalman filter is for the stochastic case.

The ellipsoid intersection in (17) is also an ellipsoid, defined over a vector variable. We cannot assure that the center of the ellipsoid intersection is the optimal weight, but since the centers of the intersected sets are the optimal weight, we can guarantee that the optimal weight is inside the intersection. From (19) and (18), we know that the volume of the updated parameter error set satisfies an inequality: when the modeling error is not small, the volume of the updated set is less than the volume of the current set. Thus, the set will converge to the intersection set. This means that when the modeling error is bigger than the unmodeled dynamics, the set will converge to the intersection set; see Fig. 3.

Fig. 3. Convergence of the intersection.

The following steps show how to train the weights of recurrent neural networks with the BE algorithm.
1) Construct a recurrent neural network model (2) to identify the unknown nonlinear system (1). The matrix A is selected such that it is stable.
2) Rewrite the neural network in the linear-in-parameters form (9).
3) Train the weights with the BE algorithm (18).
4) Update the shape matrix with the BE algorithm (18).
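Because the update law (18) is not reproduced above, the following Python sketch only illustrates the overall training loop of steps 1)–4), with a generic bounding-ellipsoid-style recursive update (a recursive least squares form with a forgetting-type constant) standing in for (18). The toy plant, the dimensions, the gain expressions, and the constants are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Step 1: recurrent identifier x_hat(k+1) = A x_hat(k) + V sigma(W x(k))  (assumed form)
rng = np.random.default_rng(0)
n, m = 2, 3
A = 0.2 * np.eye(n)                          # stable diagonal matrix
V = 0.1 * rng.standard_normal((n, m))        # output-layer weights (trained below)
W = 0.1 * rng.standard_normal((m, n))        # hidden-layer weights (kept fixed in this sketch)

# Steps 3 and 4: generic BE/RLS-style recursive update standing in for (18)
lam = 0.98                                   # forgetting-type constant (assumed role of the constant in (18))
P = [100.0 * np.eye(m) for _ in range(n)]    # one shape matrix per output, as in the scalar form of Remark 1

def be_like_update(theta, P_i, z, err, lam):
    """One recursive update of a parameter vector theta with regressor z and error err."""
    Pz = P_i @ z
    gain = Pz / (lam + z @ Pz)
    theta_new = theta + gain * err
    P_new = (P_i - np.outer(gain, Pz)) / lam
    return theta_new, P_new

def plant(x, u):
    """Toy bounded nonlinear plant, illustrative only."""
    return np.array([0.8 * x[0] + 0.1 * np.sin(x[1]) + 0.1 * u,
                     0.6 * x[1] + 0.2 * np.tanh(x[0])])

x = np.zeros(n)
x_hat = np.zeros(n)
for k in range(200):
    u = np.sin(0.1 * k)
    z = sigmoid(W @ x)                       # step 2: regressor of the linear-in-parameters form
    x_next = plant(x, u)
    x_hat_next = A @ x_hat + V @ z
    for i in range(n):                       # per-output (scalar) loop, mirroring Remark 1
        err = x_next[i] - x_hat_next[i]
        V[i], P[i] = be_like_update(V[i], P[i], z, err, lam)
    x, x_hat = x_next, x_hat_next

print("final identification error:", np.abs(x - x_hat))
```

With a constant gain in place of the P-dependent gain, this update degenerates into a constant-step gradient rule, which is the comparison with backpropagation made around (26).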

III. STABILITY ANALYSIS

Theorem 1 tells us that the parameter error set is a BE if A2 is satisfied, so the weights of the neural networks are bounded under the training algorithm (18). The next theorem gives the bound of the identification error.

Theorem 2: If we use the neural network (2) to identify the unknown nonlinear plant (1) with the training algorithm (18), then the identification error Δ(k) is bounded, and the normalized training error converges to the residual set

(28)

whose size depends on the bound of the modeling error in (15).


Proof: We define the following Lyapunov function:

(29)

Evaluating its increment along the training algorithm (18) and using (24), we obtain

(30)

where the first term of the last line in (30) involves the training error and the second term is bounded by the modeling error. Because the bounding functions are K∞-functions, (29) admits a smooth input-to-state stable (ISS) Lyapunov function as in [9], so the dynamics of the training error are input-to-state stable. The "INPUT" corresponds to the second term of the last line in (30), and the "STATE" corresponds to the first term, i.e., the training error e(k). Because the "INPUT" is bounded and the dynamics are ISS, the "STATE" is bounded.

The training error e(k) is not the same as the identification error Δ(k), but they are minimized at the same time. From (2), (4), (9), and (12), we have

(31)

Since the factor relating them is constant, the minimization of the training error means that the upper bound of the identification error is minimized; when the training error is bounded, the identification error is also bounded. From (30), we have

(32)

Summing (32) from k = 1 to k = N, and noting that the Lyapunov function is nonnegative and its initial value is constant, we obtain

(33)

From Theorem 1, we know that the parameter error set is a BE set, so the Lyapunov function is bounded. Dividing (33) by N gives the bound on the average of the normalized training error, which is (28).

Remark 2: Even if the parameters converge to their optimal values with the training algorithm (18), from (4) we know that there always exist unmodeled dynamics (structure error). So we cannot reach zero identification error.

IV. SIMULATIONS

The nonlinear plant to be identified is expressed as [15], [24]

(34)

This input–output model can be transformed into the following state–space model:

(35)

This unknown nonlinear system has the standard form (1). We use the recurrent neural network given in (2) to identify it, and we select A as a stable diagonal matrix. The neural identifier (2) can be written in the form

(36)

where the second term is the nonlinear part. The dynamics of the linear part are determined by the eigenvalues of A. In this example, we found that this choice of A can assure both stability and fast response for the dynamic neural network (36).
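To give a concrete feel for this kind of experiment, the sketch below simulates a second-order nonlinear input–output plant of the type used in [15] as a stand-in (the exact benchmark (34) is not reproduced above), rewrites it in a state–space form in the spirit of (35), and runs the series-parallel identifier with a constant-gain gradient (backpropagation-style) update of the output layer as a baseline. The plant, the excitation signal, and all dimensions are illustrative assumptions.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Illustrative input-output plant y(k+1) = g(y(k), y(k-1), u(k)); a stand-in, not (34).
def plant_io(y0, y1, u):
    return (y0 * y1 * (y0 + 2.5)) / (1.0 + y0 ** 2 + y1 ** 2) + u

# State-space rewrite in the spirit of (35): x1(k+1) = g(x1, x2, u), x2(k+1) = x1(k)
def plant_ss(x, u):
    return np.array([plant_io(x[0], x[1], u), x[0]])

rng = np.random.default_rng(0)
n, m = 2, 3                              # three hidden nodes, as chosen in the text
A = 0.2 * np.eye(n)                      # stable diagonal matrix (illustrative value)
V = 0.1 * rng.standard_normal((n, m))
W = 0.1 * rng.standard_normal((m, n))

x = np.zeros(n)
x_hat = np.zeros(n)
errors = []
eta = 0.05                               # constant learning rate for the baseline update
for k in range(500):
    u = np.sin(2.0 * np.pi * k / 25.0)   # illustrative excitation signal
    z = sigmoid(W @ x)
    x_next = plant_ss(x, u)
    x_hat_next = A @ x_hat + V @ z
    err = x_next - x_hat_next
    V += eta * np.outer(err, z)          # gradient-style update of the output layer only
    errors.append(float(np.linalg.norm(err)))
    x, x_hat = x_next, x_hat_next

# Finite-time mean squared error, in the spirit of the MSE used for the comparisons below.
print("mean squared identification error:", np.mean(np.square(errors)))
```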


Model complexity is important in the context of system identification; it corresponds to the number of hidden nodes of the neuromodel. In order to get higher accuracy, we should use more hidden nodes. In [15], the static neural networks needed 20 hidden nodes for this example. For this simulation, we tested different numbers of hidden nodes and found that with more than three hidden nodes the identification accuracy does not improve much, so we use three nodes in the hidden layer. The initial weights are chosen at random in a bounded interval. The input is given by

(37)

From Theorem 1, we know that the initial condition for the shape matrix should be large, and we select it accordingly. Theorem 1 requires a positive constant that corresponds to the learning rate in (18); the bigger this constant is, the faster the training algorithm, but it is less robust, and in this example we found a value for which the condition of Theorem 1 is satisfied. Theorem 1 also requires the upper bound of the modeling error as in (15), and from (18) we see that this bound also decides the learning rate; for this example, we found a suitable choice.

We compare the BE training algorithm (18) with the standard backpropagation algorithm (26). In this simulation, we found that after a certain number of iterations the backpropagation algorithm becomes unstable. If we define the mean squared error for finite time as

$$\mathrm{MSE}(N)=\frac{1}{N}\sum_{k=1}^{N}\|\Delta(k)\|^{2}$$

then the comparison results for the identification error are shown in Fig. 4.

Fig. 4. Identification errors of backpropagation and BE.

When the time-varying learning rate in the BE training algorithm (18) is held constant, the BE training becomes backpropagation. The updating steps in the BE training are variable in order to guarantee stability; moreover, when the error is large the gain is large, so the BE algorithm performs much better than backpropagation.

Extended Kalman filter training algorithms [22], [23], [26] are also effective when the disturbances are white noise or small bounded noise; the filter is

(38)

We choose the covariances in (38) appropriately. The comparison results for the identification error are shown in Fig. 5.

Fig. 5. Identification errors of Kalman filter and BE.
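For comparison, the following sketch shows a standard extended-Kalman-filter-style update for a linear-in-parameters output layer, in the spirit of [22], [23], [26]; the noise covariances and the structure are generic textbook choices, not the specific filter (38).

```python
import numpy as np

def ekf_weight_update(theta, P, z, err, R=0.1, Q=1e-4):
    """One EKF-style update for a linear-in-parameters output y = theta^T z.
    R is an assumed measurement-noise variance, Q an assumed process-noise scale."""
    Pz = P @ z
    K = Pz / (R + z @ Pz)                    # Kalman gain
    theta_new = theta + K * err              # err is the innovation y_measured - theta^T z
    P_new = P - np.outer(K, Pz) + Q * np.eye(len(theta))
    return theta_new, P_new

# Toy usage: fit theta so that theta^T z matches a noisy linear target.
rng = np.random.default_rng(0)
true_theta = np.array([1.0, -2.0, 0.5])
theta = np.zeros(3)
P = 10.0 * np.eye(3)
for k in range(300):
    z = rng.standard_normal(3)
    y = true_theta @ z + 0.05 * rng.standard_normal()
    theta, P = ekf_weight_update(theta, P, z, y - theta @ z)
print("estimated parameters:", np.round(theta, 2))
```

In the BE algorithm the analogous gain is derived from deterministic bound propagation rather than from noise statistics, which is the difference emphasized in Remark 1.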

Now we repeat the above simulations with a bounded perturbation added to the input. This input–output model can be written as

(39)

where the perturbation is a uniform random noise.

990

IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 20, NO. 6, JUNE 2009

Fig. 6. Identification errors of backpropagation and BE with a bounded perturbation.

Fig. 7. Identification errors of Kalman filter and BE with a bounded perturbation.

When the perturbation is bounded by 0.02, the comparisons between the BE algorithm and backpropagation are shown in Fig. 6. When it is bounded by 0.2, the comparisons between the BE algorithm and the extended Kalman filter are shown in Fig. 7. They show that the BE technique does better than both backpropagation and the extended Kalman filter in the presence of non-Gaussian noise.

The learning rates in the extended Kalman filter (38) and the BE algorithm (18) are similar, and both have fast convergence speeds. The extended Kalman filter requires that the external disturbances be white noise; on the other hand, the noise in the BE training is only required to be bounded. When the disturbances are big, the bounded ellipsoid training proposed in this paper has a smaller steady-state error than the extended Kalman filter algorithm.

Theorem 1 gives necessary conditions on the design constants for stable learning. In this example, we found that when those conditions are violated, the learning process becomes unstable.

V. CONCLUSION

In this paper, a novel training method for recurrent neural networks is proposed, and the BE algorithm is modified for neural identification. Ellipsoid intersection and ellipsoid volume are introduced to explain the physical meaning of the proposed training algorithms. Both the hidden layer and the output layer of the recurrent neural network can be updated. A Lyapunov-like technique is used to prove that the ellipsoid intersection can be propagated and that the BE algorithm is stable. The proposed concept can be extended to feedforward neural networks. The BE algorithm may also be applied to nonlinear adaptive control, fault detection and diagnostics, performance analysis of dynamic systems and time series, and forecasting.

REFERENCES

[1] R. G. Bland, D. Goldfarb, and M. J. Todd, "The ellipsoid method: A survey," Oper. Res., vol. 29, pp. 1039–1091, 1981.

[2] F. N. Chowdhury, "A new approach to real-time training of dynamic neural networks," Int. J. Adapt. Control Signal Process., vol. 31, pp. 509–521, 2003.
[3] M. V. Correa, L. A. Aguirre, and R. R. Saldanha, "Using steady-state prior knowledge to constrain parameter estimates in nonlinear system identification," IEEE Trans. Circuits Syst. I, Fund. Theory Appl., vol. 49, no. 9, pp. 1376–1381, Sep. 2002.
[4] J. A. Dickerson and B. Kosko, "Fuzzy function approximation with ellipsoid rules," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 26, no. 4, pp. 542–560, Aug. 1996.
[5] E. Fogel and Y. F. Huang, "On the value of information in system identification: Bounded noise case," Automatica, vol. 18, no. 2, pp. 229–238, 1982.
[6] S. Gazor and K. Shahtalebi, "A new NLMS algorithm for slow noise magnitude variation," IEEE Signal Process. Lett., vol. 9, no. 11, pp. 348–351, Nov. 2002.
[7] G. C. Goodwin and K. Sang Sin, Adaptive Filtering Prediction and Control. Englewood Cliffs, NJ: Prentice-Hall, 1984.
[8] S. Haykin, Neural Networks—A Comprehensive Foundation. New York: Macmillan, 1994.
[9] Z. P. Jiang and Y. Wang, "Input-to-state stability for discrete-time nonlinear systems," Automatica, vol. 37, no. 2, pp. 857–869, 2001.
[10] D. Joachim and J. R. Deller, "Multiweight optimization in optimal bounding ellipsoid algorithms," IEEE Trans. Signal Process., vol. 54, no. 2, pp. 679–690, Feb. 2006.
[11] S. Kapoor, S. Gollamudi, S. Nagaraj, and Y. F. Huang, "Tracking of time-varying parameters using optimal bounding ellipsoid algorithms," in Proc. 34th Allerton Conf. Commun. Control Comput., Monticello, IL, 1996, pp. 1–10.
[12] N. S. Kayuri and V. Vienkatasubramanian, "Representing bounded fault classes using neural networks with ellipsoid activation functions," Comput. Chem. Eng., vol. 17, no. 2, pp. 139–163, 1993.
[13] E. B. Kosmatopoulos, M. M. Polycarpou, M. A. Christodoulou, and P. A. Ioannou, "High-order neural network structures for identification of dynamic systems," IEEE Trans. Neural Netw., vol. 6, no. 2, pp. 422–431, Mar. 1995.
[14] R. G. Lorenz and S. P. Boyd, "Robust minimum variance beamforming," IEEE Trans. Signal Process., vol. 53, no. 5, pp. 1684–1696, May 2005.
[15] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamic systems using neural networks," IEEE Trans. Neural Netw., vol. 1, no. 1, pp. 4–27, Mar. 1990.
[16] S. A. Nazin and B. T. Polyak, "Limiting behavior of bounding ellipsoid for state estimation," in Proc. 5th IFAC Symp. Nonlinear Control Syst., St. Petersburg, Russia, 2001, pp. 585–589.
[17] A. G. Parlos, S. K. Menon, and A. F. Atiya, "An algorithmic approach to adaptive state filtering using recurrent neural networks," IEEE Trans. Neural Netw., vol. 12, no. 6, pp. 1411–1432, Nov. 2001.
[18] A. S. Poznyak, E. N. Sanchez, and W. Yu, Differential Neural Networks for Robust Nonlinear Control. Singapore: World Scientific, 2001.
[19] G. V. Puskorius and L. A. Feldkamp, "Neurocontrol of nonlinear dynamic systems with Kalman filter trained recurrent networks," IEEE Trans. Neural Netw., vol. 5, no. 2, pp. 279–297, Mar. 1994.
[20] L. Ros, A. Sabater, and F. Thomas, "An ellipsoid calculus based on propagation and fusion," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 32, no. 4, pp. 430–442, Aug. 2002.
[21] J. J. Rubio and W. Yu, "Neural networks training with optimal bounded ellipsoid algorithm," in Advances in Neural Networks—ISNN 2007, ser. Lecture Notes in Computer Science 4491. Berlin, Germany: Springer-Verlag, 2007, pp. 1173–1182.
[22] J. J. Rubio and W. Yu, "Nonlinear system identification with recurrent neural networks and dead-zone Kalman filter algorithm," Neurocomputing, vol. 70, no. 13, pp. 2460–2466, 2007.
[23] D. W. Ruck, S. K. Rogers, M. Kabrisky, P. S. Maybeck, and M. E. Oxley, "Comparative analysis of backpropagation and the extended Kalman filter for training multilayer perceptrons," IEEE Trans. Pattern Anal. Mach. Intell., vol. 14, no. 6, pp. 686–691, Jun. 1992.
[24] P. S. Sastry, G. Santharam, and K. P. Unnikrishnan, "Memory neural networks for identification and control of dynamic systems," IEEE Trans. Neural Netw., vol. 5, no. 2, pp. 306–319, Mar. 1994.
[25] F. C. Schweppe, Uncertain Dynamic Systems. Englewood Cliffs, NJ: Prentice-Hall, 1973.
[26] S. Singhal and L. Wu, "Training multilayer perceptrons with the extended Kalman algorithm," in Adv. Neural Inf. Process. Syst. 1, 1989, pp. 133–140.


[27] E. Weyer and M. C. Campi, "Non-asymptotic confidence ellipsoids for the least squares estimate," in Proc. 39th IEEE Conf. Decision Control, Sydney, Australia, 2000, pp. 2688–2693.
[28] W. Yu, "Nonlinear system identification using discrete-time recurrent neural networks with stable learning algorithms," Inf. Sci., vol. 158, no. 1, pp. 131–147, 2002.

Wen Yu (M'97–SM'04) received the B.S. degree in electrical engineering from Tsinghua University, Beijing, China, in 1990 and the M.S. and Ph.D. degrees in electrical engineering from Northeastern University, Shenyang, China, in 1992 and 1995, respectively.
From 1995 to 1996, he served as a Lecturer at the Department of Automatic Control, Northeastern University. In 1996, he joined CINVESTAV-IPN, México, where he is currently a Professor at the Departamento de Control Automático. He also held a research position with the Instituto Mexicano del Petróleo from December 2002 to November 2003. Since October 2006, he has been a senior visiting research fellow at Queen's University Belfast. He also held a visiting professorship at Northeastern University in China from 2006 to 2008. His research interests include adaptive control, neural networks, and fuzzy control.
Dr. Yu serves as an Associate Editor of Neurocomputing and the International Journal of Modelling, Identification and Control. He is a member of the Mexican Academy of Science.

José de Jesús Rubio was born in México City in 1979. He graduated in electronic engineering from the Instituto Politécnico Nacional, México, in 2001, and received the M.S. and Ph.D. degrees in automatic control from CINVESTAV-IPN, México, in 2004 and 2007, respectively.
He was a full-time Professor at the Autonomous Metropolitan University, Mexico City, Mexico, from 2006 to 2008. Since 2008, he has been a full-time Professor at the Instituto Politécnico Nacional, ESIME Azcapotzalco, Mexico. He has published four chapters in international books and ten papers in international journals, and he has presented more than 20 papers at international conferences. He is a member of the adaptive fuzzy systems task force. His research interests are primarily focused on evolving intelligent systems, nonlinear and adaptive control systems, neural-fuzzy systems, mechatronics, robotics, and delayed systems.
