Intelligent Systems Technologies and Applications

International Journal of

Intelligent Systems Technologies and Applications Volume 6, Nos. 1/2, 2009 Publisher's website: www.inderscience.com E-mail: [email protected] ISSN (Print): 1740-8865 ISSN (Online): 1740-8873

Copyright © Inderscience Enterprises Ltd. No part of this publication may be reproduced, stored or transmitted in any material form or by any means (including electronic, mechanical, photocopying, recording or otherwise) without the prior written permission of the publisher, except in accordance with the provisions of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd or the Copyright Clearance Center Inc. Published and typeset in the UK by Inderscience Enterprises Ltd.

Intelligent systems refer broadly to computer-embedded or computer-controlled systems, machines and devices that possess a certain degree of intelligence. The International Journal of Intelligent Systems Technologies and Applications (IJISTA) seeks to publish original papers featuring innovative and practical technologies related to the design and development of intelligent systems. It also carries articles on intelligent systems applications in areas such as manufacturing, bioengineering, agriculture, services, home automation and appliances, medical robots and robotic rehabilitation, space exploration, etc. IJISTA aims at providing an international framework for researchers working from the perspectives of different technical disciplines to discuss technological and scientific problems related to the design, development and applications of intelligent devices, machines and systems. IJISTA helps academics, researchers, engineers and technologists working in the broad areas of intelligent systems design and development and related technologies, including robotics, mechatronics, artificial intelligence, computer engineering, electronics, advanced computing and modelling, as well as their applications in manufacturing, bioengineering, agriculture, horticulture, space and medical fields, to disseminate information and to learn from each other's work. IJISTA publishes original papers detailing the exposition of new methodologies or techniques, state-of-the-art review papers, and brief papers discussing new technical concepts or developments, or new applications of existing techniques. Special Issues devoted to important topics in the broad context of intelligent systems will occasionally be published.

Subject Coverage
• Robotics and mechatronics technologies
• Artificial intelligence and knowledge-based systems technologies
• Real-time computing and its algorithms
• Embedded systems technologies
• Actuators and sensors
• Micro/nano technologies
• Sensing and multiple sensor fusion
• Machine vision, image processing, pattern recognition and speech recognition and synthesis
• Motion/force sensing and control
• Intelligent product design, configuration and evaluation
• Real-time learning and machine behaviours

• Fault detection, fault analysis and diagnostics
• Digital communications and mobile computing
• CAD and object-oriented simulations

Submission of papers
Papers, case studies, etc., in the areas covered by the International Journal of Intelligent Systems Technologies and Applications are invited for submission. Authors may wish to send abstracts of proposed papers in advance. Notes for intending authors, more detailed guidance and sample papers are available on the website: https://www.inderscience.com/papers/about.php Authors of accepted papers will receive a PDF file of their published paper. Hard copies of journal issues may be purchased at a special price for authors from [email protected] Papers, with a submission letter, should be emailed to the Regional Editor, Dr Peter Xu. Email: [email protected] All editorial correspondence (but not subscription orders) should be emailed to the IEL Editorial Office: Email: [email protected] Fax: (UK) +44 1234-240515 Website: www.inderscience.com Neither the Editor-in-Chief, the Editors, nor the publisher can accept any responsibility for opinions expressed in the International Journal of Intelligent Systems Technologies and Applications nor in any of its special publications.

Subscription orders
The International Journal of Intelligent Systems Technologies and Applications (IJISTA) is published in eight issues per volume. A Subscription Order Form is provided in this issue. Payment with order should be made to: Inderscience Enterprises Ltd. (Order Dept.), World Trade Centre Building 11, 29 Route de Pre-Bois, Case Postale 856, CH-1215 Genève 15, Switzerland. For rush orders please FAX to: (UK) +44 1234 240 515 or Email to [email protected]

Electronic PDF files
IJISTA papers are available to download from the website: www.inderscience.com Online payment by credit card.

Advertisements
Please address enquiries to the abovementioned Geneva address, or Email: [email protected]

RECENT ADVANCES IN DYNAMIC MODELLING, CONTROL AND APPLICATIONS OF NEURAL NETWORKS

Guest Editors: Professor Huaguang Zhang School of Information Science and Engineering, Northeastern University, Shenyang 110004, PR China E-mail: [email protected]

Professor Shuzhi Sam Ge Department of Electrical and Computer Engineering, The National University of Singapore, Singapore 117576 E-mail: [email protected]

Published by

Inderscience Enterprises Ltd

IJISTA SUBSCRIPTION ORDER FORM Volumes 6 and 7, 2009 (THIS FORM MAY BE PHOTOCOPIED) Subscription price and ordering information: The International Journal of Intelligent Systems Technologies and Applications (IJISTA) is published eight times a year (in two volumes of four issues), in English. Subscription for hard copy OR on-line format (one simultaneous user only) € 735 per annum (including postage and handling). Subscription for hard copy AND on-line format (one simultaneous user only) € 1,025 Airmail option € 60 per annum extra. Prices for multi-simultaneous users are available on request. Subscription orders should be addressed to the publishers: Inderscience Enterprises Ltd (Order Dept.), World Trade Centre Building 11, 29 Route de Pre-Bois, Case Postale 856, CH-1215 Genève 15, Switzerland. x

Payment with order: Cheques or bankers drafts should be sent with order, made payable to: Inderscience Enterprises Ltd. Credit card payments will be accepted and will be converted to £ Sterling at the prevailing rates. For rush orders, contact: Fax: (UK) +44 1234 240 515 Website: www.inderscience.com or Email to [email protected]

Please enter my subscription to the International Journal of Intelligent Systems Technologies and Applications

• .......... subscriptions to Volumes 6 and 7, 2009: € ..........
• Please dispatch my order by air mail (add € 60 per annum): € ..........
• I enclose total payment of € ..........

Name of Subscriber .........................................................
Position .........................................................
Company/Institution .........................................................
Address .........................................................
Fax ..................................... Email .....................................
Date ..................................... Signature .....................................

• I wish to pay by credit card
• I authorise you to debit my account with the amount in GBP sterling equivalent to € ..........
• Three digit security number (on reverse of card) ..........
• Card No. ..................................... Expiry Date .....................................
  Signature ..................................... Date .....................................
• Please tick if you would like details of other Inderscience publications

Int. J. Intelligent Systems Technologies and Applications, Vol. 6, Nos. 1/2, 2009

Contents

SPECIAL ISSUE: RECENT ADVANCES IN DYNAMIC MODELLING, CONTROL AND APPLICATIONS OF NEURAL NETWORKS
Guest Editors: Prof. Huaguang Zhang and Prof. Shuzhi Sam Ge

1    Editorial
     Huaguang Zhang and Shuzhi Sam Ge

5    Stabilisation of Cellular Neural Networks with time-varying delays and reaction–diffusion terms
     Xuyang Lou and Baotong Cui

22   Global stability of a class of Cohen–Grossberg neural networks with delays
     Zhanshan Wang, Jian Feng and Gang Chen

50   A novel Artificial Neural Network training method combined with Quantum Computational Multi-Agent System theory
     Xiangping Meng, Jianzhong Wang, Yuzhen Pi and Quande Yuan

61   Field Programmable Gate Array based floating point hardware design of recursive k-means clustering algorithm for Radial Basis Function Neural Network
     S.P. Joy Vasantha Rani, P. Kanagasabapathy and L. Suganthi

77   On-line security monitoring and analysis using Levenberg-Marquardt algorithm-based Neural Network
     Seema N. Pandey, Shashikala Tapaswi and Laxmi Srivastava

89   Application of Neural Network approach for Proton Exchange Membrane fuel cell systems
     Mustapha Hatti and Mustapha Tioursi

112  Sensorless anti-swing control of automatic gantry crane using Dynamic Recurrent Neural Network-based soft sensor
     Mahmud Iwan Solihin and Wahyudi

128  Robust adaptive tracking controller design for non-affine non-linear systems with state time-varying delay and unknown dead-zone
     Luo Yan-Hong and Wei Qing-Lai

Additional papers

144  SmartCon: a context-aware service discovery and selection mechanism using Artificial Neural Networks
     Eyhab Al-Masri and Qusay H. Mahmoud

157  A framework for evaluating Knowledge Management Systems based on Balanced Scorecard
     Su-Yeon Kim and Hyun-Seok Hwang

166  A simple modelling of complex environments for mobile robots
     Jun Miura and Suguru Ikeda

178  Research on vehicle chassis system using Layered Coordinated Control strategy
     Wuwei Chen and Changbao Chu

International Journal of Intelligent Systems Technologies and Applications (IJISTA)

Editor-in-Chief: Dr. M.A. Dorgham, International Centre for Technology and Management, UK. Email: [email protected]

Editor: Professor Peter Xu, Massey University, School of Engineering & Advanced Technology, Private Bag 102 904, North Shore Mail Centre, Auckland, New Zealand. Email: [email protected]

Members of the Editorial Board
• Professor Glen Bright, Department of Mechanical Engineering, University of Natal, Durban, South Africa
• Professor Shyi-Ming Chen, Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, 43, Section 4, Keelung Road, Taipei 106, Taiwan
• Professor Seung-Bok Choi, Smart Structures and Systems Laboratory, Department of Mechanical Engineering, Inha University, Incheon 402-751, Korea
• Professor Serge Demidenko, Chair of Electrical & Computer Systems Engineering, Deputy Head of School of Engineering, Monash University (Malaysia Campus), 2 Jalan Kolej, 46150 Petaling Jaya, Selangor, Malaysia
• Professor Yongtae Do, School of Electronic Engineering, Daegu University, Naeri 15, Jinryang, Kyunsan-City, Kyungpook, 712-714, Korea
• Professor Clara Fang, College of Engineering, Technology and Architecture, University of Hartford, 200 Bloomington Ave, West Hartford, CT 06117, USA
• Dr. Christian H. Fedrowitz, KUKA Schweissanlagen GmbH, Bluecherstrasse 144, D-86163 Augsburg, Germany
• Professor Shuzhi Sam Ge, Department of Electrical & Computer Engineering, The National University of Singapore, Singapore 117576
• Professor Madan M. Gupta, Director, Intelligent Systems Research Laboratory, University of Saskatchewan, Saskatoon, SK, S7N 5A9, Canada
• Professor Zeng-Guang Hou, Lab of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, PO Box 2728, Beijing 100080, China
• Professor Wisama Khalil, Ecole Centrale de Nantes, Institut de Recherche en Communication et Cybernétique de Nantes, 1 rue de la Noë, BP 92 101, 44321 Nantes cedex 03, France
• Professor K.-D. Kuhnert, Institute for Real-Time-Learning Systems, University of Siegen, Hölderlinstrasse 3, D-57068 Siegen, Germany
• Professor Wei Li, Department of Computer Science, California State University, Bakersfield, Bakersfield, CA 93311, USA
• Professor Marios Polycarpou, University of Cyprus, 75 Kallipoleos, P.O. Box 20537, CY-1678 Nicosia, Cyprus
• Professor Chun-Yi Su, Department of Mechanical and Industrial Engineering, Concordia University, 1455 de Maisonneuve Blvd. W, Montreal, Quebec H3G 1M8, Canada
• Dr Yong J. Yuan, Principal Investigator, Industrial Research Limited, Crown Research Institutes, PO Box 31-310, Lower Hutt 6009, New Zealand
• Dr Ruili Wang, Institute of Information Sciences and Technology, Massey University, Palmerston North, New Zealand

CALL FOR PAPERS International Journal of

Mobile Network Design and Innovation (IJMNDI) Website: www.inderscience.com ISSN (Print): 1744-2869 ISSN (Online): 1744-2850 Description The International Journal for Mobile Network Design and Innovation addresses the state-of-the-art in computerisation for the deployment and operation of current and future wireless networks. Objectives The journal’s objective is to found, develop and promote a cutting edge arena for the emergent field of intelligent computerised wireless network design and management. The role and contribution of this research will influence economic, operational and policy issues. It aims to expand the frontiers of fundamental computational techniques and enhance prospects for effective high performance software solutions. The journal seeks to foster the international exchange of ideas and experiences among researchers, educators and professionals in the field, on a global scale. It also serves as a venue for articles evaluating the state-of-the-art of computer applications in this area. Readership The journal will be targeted toward professionals, researchers and educators in the interdisciplinary fields contributing to the development of methodologies and software systems for obtaining high performance wireless networks. Content The journal seeks to publish original and review papers, full length and short papers, reports on software systems and their applications, case studies concerning network design, conference reports and news. Special issues on high profile hot-topics are requested. Contributions will be by open submission, co-ordinated via a conference or by invitation. The journal seeks to achieve rapid publication times, particularly for short papers. For more detailed information on the Objectives, Content and Subject Coverage, please see https://www.inderscience.com/browse/index.php and choose the journal title All papers are refereed through a double blind process. A guide for authors, sample copies and other relevant information for submitting papers are available at www.inderscience.com/papers Papers should be submitted to:

Editor-in-Chief: Dr. Michael R. Bartolacci, Associate Professor of Information Sciences and Technology, The Pennsylvania State University - Berks, Tulpehocken Road, P.O. Box 7009, Reading PA 19610-6009, USA. Email: [email protected]


Editorial Huaguang Zhang School of Information Science and Engineering, Northeastern University, Shenyang 110004, PR China E-mail: [email protected]

Shuzhi Sam Ge Department of Electrical and Computer Engineering, The National University of Singapore, Singapore 117576 E-mail: [email protected]

Biographical notes: Huaguang Zhang, IEEE Senior Member, is a Full Professor in the College of Information Science and Engineering, Northeastern University, Shenyang, PR China. He received his BS and MS degrees from Northeastern Electric Power University, Jilin, PR China, in 1982 and 1985, respectively, and the PhD degree from Southeast University, Nanjing, PR China, in 1991. He has authored and coauthored over 300 journal and conference papers and four monographs, and has co-invented nine patents. He is currently an Associate Editor of both IEEE Transactions on Systems, Man, and Cybernetics, Part B and Neurocomputing. His main research interests are neural networks-based control, fuzzy control, chaos control, non-linear control, signal processing and their industrial applications. He was awarded the 'Excellent Youth Science Foundation Award', nominated by the China Natural Science Foundation Committee, in 2003. He was named a Changjiang Scholar by the China Education Ministry in 2005.

Shuzhi Sam Ge, IEEE Fellow, is a Full Professor in the Department of Electrical and Computer Engineering, The National University of Singapore. He received his BS degree from Beijing University of Aeronautics and Astronautics (BUAA), Beijing, PR China, in 1986, and the PhD degree and the Diploma of Imperial College (DIC) from Imperial College of Science, Technology and Medicine, University of London, London, UK, in 1989. He has authored and co-authored three books, Adaptive Neural Network Control of Robotic Manipulators (World Scientific, 1998), Stable Adaptive Neural Network Control (Kluwer, 2001) and Switched Linear Systems: Control and Design (Springer-Verlag, 2005), and over 300 international journal and conference papers. His current research interests are adaptive control, hybrid systems, sensor fusion, intelligent systems and system development.

The field of neural networks is now extremely vast and inter-disciplinary, drawing interest from researchers in many different areas such as engineering, mathematics, physics and computer science. Neural networks provide an intelligent approach for solving complex problems that might otherwise not have a tractable solution. Applications of neural networks include associative memory, function approximation,


combinatorial optimisation and non-linear system modelling and control. Neural networks themselves are typically non-linear, and many different kinds of neural network models have already been proposed for solving different problems. Research on the dynamics of neural networks and on neural networks-based control of non-linear systems still attracts much attention from the academic community. Recently, rapid progress in the research community has generated many new results and powerful tools. The objective of this special issue is to present state-of-the-art results on the dynamics, modelling, control and applications of neural networks. The idea is to consolidate the recent advances and to move on to the next level for future development. The call for papers for the special issue was well received, and a total of 23 papers were submitted. The papers were reviewed by experts in the area, and eight of them appear in this issue. The topics cover a broad range, from the dynamics of neural networks and the modelling and control of classes of non-linear systems to algorithmic research for neural networks. The papers in this special issue present just a selection of interesting new work in neural networks, and we hope that they inspire yet more. We describe the papers in this issue briefly below. Two papers deal with the stabilisation/stability problems of neural networks.

X. Lou and B. Cui: Stabilisation of cellular neural networks with time-varying delays and reaction-diffusion terms. This paper by X. Lou and B. Cui from PR China deals with the stabilisation problem of a class of cellular neural networks with time-varying delays and reaction-diffusion terms. By designing a controller based on the feedback response and constructing proper Lyapunov functionals with respect to the space variables, stabilisation conditions for delayed cellular neural networks with reaction-diffusion terms are derived. The results lay the groundwork for research on the stabilisation of delayed recurrent neural networks.

Z. Wang, G. Chen and J. Feng: Global stability of a class of Cohen-Grossberg neural networks with delays. This paper by Z. Wang, G. Chen and J. Feng from PR China is concerned with the global asymptotic stability of a general class of Cohen-Grossberg neural networks with both multiple time-varying and distributed delays. The characteristic of the method used in this paper is to suitably construct a Lyapunov functional dealing with the distributed delay instead of using the well-known Jensen inequality, which leads to a less conservative stability result. Some stabilisation problems of neural networks can be dealt with similarly using the present method.

Three papers address algorithm improvements and implementations of neural networks.

X. Meng, J. Wang, Y. Pi and Q. Yuan: A novel Artificial Neural Network training method combined with Quantum Computational Multi-Agent System theory. In this paper by X. Meng, J. Wang, Y. Pi and Q. Yuan from PR China, a new Artificial Neural Network (ANN) model is constructed, based on multi-agent system theory and a quantum computing algorithm. All nodes in this ANN are represented as quantum computational agents, and these agents have learning ability. A novel ANN training algorithm is proposed by implementing quantum computational multi-agent system reinforcement learning. This ANN has powerful parallel-processing ability, and its training time is shorter than that of classical algorithms.


S.P. Joy Vasantha Rani, P. Kanagasabapathy and L. Suganthi: Field Programmable Gate Array based floating point hardware design of recursive k-means clustering algorithm for Radial Basis Function neural network. This paper by S.P. Joy Vasantha Rani, P. Kanagasabapathy and L. Suganthi from India is concerned with the hardware design of a Radial Basis Function neural network using the proposed k-means algorithm. Hardware implementation of this kind of neural network gives much faster training than traditional processors and is also relatively inexpensive. The design has been carried out in the VHDL language, and tested and synthesised with the help of a Virtex-II Pro device.

S.N. Pandey, S. Tapaswi and L. Srivastava: Online security monitoring and analysis using Levenberg-Marquardt algorithm-based neural network. In this paper by S.N. Pandey, S. Tapaswi and L. Srivastava from India, a Levenberg-Marquardt (LM) algorithm-based feed-forward multi-layer perceptron neural network is proposed, which uses second-order derivative information for error reduction. The LM-based neural network provides a practical approach for implementing a pattern-mapping task. Since the proposed LM algorithm is extremely fast and accurate, particularly during the testing phase, it can be implemented for online security monitoring and analysis without performing the contingency selection task. The effectiveness of the proposed LM-based approach for security monitoring and analysis has been demonstrated by the computation of bus voltage magnitudes and voltage angles for line-outage contingencies at different loading conditions in the IEEE 14-bus system.

Three papers are devoted to the control problems of classes of non-linear systems.

M. Hatti and M. Tioursi: Application of neural network approach for Proton Exchange Membrane fuel cell systems. This paper by M. Hatti and M. Tioursi from Algeria proposes a neural network model for a fuel cell system based on proton exchange membrane technology using a Quasi-Newton method, and designs a neural controller using the Levenberg-Marquardt algorithm. By modelling the Proton Exchange Membrane (PEM) fuel cell system, a neural network controller is constructed to control the power under the assumption that any production system is permanently subject to load step changes. The PEM fuel cell neural network model is obtained using a Quasi-Newton method and the Levenberg-Marquardt training algorithm; the activation functions and their effect on the performance of the modelling are discussed. The Quasi-Newton neural network control is described, and results from the analysis as well as the limitations of the approach are presented.

M.I. Solihin and Wahyudi: Sensorless anti-swing control of automatic gantry crane using dynamic recurrent neural network-based soft sensor. This paper by M.I. Solihin and Wahyudi from Malaysia proposes a sensorless anti-swing control method for an automatic gantry crane system, because sensing the payload motion of a real gantry crane is troublesome and often costly. The soft sensor is designed based on a dynamic recurrent neural network acting as a state estimator. A dynamic recurrent neural network is trained using input-output data to estimate the payload swing angle from the trolley acceleration and the input voltage of the trolley actuator. An experimental study using a lab-scale automatic gantry crane is carried out to evaluate the effectiveness of the proposed sensorless anti-swing control.

Y. Luo and Q. Wei: Robust adaptive tracking controller for non-affine non-linear systems with state time-varying delay and unknown dead-zone.


This paper by Y. Luo and Q. Wei from PR China presents a novel neural network-based dead-zone compensation scheme for a class of non-affine Multiple-Input Multiple-Output non-linear systems with state time-varying delay. A static neural network is introduced to approximate and adaptively cancel the unknown non-linearity and the unknown dead-zone of the sub-systems. The control law and the adaptive laws for the weights of the hidden layer and output layer of the neural network are established by guaranteeing the stability of the whole closed-loop system, and the tracking error is proved to be uniformly ultimately bounded.

Acknowledgements The Guest Editors would like to thank all those who have submitted papers to this special issue, and the associate editors and the many reviewers who were involved in the refereeing of the manuscripts.


Stabilisation of Cellular Neural Networks with time-varying delays and reaction–diffusion terms Xuyang Lou College of Communication and Control Engineering, Jiangnan University, 1800 Lihu Road, Wuxi, Jiangsu 214122, China and CSIRO Division of Mathematical and Information Sciences, Waite Road, Urrbrae, SA 5064, Australia E-mail: [email protected]

Baotong Cui* College of Communication and Control Engineering, Jiangnan University, 1800 Lihu Road, Wuxi, Jiangsu 214122, China and Department of Electrical and Computer Engineering, National University of Singapore, Singapore 119260, Singapore E-mail: [email protected] *Corresponding author

Abstract: This paper deals with the stabilisation problem of Cellular Neural Networks (CNNs) with time-varying delays and reaction–diffusion terms. By constructing proper Lyapunov functionals with respect to the space variables, stabilisation conditions for delayed CNNs with reaction–diffusion terms are derived. A feedback controller is designed to ensure the global asymptotic stability of the equilibrium point. The given algebraic criteria are easy to verify, which brings convenience to those who design and verify such neural networks, and the results lay a foundation for further research on the stabilisation of delayed neural networks, extending earlier work. A numerical example illustrates the effectiveness of the results.

Keywords: Cellular Neural Networks; CNN; Lyapunov functional; reaction–diffusion terms; stabilisation; time-varying delays.

Reference to this paper should be made as follows: Lou, X. and Cui, B. (2009) 'Stabilisation of Cellular Neural Networks with time-varying delays and reaction–diffusion terms', Int. J. Intelligent Systems Technologies and Applications, Vol. 6, Nos. 1/2, pp.5–21.


Biographical notes: Xuyang Lou received the BS degree from Zhejiang Ocean University, China, in 2004. He is now pursuing the PhD in the College of Communication and Control Engineering, Jiangnan University, China, and visiting the CSIRO Division of Mathematical and Information Sciences at the Waite Campus of Adelaide University. His current research interests include nonlinear dynamical systems and mathematical analysis of neural networks.

Baotong Cui was born in 1960. He received the PhD in Control Theory and Control Engineering from the College of Automation Science and Engineering, South China University of Technology, China, in July 2003. He was a Postdoctoral Fellow at Shanghai Jiaotong University, China, from July 2003 to September 2005, and a Visiting Scholar at the Department of Electrical and Computer Engineering, National University of Singapore, from August 2007 to February 2008. He became an Associate Professor in December 1993 and a Full Professor in November 1995 at the Department of Mathematics, Binzhou University, Shandong, China. He joined the College of Communication and Control Engineering, Jiangnan University, China, in June 2003, where he is now a Full Professor. His current research interests include systems analysis, stability theory, impulsive control, artificial neural networks and chaos synchronisation.

1 Introduction

As we know, the stability of Cellular Neural Networks (CNNs) has far-reaching theoretical significance and application background (Chua and Yang, 1988a, b; Chua, 1998; Zhang and Wang, 2007a, b), especially in the detection of moving objects, speed detection of moving objects and pattern classification (Chua and Roska, 1990; Roska et al., 1990). The study of the stability of such networks has received more and more attention, and a number of criteria to achieve such a design have been proposed (see, for instance, Cao and Zhou, 1999; Arik, 2000, 2002a, b; Cao, 2000, 2001; Liao and Wang, 2000; Zhou and Cao, 2002; Li, 2004; Singh, 2004; Lou and Cui, 2006a, 2007a, b, and the references cited therein). However, strictly speaking, diffusion effects cannot be avoided in neural networks when electrons are moving in asymmetric electromagnetic fields, so we must consider that the activations vary in space as well as in time. The stability of neural networks with diffusion terms has been considered in Liao, Fu and Gao (2000), Liang and Cao (2003), Liao, Yang and Cheng (2005), Song, Zhao and Li (2005), Cui and Lou (2006) and Lou and Cui (2007a). Although many authors have considered the stability of neural networks with diffusion terms, to the best knowledge of the authors, only a few results have been reported on the stabilisation of neural networks with diffusion terms (Luo, 2004; Lou and Cui, 2006b). Many efforts have recently been devoted to searching for sufficient conditions for the robust stabilisation of time-delay systems (Gao, Wang and Zhao, 2003; Gao et al., 2004; Kwon and Park, 2004; Ge, Hong and Lee, 2005). It is well known that neural network controllers have been developed to compensate for the effects of non-linearities and system uncertainties by choosing network structures, training methods and input data, so that the stability, error convergence and robustness of the control system can be greatly improved. Obviously, the recurrent neural network has


capabilities superior to those of feedforward neural networks, such as a feedback response and information-storing ability. Thus, it is all the more important to develop controllers based on the feedback response to ensure the stability of delayed neural networks. However, the stabilisation of CNNs with time-varying delays and reaction–diffusion terms has not yet been tackled, so it is of great value to study the stabilisation of these systems. In this paper, we shall study CNNs with reaction–diffusion terms and their stabilisation, provided that the delays do not exceed certain (sufficiently small) bounds. Some stabilisation conditions for CNNs with time-varying delays and reaction–diffusion terms are developed by utilising Lyapunov functionals.

The paper is organised as follows. In Section 2, the problem to be investigated is stated and some definitions and lemmas are listed. Based on the Lyapunov stability theory, stabilisation conditions for CNNs with time-varying delays and reaction–diffusion terms are obtained in Section 3. In Section 4, we give a numerical example to illustrate the results. Finally, some conclusions are drawn in Section 5.

Notations: In the sequel, we denote by A^T the transpose of any square matrix A. We use A > 0 (A < 0) to denote a positive- (negative-) definite matrix A, and I denotes the n × n identity matrix. diag[·] denotes a block diagonal matrix. R^n and R^{m×n} denote, respectively, the n-dimensional Euclidean space and the set of all m × n real matrices.

2 System description and preliminaries

In this paper, we shall consider the following CNN with time-varying delays and reaction–diffusion terms:

\[
\frac{\partial v_i(t,x)}{\partial t} = \sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Big(D_{ik}\frac{\partial v_i(t,x)}{\partial x_k}\Big) - c_i v_i(t,x) + \sum_{j=1}^{n} w_{ij}\,g_j(v_j(t,x)) + \sum_{j=1}^{n} h_{ij}\,g_j(v_j(t-\tau_j(t),x)) + J_i, \qquad (1)
\]

for i ∈ {1, 2, …, n}, t > 0, where x = (x_1, x_2, …, x_l)^T ∈ Ω ⊂ R^l, Ω is a bounded compact set with smooth boundary ∂Ω and mes Ω > 0 in the space R^l; v = (v_1, v_2, …, v_n)^T ∈ R^n, and v_i(t, x) is the state of the ith neuron at time t and in space x; g_j denotes the signal function of the jth neuron at time t and in space x; J_i denotes the external input on the ith neuron; τ_j(t) is the time-varying delay of the neural network, satisfying 0 ≤ τ_j(t) ≤ τ* and τ̇_j(t) ≤ σ < 1 for j = 1, …, n, where τ* and σ are constants; c_i > 0, w_ij and h_ij are constants, c_i denotes the rate with which the ith neuron resets its potential to the resting state in isolation when disconnected from the network and external inputs, and w_ij and h_ij denote the connection weights. The smooth functions D_ik = D_ik(t, x, v) ≥ 0 correspond to the transmission diffusion operators along the ith neuron. The boundary conditions and initial conditions are given by

\[
\frac{\partial v_i}{\partial \mathbf{n}} := \Big(\frac{\partial v_i}{\partial x_1}, \frac{\partial v_i}{\partial x_2}, \ldots, \frac{\partial v_i}{\partial x_l}\Big)^{T} = 0, \qquad i = 1, 2, \ldots, n, \qquad (2)
\]

and

\[
v_i(s,x) = \phi_i(s,x), \qquad s \in [-\tau^{*}, 0], \quad i = 1, 2, \ldots, n, \qquad (3)
\]

where φ_i(s, x) (i = 1, 2, …, n) are bounded and continuous on [−τ*, 0] × Ω. We assume that the activation functions satisfy the following property.

Hypothesis (H): The neuron activation functions g_j(·) (j = 1, 2, …, n) are non-decreasing and Lipschitz-continuous, that is, there exist constants L_j > 0 such that

\[
0 \le \frac{g_j(\xi_1) - g_j(\xi_2)}{\xi_1 - \xi_2} \le L_j
\]

for all ξ_1 ≠ ξ_2, ξ_1, ξ_2 ∈ R.

For convenience, we introduce some notation. Let v_i = v_i(t, x), and let v* = (v_1*, v_2*, …, v_n*)^T be the equilibrium of system (1). Suppose (v_1(t, x), v_2(t, x), …, v_n(t, x))^T is any solution of system (1); then system (1) can be rewritten as follows:

\[
\frac{\partial \big(v_i(t,x)-v_i^{*}\big)}{\partial t} = \sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Big(D_{ik}\frac{\partial \big(v_i(t,x)-v_i^{*}\big)}{\partial x_k}\Big) - c_i\big(v_i(t,x)-v_i^{*}\big) + \sum_{j=1}^{n} w_{ij}\big[g_j(v_j(t,x))-g_j(v_j^{*})\big] + \sum_{j=1}^{n} h_{ij}\big[g_j(v_j(t-\tau_j(t),x))-g_j(v_j^{*})\big]. \qquad (4)
\]

Definition 1: For any continuous function V : R → R, Dini's time-derivative of V(t) is defined as

\[
D^{+}V(t) = \limsup_{h \to 0^{+}} \frac{V(t+h) - V(t)}{h}.
\]

It is easy to see that if V(t) is locally Lipschitz, then |D⁺V(t)| < ∞.
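To make the structure of model (1) concrete, the following is a minimal simulation sketch added for illustration (it is not taken from the paper): it discretises a two-neuron instance of (1) on a one-dimensional domain with the zero-flux boundary conditions (2), using an explicit finite-difference scheme in space and forward Euler in time. Every parameter value, the constant stand-in for τ_j(t) and the initial profile are illustrative assumptions only.

# Sketch of a finite-difference simulation of the reaction-diffusion CNN (1) with l = 1.
# All numerical values below are illustrative placeholders, not taken from the paper.
import numpy as np

n, Nx, L_x = 2, 51, 1.0                 # neurons, spatial grid points, domain length
dx, dt, T = L_x / (Nx - 1), 1e-4, 2.0
D   = np.array([1.0, 1.0])              # diffusion coefficients D_ik (constant here)
c   = np.array([1.0, 1.0])              # decay rates c_i
W   = np.array([[1.5, 1.3], [-1.4, 1.2]])   # connection weights w_ij (illustrative)
H   = np.array([[0.5, 0.3], [-0.4, 0.2]])   # delayed connection weights h_ij (illustrative)
J   = np.zeros(n)                       # external inputs J_i
tau = 0.5                               # constant stand-in for the delay tau_j(t)

g = lambda v: 0.5 * (np.abs(v + 1.0) - np.abs(v - 1.0))   # PWL activation, L_j = 1

steps, delay_steps = int(T / dt), int(tau / dt)
v = np.zeros((steps + 1, n, Nx))
v[0] = np.vstack([np.ones(Nx), np.zeros(Nx)])             # initial profile phi_i(0, x)

for m in range(steps):
    vm = v[m]
    vd = v[max(m - delay_steps, 0)]                       # delayed state v(t - tau, x)
    lap = np.empty_like(vm)
    lap[:, 1:-1] = (vm[:, 2:] - 2 * vm[:, 1:-1] + vm[:, :-2]) / dx**2
    lap[:, 0], lap[:, -1] = lap[:, 1], lap[:, -2]         # crude zero-flux boundaries
    rhs = D[:, None] * lap - c[:, None] * vm + W @ g(vm) + H @ g(vd) + J[:, None]
    v[m + 1] = vm + dt * rhs

print("final state range:", v[-1].min(), v[-1].max())

With a stabilising feedback such as (6) added to the right-hand side, the same loop can be reused to reproduce closed-loop experiments of the kind reported in Section 4.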

3 Main results

Consider the following counterpart control system of system (4):

\[
\frac{\partial \big(v_i(t,x)-v_i^{*}\big)}{\partial t} = \sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Big(D_{ik}\frac{\partial \big(v_i(t,x)-v_i^{*}\big)}{\partial x_k}\Big) - c_i\big(v_i(t,x)-v_i^{*}\big) + k_i u_i(t,x) + \sum_{j=1}^{n} w_{ij}\big[g_j(v_j(t,x))-g_j(v_j^{*})\big] + \sum_{j=1}^{n} h_{ij}\big[g_j(v_j(t-\tau_j(t),x))-g_j(v_j^{*})\big], \qquad i = 1, 2, \ldots, n, \qquad (5)
\]

where k_i > 0 is the control coefficient and u_i(t, x) is the control function. We choose the following feedback control:

\[
u_i(t,x) = -\big(v_i(t,x)-v_i^{*}\big) - \big(g_i(v_i(t,x))-g_i(v_i^{*})\big). \qquad (6)
\]

So system (5) reduces to

\[
\frac{\partial \big(v_i(t,x)-v_i^{*}\big)}{\partial t} = \sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Big(D_{ik}\frac{\partial \big(v_i(t,x)-v_i^{*}\big)}{\partial x_k}\Big) - (c_i+k_i)\big(v_i(t,x)-v_i^{*}\big) - k_i\big(g_i(v_i(t,x))-g_i(v_i^{*})\big) + \sum_{j=1}^{n} w_{ij}\big[g_j(v_j(t,x))-g_j(v_j^{*})\big] + \sum_{j=1}^{n} h_{ij}\big[g_j(v_j(t-\tau_j(t),x))-g_j(v_j^{*})\big], \qquad i = 1, 2, \ldots, n. \qquad (7)
\]
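As a small illustration of how the feedback law (6) acts pointwise on the state, here is a hedged sketch (the activation, equilibrium value and parameter names are assumptions for illustration, not the paper's code):

# Pointwise evaluation of the feedback control (6) and the reaction part of (5);
# the equilibrium v_star, the activation g and all numerical values are assumptions.
import numpy as np

def g(v):
    # PWL activation used in the paper's example; any g satisfying (H) would do.
    return 0.5 * (np.abs(v + 1.0) - np.abs(v - 1.0))

def feedback_u(v, v_star):
    # u_i(t, x) = -(v_i - v_i*) - (g_i(v_i) - g_i(v_i*)), applied componentwise.
    return -(v - v_star) - (g(v) - g(v_star))

def reaction_closed_loop(v, v_delayed, c, k, W, H, J, v_star):
    # Reaction part of (5) with u from (6); the diffusion term is handled separately.
    return -c * v + W @ g(v) + H @ g(v_delayed) + J + k * feedback_u(v, v_star)

print(feedback_u(np.array([0.9, -0.4]), np.zeros(2)))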

Theorem 1: Under the Hypothesis (H), system (5) can be stabilised via the feedback control (6), if there exist k_i > 0, d_i > 0 (i = 1, 2, …, n) such that

\[
\Omega = \begin{bmatrix} A & W & H \\ W^{T} & \dfrac{1}{1-\sigma}D - P & 0 \\ H^{T} & 0 & -D \end{bmatrix} < 0, \qquad (8)
\]

where W = (w_ij)_{n×n}, H = (h_ij)_{n×n}, A = diag(−2c_1 − 2k_1, …, −2c_n − 2k_n), D = diag(d_1, …, d_n) and P = diag(2k_1/L_1, …, 2k_n/L_n).

Proof: We construct the Lyapunov functional

\[
V(t) = \int_\Omega \sum_{i=1}^{n}\Big[\big(v_i(t,x)-v_i^{*}\big)^2 + \frac{1}{1-\sigma}\sum_{j=1}^{n} d_j \int_{t-\tau_j(t)}^{t}\big(g_j(v_j(s,x))-g_j(v_j^{*})\big)^2\,\mathrm{d}s\Big]\mathrm{d}x
= \int_\Omega \Big[\bar V(t,x) + \frac{1}{1-\sigma}\sum_{i=1}^{n}\sum_{j=1}^{n} d_j \int_{t-\tau_j(t)}^{t}\big(g_j(v_j(s,x))-g_j(v_j^{*})\big)^2\,\mathrm{d}s\Big]\mathrm{d}x, \qquad (9)
\]

where \bar V(t,x) = \sum_{i=1}^{n}\big(v_i(t,x)-v_i^{*}\big)^2.

By calculating the upper right Dini derivative D⁺V(t) of V(t) along the solutions of Equation (7), and noting that τ̇_j(t) ≤ σ < 1, we get

\[
\begin{aligned}
D^{+}V(t) \le{}& 2\int_\Omega \sum_{i=1}^{n}\sum_{k=1}^{l}\big(v_i(t,x)-v_i^{*}\big)\frac{\partial}{\partial x_k}\Big(D_{ik}\frac{\partial \big(v_i(t,x)-v_i^{*}\big)}{\partial x_k}\Big)\mathrm{d}x \\
&+ \int_\Omega \sum_{i=1}^{n}\Big\{-2(c_i+k_i)\big(v_i(t,x)-v_i^{*}\big)^2 - 2k_i\big(v_i(t,x)-v_i^{*}\big)\big(g_i(v_i(t,x))-g_i(v_i^{*})\big) \\
&\quad + 2\sum_{j=1}^{n} w_{ij}\big(v_i(t,x)-v_i^{*}\big)\big[g_j(v_j(t,x))-g_j(v_j^{*})\big] + 2\sum_{j=1}^{n} h_{ij}\big(v_i(t,x)-v_i^{*}\big)\big[g_j(v_j(t-\tau_j(t),x))-g_j(v_j^{*})\big] \\
&\quad + \frac{1}{1-\sigma}\sum_{j=1}^{n} d_j\big(g_j(v_j(t,x))-g_j(v_j^{*})\big)^2 - \sum_{j=1}^{n} d_j\big(g_j(v_j(t-\tau_j(t),x))-g_j(v_j^{*})\big)^2\Big\}\,\mathrm{d}x. \qquad (10)
\end{aligned}
\]

From the assumption (H), we obtain that

\[
\big(v_i(t,x)-v_i^{*}\big)\big(g_i(v_i(t,x))-g_i(v_i^{*})\big) \ge \frac{1}{L_i}\big(g_i(v_i(t,x))-g_i(v_i^{*})\big)^2. \qquad (11)
\]

From the boundary condition (2) and the Green formula, we get

\[
\int_\Omega \sum_{i=1}^{n}\sum_{k=1}^{l}\big(v_i(t,x)-v_i^{*}\big)\frac{\partial}{\partial x_k}\Big(D_{ik}\frac{\partial \big(v_i(t,x)-v_i^{*}\big)}{\partial x_k}\Big)\mathrm{d}x
= -\int_\Omega \sum_{i=1}^{n}\sum_{k=1}^{l} D_{ik}\Big(\frac{\partial \big(v_i(t,x)-v_i^{*}\big)}{\partial x_k}\Big)^{2}\mathrm{d}x \le 0, \qquad (12)
\]

in which \nabla = (\partial/\partial x_1, \partial/\partial x_2, \ldots, \partial/\partial x_l)^{T} is the gradient operator.

Substituting (11) and (12) into (10), it follows that

\[
\begin{aligned}
D^{+}V(t) \le{}& \int_\Omega \sum_{i=1}^{n}\Big\{-2(c_i+k_i)\big(v_i(t,x)-v_i^{*}\big)^2 - \frac{2k_i}{L_i}\big(g_i(v_i(t,x))-g_i(v_i^{*})\big)^2 \\
&\quad + 2\sum_{j=1}^{n} w_{ij}\big(v_i(t,x)-v_i^{*}\big)\big[g_j(v_j(t,x))-g_j(v_j^{*})\big] + 2\sum_{j=1}^{n} h_{ij}\big(v_i(t,x)-v_i^{*}\big)\big[g_j(v_j(t-\tau_j(t),x))-g_j(v_j^{*})\big] \\
&\quad + \frac{1}{1-\sigma}\sum_{j=1}^{n} d_j\big(g_j(v_j(t,x))-g_j(v_j^{*})\big)^2 - \sum_{j=1}^{n} d_j\big(g_j(v_j(t-\tau_j(t),x))-g_j(v_j^{*})\big)^2\Big\}\,\mathrm{d}x \\
={}& \int_\Omega
\begin{bmatrix} v(t,x)-v^{*} \\ g(v(t,x))-g(v^{*}) \\ g(v(t-\tau(t),x))-g(v^{*}) \end{bmatrix}^{T}
\Omega
\begin{bmatrix} v(t,x)-v^{*} \\ g(v(t,x))-g(v^{*}) \\ g(v(t-\tau(t),x))-g(v^{*}) \end{bmatrix}\mathrm{d}x. \qquad (13)
\end{aligned}
\]


So D⁺V(t) < 0 when v(t, x) ≠ v* and D⁺V(t) = 0 when v(t, x) = v*. Now, by a standard Lyapunov-type theorem in functional differential equations (see, for example, Hale and Verduyn Lunel, 1993), the origin solution of Equation (5) is globally asymptotically stable, and therefore system (5) can be stabilised via the feedback control (6).

Considering another Lyapunov functional, we can obtain the following theorem.

Theorem 2: Under the Hypothesis (H), system (5) can be stabilised via the feedback control (6), if there exist k_i > 0 (i = 1, 2, …, n) such that

\[
\Omega = \begin{bmatrix} A & W \\ W^{T} & -Q \end{bmatrix} < 0, \qquad A = A_1 + A_2 + \frac{1}{1-\sigma}A_3, \qquad (14)
\]

where

\[
\begin{aligned}
&A_1 = \mathrm{diag}(-2c_1-2k_1, \ldots, -2c_n-2k_n), \\
&A_2 = H_a \times L_a, \qquad L_a = \mathrm{diag}\big(L_1^{2\alpha_1}, \ldots, L_n^{2\alpha_n}\big), \qquad H_a = \big(|h_{ij}|^{2\beta_{ij}}\big)_{n\times n}, \\
&A_3 = H_b \times L_b, \qquad L_b = \mathrm{diag}\big(L_1^{2\alpha'_1}, \ldots, L_n^{2\alpha'_n}\big), \qquad H_b = \big(|h_{ji}|^{2\beta'_{ji}}\big)_{n\times n}, \\
&W = (w_{ij})_{n\times n}, \qquad Q = \mathrm{diag}\Big(\frac{2k_1}{L_1}, \ldots, \frac{2k_n}{L_n}\Big),
\end{aligned}
\]

and α_i + α'_i = 1, β_ij + β'_ij = 1, i, j = 1, 2, …, n.

Proof: Consider the following Lyapunov functional:

\[
V(t) = \int_\Omega \sum_{i=1}^{n}\Big[\big(v_i(t,x)-v_i^{*}\big)^2 + \frac{1}{1-\sigma}\sum_{j=1}^{n} L_j^{2\alpha'_j}\,|h_{ij}|^{2\beta'_{ij}} \int_{t-\tau_j(t)}^{t}\big(v_j(s,x)-v_j^{*}\big)^2\,\mathrm{d}s\Big]\mathrm{d}x
= \int_\Omega \Big[\bar V(t,x) + \frac{1}{1-\sigma}\sum_{i=1}^{n}\sum_{j=1}^{n} L_j^{2\alpha'_j}\,|h_{ij}|^{2\beta'_{ij}} \int_{t-\tau_j(t)}^{t}\big(v_j(s,x)-v_j^{*}\big)^2\,\mathrm{d}s\Big]\mathrm{d}x, \qquad (15)
\]

where \bar V(t,x) = \sum_{i=1}^{n}\big(v_i(t,x)-v_i^{*}\big)^2.

By calculating the upper right Dini derivative D⁺V(t) of V(t) along the solutions of system (7), and noting that τ̇_j(t) ≤ σ < 1, we get

\[
\begin{aligned}
D^{+}V(t) \le{}& 2\int_\Omega \sum_{i=1}^{n}\sum_{k=1}^{l}\big(v_i(t,x)-v_i^{*}\big)\frac{\partial}{\partial x_k}\Big(D_{ik}\frac{\partial \big(v_i(t,x)-v_i^{*}\big)}{\partial x_k}\Big)\mathrm{d}x
+ \int_\Omega \sum_{i=1}^{n}\Big\{-2(c_i+k_i)\big(v_i(t,x)-v_i^{*}\big)^2 \\
&\quad - 2k_i\big(v_i(t,x)-v_i^{*}\big)\big(g_i(v_i(t,x))-g_i(v_i^{*})\big) + 2\sum_{j=1}^{n} w_{ij}\big(v_i(t,x)-v_i^{*}\big)\big[g_j(v_j(t,x))-g_j(v_j^{*})\big] \\
&\quad + 2\sum_{j=1}^{n} h_{ij}\big(v_i(t,x)-v_i^{*}\big)\big[g_j(v_j(t-\tau_j(t),x))-g_j(v_j^{*})\big] + \frac{1}{1-\sigma}\sum_{j=1}^{n} L_j^{2\alpha'_j}|h_{ij}|^{2\beta'_{ij}}\big(v_j(t,x)-v_j^{*}\big)^2 \\
&\quad - \sum_{j=1}^{n} L_j^{2\alpha'_j}|h_{ij}|^{2\beta'_{ij}}\big(v_j(t-\tau_j(t),x)-v_j^{*}\big)^2\Big\}\,\mathrm{d}x. \qquad (16)
\end{aligned}
\]

Then, using Hypothesis (H) together with the inequality ab ≤ ½(a² + b²), we estimate the delayed cross terms as

\[
2\sum_{j=1}^{n}\Big[L_j^{\alpha_j}|h_{ij}|^{\beta_{ij}}\big|v_i(t,x)-v_i^{*}\big|\Big]\Big[L_j^{\alpha'_j}|h_{ij}|^{\beta'_{ij}}\big|v_j(t-\tau_j(t),x)-v_j^{*}\big|\Big]
\le \sum_{j=1}^{n} L_j^{2\alpha_j}|h_{ij}|^{2\beta_{ij}}\big(v_i(t,x)-v_i^{*}\big)^2 + \sum_{j=1}^{n} L_j^{2\alpha'_j}|h_{ij}|^{2\beta'_{ij}}\big(v_j(t-\tau_j(t),x)-v_j^{*}\big)^2. \qquad (17)
\]

Applying (11)–(12) and (17) to (16), it follows that

\[
\begin{aligned}
D^{+}V(t) \le{}& \int_\Omega \sum_{i=1}^{n}\Big\{\Big[-2(c_i+k_i) + \sum_{j=1}^{n} L_j^{2\alpha_j}|h_{ij}|^{2\beta_{ij}} + \frac{1}{1-\sigma}\sum_{j=1}^{n} L_i^{2\alpha'_i}|h_{ji}|^{2\beta'_{ji}}\Big]\big(v_i(t,x)-v_i^{*}\big)^2 \\
&\quad - \frac{2k_i}{L_i}\big(g_i(v_i(t,x))-g_i(v_i^{*})\big)^2 + 2\sum_{j=1}^{n} w_{ij}\big(v_i(t,x)-v_i^{*}\big)\big[g_j(v_j(t,x))-g_j(v_j^{*})\big]\Big\}\,\mathrm{d}x \\
={}& \int_\Omega
\begin{bmatrix} v(t,x)-v^{*} \\ g(v(t,x))-g(v^{*}) \end{bmatrix}^{T}
\Omega
\begin{bmatrix} v(t,x)-v^{*} \\ g(v(t,x))-g(v^{*}) \end{bmatrix}\mathrm{d}x. \qquad (18)
\end{aligned}
\]

So D⁺V(t) < 0 when v(t, x) ≠ v* and D⁺V(t) = 0 when v(t, x) = v*. Now, by a standard Lyapunov-type theorem in functional differential equations (see, for example, Hale and Verduyn Lunel, 1993), the origin solution of Equation (5) is globally asymptotically stable, and therefore system (5) can be stabilised via the feedback control (6).

Corollary 1: Under the Hypothesis (H), system (5) can be stabilised via the feedback control (6), if there exist k_i > 0 (i = 1, 2, …, n) such that

\[
\Omega = \begin{bmatrix} A & W \\ W^{T} & -Q \end{bmatrix} < 0, \qquad A = A_1 + A_2 + \frac{1}{1-\sigma}A_3, \qquad (19)
\]

where A_1 = diag(−2c_1 − 2k_1, …, −2c_n − 2k_n), A_2 = H_a × L with H_a = (|h_ij|)_{n×n}, A_3 = H_b × L with H_b = (|h_ji|)_{n×n}, L = diag(L_1, …, L_n), W = (w_ij)_{n×n} and Q = diag(2k_1/L_1, …, 2k_n/L_n).

Corollary 2: Under the Hypothesis (H), system (5) can be stabilised via the feedback control (6), if there exist k_i > 0 (i = 1, 2, …, n) such that

\[
\Omega = \begin{bmatrix} A & W \\ W^{T} & -Q \end{bmatrix} < 0, \qquad A = A_1 + A_2 + \frac{1}{1-\sigma}A_3, \qquad (20)
\]

where A_1 = diag(−2c_1 − 2k_1, …, −2c_n − 2k_n), A_2 = L, A_3 = H_b × L with H_b = (h_ji²)_{n×n}, L = diag(L_1, …, L_n), W = (w_ij)_{n×n} and Q = diag(2k_1/L_1, …, 2k_n/L_n).

Remark 1: The assumption on the reaction–diffusion terms in this paper is almost the same as in Liao, Fu and Gao (2000), Liang and Cao (2003), Luo (2004), Liao, Yang and Cheng (2005), Song, Zhao and Li (2005) and Cui and Lou (2006), and its applied meaning can be found in those references.

Remark 2: Recently, in Cao and Zhou (1999), Zhou and Cao (2002), Lou and Cui (2006a, 2007a) and Zhang and Wang (2007a, b), some stability conditions were derived for delayed CNNs. In those papers, the activation functions g_j(·) were assumed to be non-decreasing and bounded. These assumptions are rather restrictive and limit the applications of the results. In the present paper, the boundedness restriction on the activation functions is removed: we only require the activation functions to be Lipschitz continuous, as in Hypothesis (H). Hence, the results obtained in this paper are less conservative and less restrictive than the previous research.
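For instance (an illustrative example added here, not taken from the original text), the unbounded identity activation is covered by Hypothesis (H) with L_j = 1, whereas it violates the boundedness assumption used in the references cited above:

\[
g_j(\xi) = \xi, \qquad 0 \le \frac{g_j(\xi_1)-g_j(\xi_2)}{\xi_1-\xi_2} = 1 = L_j \quad \text{for all } \xi_1 \ne \xi_2 .
\]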

4 An illustrative example

To demonstrate the validity of the stabilisation conditions, an example is given in this section.

Example 1: A two-dimensional neural network with time-varying delays is described by the following equation:

\[
\frac{\partial v_i(t,x)}{\partial t} = \sum_{k=1}^{l}\frac{\partial}{\partial x_k}\Big(D_{ik}\frac{\partial v_i(t,x)}{\partial x_k}\Big) - c_i v_i(t,x) + \sum_{j=1}^{n} w_{ij}\,g_j(v_j(t,x)) + \sum_{j=1}^{n} h_{ij}\,g_j(v_j(t-\tau_j(t),x)), \qquad (21)
\]

for i = 1, 2 and k = 1, where

\[
C = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad W = \begin{bmatrix} 1.5 & 1.3 \\ -1.4 & 1.2 \end{bmatrix} \quad \text{and} \quad H = \begin{bmatrix} 0.5 & 0.3 \\ -0.4 & 0.2 \end{bmatrix}.
\]

Let D_ik = 1, i = 1, 2, k = 1. The delays τ_1(t) = τ_2(t) = e^t/(1 + e^t) are time-varying and satisfy 0 ≤ τ_j(t) ≤ 1 = τ* and τ̇_j(t) ≤ 0.5, j = 1, 2 (indeed, τ̇_j(t) = e^t/(1 + e^t)² ≤ 1/4), and σ = 0.5. The activation function is the piecewise linear (PWL) function g_j(ξ) = 0.5(|ξ + 1| − |ξ − 1|) (j = 1, 2). Clearly, g_j(ξ) satisfies the condition (H) above, with L_1 = L_2 = 1. The dynamical behaviours of the two neural states v_1 and v_2 without the control law are shown in Figures 1 and 2 with the initial values v_1(0, x) = 1, v_2(0, x) = 0 and the boundary conditions ∂v_1/∂x(t, 0) = 0, v_1(t, 1) = 1 and ∂v_2/∂x(t, 1) = 0, v_2(t, 0) = 0. We can see that the neural states do not converge towards an equilibrium.

Considering the counterpart control system (5) of system (21), the feedback controller is designed as follows:

\[
u(t,x) = -v(t,x) - g(v(t,x)). \qquad (22)
\]

To achieve stabilisation, the controller gain matrix K = diag(k_1, k_2, …, k_n) is designed to be

\[
K = \begin{bmatrix} 0.8 & 0 \\ 0 & 0.7 \end{bmatrix}. \qquad (23)
\]

So, by solving the LMI (8) of Theorem 1 using the Matlab Toolbox, a feasible solution is

\[
D = \begin{bmatrix} 0.1514 & 0 \\ 0 & 0.1514 \end{bmatrix} > 0.
\]

Figure 1  Dynamical behaviour simulation of neural state v1 without control law (see online version for colours)

Figure 2  Dynamical behaviour simulation of neural state v2 without control law (see online version for colours)

So, it follows from Theorem 1 that the neural network (21) can be stabilised via the feedback control (22) with the controller gain matrix (23). The dynamical behaviours under control are shown in Figures 3 and 4 with the initial values v1(0, x) = 1, v2(0, x) = 0 and the boundary conditions ∂v1/∂x(t, 0) = 0, v1(t, 1) = 1 and ∂v2/∂x(t, 1) = 0, v2(t, 0) = 0, and the space surface plots are shown in Figures 5 and 6. As we can see, the states of the neural network converge towards the equilibrium.

Figure 3  Dynamical behaviour simulation of neural state v1 under control (see online version for colours)

Figure 4  Dynamical behaviour simulation of neural state v2 under control (see online version for colours)

Figure 5  Space surface plot of t – x – v1 (see online version for colours)

Figure 6  Space surface plot of t – x – v2 (see online version for colours)
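A feasibility check of this kind can also be scripted outside Matlab. The following is a minimal sketch (not the authors' code) that searches for D = diag(d_1, d_2) > 0 satisfying the block LMI (8) as reconstructed above, using the example data (21)-(23); both the block structure and the numerical signs of W and H follow the recovered text and should be treated as assumptions, and an SDP-capable solver such as SCS is assumed to be installed.

# Sketch of an LMI feasibility check for condition (8) with the example data;
# the structure of Omega follows the reconstruction of (8) above (an assumption).
import numpy as np
import cvxpy as cp

n = 2
c = np.array([1.0, 1.0])
k = np.array([0.8, 0.7])                   # controller gains from (23)
L = np.array([1.0, 1.0])                   # Lipschitz constants of the PWL activation
sigma = 0.5
W = np.array([[1.5, 1.3], [-1.4, 1.2]])
H = np.array([[0.5, 0.3], [-0.4, 0.2]])

A = np.diag(-2 * c - 2 * k)                # A = diag(-2c_i - 2k_i)
P = np.diag(2 * k / L)                     # P = diag(2k_i / L_i)

d = cp.Variable(n)                         # unknown diagonal of D
D = cp.diag(d)
Z = np.zeros((n, n))

Omega = cp.bmat([[A,   W,                     H],
                 [W.T, D / (1 - sigma) - P,   Z],
                 [H.T, Z,                    -D]])
Omega_sym = (Omega + Omega.T) / 2          # enforce symmetry for the PSD constraint

prob = cp.Problem(cp.Minimize(0),
                  [d >= 1e-6, Omega_sym << -1e-6 * np.eye(3 * n)])
prob.solve()
print(prob.status, d.value)

The paper reports D = diag(0.1514, 0.1514) as the feasible solution obtained with the Matlab Toolbox; any d returned with a feasible status by the sketch plays the same role in Theorem 1.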

5 Conclusions

1 Based on suitable Lyapunov functionals and analytic techniques, several sufficient conditions for the stabilisation of CNNs with time-varying delays and reaction–diffusion terms have been obtained. These criteria may play an important role in the design and applications of stabilised neural networks with reaction–diffusion terms.

2 Some well-known neural network models become special cases of system (1). For example, when D_ik(t, x, v) = 0 (i = 1, 2, …, n and k = 1, 2, …, l), system (1) reduces to the generic Cellular Neural Network, which has been studied in Cao and Zhou (1999), Arik (2000, 2002a, b), Cao (2000, 2001), Liao and Wang (2000), Zhou and Cao (2002), Singh (2004) and Zhang and Wang (2007a, b).

3 From the theorems, we conclude that if the reaction–diffusion terms satisfy weaker conditions, the main effect on the stabilisation of the neural networks comes from the network parameters. The given algebraic criteria are easy to verify, which brings convenience to those who design and verify such neural networks.

Acknowledgements This work is supported by the National Natural Science Foundation of China (No. 60674026), the Key Research Foundation of Science and Technology of the Ministry of Education of China (No. 107058), the Provincial Natural Science Foundation of Jiangsu (No. BK2007016), the Jiangsu Provincial Program for Postgraduate Scientific Innovative Research of the Jiangnan University (No. CX07B_116z) and PIRTJiangnan.


References Arik, S. (2000) ‘On the global asymptotic stability of delayed cellular neural networks’, IEEE Transactions on Circuits Systems, Vol. 47, pp.571–574. Arik, S. (2002a) ‘An analysis of global asymptotic stability of delayed Cellular Neural Networks’, IEEE Transactions on Neural Networks, Vol. 13, pp.1239–1242. Arik, S. (2002b) ‘An improved global stability result for delayed Cellular Neural Networks’, IEEE Transactions on Circuits and Systems-I: Fundamental Theory and Applications, Vol. 49, pp.1211–1214. Cao, J. (2000) ‘Global exponential stability and existence of periodic solutions of delayed CNNs’, Science in China (Series E), Vol. 30, pp.541–549. Cao, J. (2001) ‘Global stability conditions for delayed CNNs’, IEEE Transactions on Circuits Systems, Vol. 48, pp.1330–1333. Cao, J.D. and Zhou, D.M. (1999) ‘Global stability analysis of delayed Cellular Neural Networks’, Journal of Biomathematics, Vol. 14, pp.65–71 (in Chinese). Chua, L.O. (1998) CNN: A Paradigm for Complexity. Singapore: World Scientific. Chua, L.O. and Roska, T. (1990) ‘Cellular neural networks with nonlinear and delay-type template elements’, Paper presented in the Proceedings of the IEEE International Workshop on Cellular Neural Networks Applications, pp.12–25. Chua, L.O. and Yang, L. (1988a) ‘Cellular neural networks: theory’, IEEE Transactions on Circuits Systems, Vol. 35, pp.1257–1272. Chua, L.O. and Yang, L. (1988b) ‘Cellular neural networks: applications’, IEEE Transactions on Circuits Systems, Vol. 35, pp.1273–1290. Cui, B.T. and Lou, X.Y. (2006) ‘Global asymptotic stability of BAM neural networks with distributed delays and reaction–diffusion terms’, Chaos, Solitons and Fractals, Vol. 27, pp.1347–1354. Gao, H.J., Wang, C.H. and Zhao, L. (2003) ‘Comments on an LMI-based approach for robust stabilization of uncertain stochastic systems with time-varying delays’, IEEE Transactions on Automatic Control, Vol. 48, pp.2073–2074. Gao, H.J., Lam, J., Wang, C.H. and Wang, Y.F. (2004) ‘Delay-dependent output-feedback stabilization of discrete-time systems with time-varying state delay’, IEE Proceedings-Control Theory and Applications, Vol. 151, pp.691–698. Ge, S.S., Hong, F. and Lee, T.H. (2005) ‘Robust adaptive control of nonlinear systems with unknown time delays’, Automatica, Vol. 41, pp.1181–1190. Hale, J. and Verduyn Lunel, S.M. (1993) Introduction to Functional Differential Equations. New York, NY: Springer. Kwon, O.M. and Park, J.H. (2004) ‘On improved delay-dependent robust control for uncertain time-delay systems’, IEEE Transactions on Automatic Control, Vol. 49, pp.1991–1995. Li, Y.K. (2004) ‘Global robust stability of interval cellular neural networks with time-varying delays’, Physics Letters A, Vol. 333, pp.51–61. Liang, J. and Cao, J. (2003) ‘Global exponential stability of reaction–diffusion recurrent neural networks with time-varying delays’, Physics Letters A, Vol. 314, pp.434–442. Liao, T.L. and Wang, F.C. (2000) ‘Global stability for cellular neural networks with time delay’, IEEE Transactions on Neural Networks, Vol. 11, pp.1481–1484. Liao, X.X., Fu, Y.L. and Gao, J. (2000) ‘Stability of hopfield neural networks with reaction– diffusion terms’, Acta Electronica Sinica, Vol. 28, pp.78–82. Liao, X.X., Yang, S.Z. and Cheng, S.J. (2005) ‘Stability of general neural networks with reaction– diffusion’, Science in China, Vol. 335, pp.213–225. Luo, Q. (2004) ‘Stabilization of stochastic hopfield neural networks with reaction–diffusion’, Science in China, Vol. 34, pp.619–628.


Lou, X.Y. and Cui, B.T. (2006a) ‘New LMI conditions for delay-dependent asymptotic stability of delayed cellular neural networks’, Neurocomputing, Vol. 69, pp.2374–2378. Lou, X.Y. and Cui, B.T. (2006b) ‘Robust exponential stabilization of a class of delayed neural networks with reaction–diffusion terms’, Int. J. Neural Systems, Vol. 16, pp.435–443. Lou, X.Y. and Cui, B.T. (2007a) ‘Boundedness and exponential stability for nonautonomous cellular neural networks with reaction–diffusion terms’, Chaos, Solitons and Fractals, Vol. 33, pp.653–662. Lou, X.Y. and Cui, B.T. (2007b) ‘Boundedness and exponential stability for nonautonomous RCNNs with distributed delays’, Computers and Mathematics with Applications, Vol. 54, pp.589–598. Roska, T., Boros, T., Thiran, P. and Chua, L.O. (1990) ‘Detecting simple motion using cellular neural networks’, Paper presented in the Proceedings of the IEEE International Workshop on Cellular Neural Networks Applications, pp.127–138. Singh, V. (2004) ‘A generalized LMI-based approach to the global asymptotic stability of delayed cellular neural networks’, IEEE Transactions on Neural Networks, Vol. 15, pp.223–225. Song, Q.K., Zhao, Z.J. and Li, Y.M. (2005) ‘Global exponential stability of BAM with distributed delays and reaction–diffusion terms’, Physics Letters A, Vol. 335, pp.213–225. Zhang, H. and Wang, G. (2007a) ‘New criteria of global exponential stability for a class of generalized neural networks with time-varying delays’, Neurocomputing, Vol. 70, pp.2486–2494. Zhang, H. and Wang, Z. (2007b) ‘Global asymptotic stability of delayed cellular neural networks’, IEEE Transactions on Neural Networks, Vol. 18, pp.947–950. Zhou, D.M. and Cao, J.D. (2002) ‘Globally exponential stability conditions for cellular neural networks with time-varying delays’, Applied Mathematics and Computation, Vol. 131, pp.487–496.


Global stability of a class of Cohen–Grossberg neural networks with delays Zhanshan Wang* and Jian Feng School of Information Science and Engineering, Northeastern University, Shenyang, Liaoning 110004, People’s Republic of China E-mail: [email protected] E-mail: [email protected] *Corresponding author

Gang Chen Institute of Science, Shenyang Ligong University, Shenyang, Liaoning 110168, People's Republic of China E-mail: [email protected]

Abstract: This paper is concerned with the Global Asymptotic Stability (GAS) of a general class of Cohen–Grossberg neural networks with both multiple time-varying delays and distributed delays. Criteria are established to ensure the GAS of the concerned neural networks; they can be expressed in the form of Linear Matrix Inequalities and are independent of the amplification functions. Furthermore, a sufficient condition guaranteeing global robust stability is established for the general class of Cohen–Grossberg neural networks with both multiple time-varying delays and distributed delays in the case of parameter uncertainties.

Keywords: Cohen–Grossberg neural networks; distributed delays; Global Asymptotic Stability; GAS; Linear Matrix Inequality; LMI; multiple time varying delays; robust stability.

Reference to this paper should be made as follows: Wang, Z., Feng, J. and Chen, G. (2009) 'Global stability of a class of Cohen–Grossberg neural networks with delays', Int. J. Intelligent Systems Technologies and Applications, Vol. 6, Nos. 1/2, pp.22–49.

Biographical notes: Zhanshan Wang was born in Liaoning Province, People's Republic of China, in November 1971. He received the BEng in Electrical Automation, and the MSc and PhD, both in Control Theory and Control Engineering, in 1994, 2001 and 2006, respectively. Currently, he is an Associate Professor in the School of Information Science and Engineering of Northeastern University, China. His research fields include non-linear control, fault diagnosis and fault tolerant control, and stability analysis of non-linear systems. He has published more than 30 papers in journals and conferences.

Copyright © 2009 Inderscience Enterprises Ltd.

Jian Feng was born in Liaoning Province, People's Republic of China, in June 1971. He received a BEng in Electrical Automation and MSc and PhD degrees in Control Theory and Control Engineering in 1993, 1996 and 2005, respectively. Currently, he is an Associate Professor in the School of Information Science and Engineering of Northeastern University, China. His research fields include fuzzy control, fault diagnosis and fault-tolerant control, and stability analysis of non-linear systems. He has published more than 30 papers in international journals and conferences.

Gang Chen was born in the Inner Mongolia Autonomous Region, People's Republic of China, in April 1968. He received a BSc in Physical Education from the Inner Mongolia University for Nationalities in 1990. Currently, he is a Lecturer in the Institute of Science of Shenyang Ligong University, China. His research fields include stability analysis and control of neural networks, image processing and related topics.

1 Introduction

Cohen and Grossberg (1983) proposed a neural network model described by the following system of equations:
$$\dot{u}_i(t) = -a_i(u_i(t))\Big[c_i(u_i(t)) - \sum_{j=1}^{n} w_{ij}\, g_j(u_j(t))\Big], \qquad (1)$$

where $a_i(u_i(t))$ is a positive and bounded amplification function, $c_i(u_i(t))$ is a well-defined function that guarantees the existence of solutions of system (1), $g_j(u_j(t))$ is an activation function describing the effect of the input on the output of a neuron, and $w_{ij}$ is the connection weight coefficient of the neural network, $i, j = 1, \ldots, n$. System (1) includes a number of models from neurobiology, population biology and evolution theory, as well as the Hopfield neural network model (Hopfield, 1984), as special cases.

In electronic implementations of analogue neural networks, delays always exist owing to signal transmission and the finite switching speed of amplifiers (Marcus and Westervelt, 1989). On the other hand, it is desirable to introduce delays into neural networks when dealing with problems associated with motion (Roska et al., 1992; Roska, Wu and Chua, 1993; Liao, Wu and Yu, 2002). Therefore, model (1) and its delayed versions have attracted the attention of many researchers and have been extensively investigated owing to their wide applications in various fields (Chen and Rong, 2003, 2004; Chen, 2006; Guo and Huang, 2006; Liao, Li and Wong, 2004; Lu, 2005; Lu, Shen and Chung, 2005; Lu and Chen, 2003; Wang and Zou, 2002a, b; Xiong and Cao, 2005; Ye, Michel and Wang, 1995; Zhang and Ji, 2005). Among them, Ye, Michel and Wang (1995) and Zhang and Ji (2005) introduced constant delays into Equation (1), which yields the following form,
$$\dot{u}_i(t) = -a_i(u_i(t))\Big[c_i(u_i(t)) - \sum_{k=0}^{N}\sum_{j=1}^{n} w_{ij}^{k}\, g_j(u_j(t-\tau_k))\Big], \qquad i = 1, \ldots, n, \qquad (2)$$


where $\tau_k \ge 0$ are bounded constant delays, $w_{ij}^k$ are connection weight coefficients and the other notations are the same as those in system (1), $k = 0, \ldots, N$, $i, j = 1, \ldots, n$. In particular, a novel delay-independent stability criterion was established for model (2) on the basis of a Linear Matrix Inequality (LMI) in Zhang and Ji (2005). The result in Zhang and Ji (2005) has the following advantages. It overcame the shortcomings of, and improved upon, the result in Ye, Michel and Wang (1995); that is, the result in Zhang and Ji (2005) takes the signs of the entries of the interconnection matrix into account. On the other hand, when dealing with time-varying delays, especially when the change rate and the size of the time delay are unknown, the superiority of the result in Zhang and Ji (2005) is straightforward.

We remark that one can consider several types of delays in Equation (2) (Gopalsamy and He, 1994; Niculosu, 2001; Richard, 2003). Gopalsamy and He (1994) first investigated the stability problem of asymmetric Hopfield neural networks with continuously distributed delays. The characteristic of this kind of continuously distributed delay is that the delays range over an infinitely long duration. The dynamics of different kinds of neural networks with this kind of continuously distributed delay have been widely studied (Chen, 2002; Chen and Zheng, 2006; Liu and Han, 2006; Wang, 2005). Although the results of Chen (2002), Chen and Zheng (2006), Liu and Han (2006) and Wang (2005) are generally easy to verify, all of them take the absolute value of the connection weight coefficients. Therefore, the sign difference of the entries in the connection matrix is ignored, which amounts to ignoring the neurons' excitatory and inhibitory effects on the neural network. However, some kinds of continuously distributed delays in practical systems often range over a finite duration. For example, one application of this kind of continuously distributed delay system can be found in the modelling of feeding systems and combustion chambers in a liquid monopropellant rocket motor with pressure feeding (Crocco, 1951; Fiagbedzi and Pearson, 1987). Such systems have been investigated in Kolmanovskii and Richard (1999), Xie, Fridman and Shaked (2001), Lam, Gao and Wang (2005), and Xu et al. (2005), but all of these results are only suitable for linear systems with constant delay. It is well known that a neural network is a large-scale and complex non-linear dynamic system. Different types of delays may occur, and it is useful to investigate the dynamics of neural networks with different types of delays.

Motivated by the above discussion, the purpose of this paper is to establish sufficient conditions for the global asymptotic stability of the following system with both multiple time-varying delays and a kind of continuously distributed delay ranging over a finite duration,
$$\dot{u}_i(t) = -a_i(u_i(t))\Big[c_i(u_i(t)) - \sum_{j=1}^{n} w_{ij}\, g_j(u_j(t)) - \sum_{k=1}^{N}\sum_{j=1}^{n} w_{ij}^{k}\, g_j(u_j(t-\tau_{kj}(t))) - \sum_{l=1}^{r}\sum_{j=1}^{n} b_{ij}^{l} \int_{t-d_l}^{t} g_j(u_j(s))\,\mathrm{d}s - U_i\Big], \qquad (3)$$

where $w_{ij}^k$ and $b_{ij}^l$ are connection weight coefficients of the neural network, the time delays $\tau_{kj}(t) > 0$ and $d_l > 0$ are all bounded, $U_i$ is the external constant input, and the other notations are the same as those in system (1), $i, j = 1, \ldots, n$, $k = 1, \ldots, N$ and $l = 1, \ldots, r$. Note that the continuously distributed delays mentioned in the following all refer to continuously distributed delays ranging over a finite duration, of the form appearing in Equation (3).

We also note that system (3) reduces to the popular Hopfield model (Hopfield, 1984) when $a_i(u_i(t)) \equiv 1$, $w_{ij}^k = 0$, $b_{ij}^l = 0$ and $c_i(u_i(t))$ is taken to be $c_i u_i(t)$ with $c_i > 0$. When $a_i(u_i(t)) \equiv 1$, $b_{ij}^l = 0$, $N = n$ and $c_i(u_i(t)) = c_i u_i(t)$ with $c_i > 0$, sufficient criteria for global stability have been obtained in Chen (2001), Huang, Ho and Cao (2005), Liao and Wang (2003), Zeng, Wang and Liao (2003), and Zhang and Wang (2007). When $b_{ij}^l = 0$, $w_{ij} = 0$ and $N = n$, sufficient conditions for global asymptotic stability are derived in Guo and Huang (2006). When $b_{ij}^l = 0$ and $\tau_{kj}(t) = \tau_k$, criteria for global asymptotic stability are derived in Chen (2006), Ji, Zhang and Guan (2007), Ye, Michel and Wang (1995), Zhang and Ji (2005), and Zhang, Ji and Zhang (2006). When $b_{ij}^l = 0$ and $N = n$, stability results are derived in Chen and Rong (2004), Lu (2005) and Lu, Shen and Chung (2005). The neural network model studied here is therefore more general than those studied in Chen (2001), Chen and Rong (2004), Chen (2006), Guo and Huang (2006), Hopfield (1984), Huang, Ho and Cao (2005), Liao and Wang (2003), Lu (2005), Lu, Shen and Chung (2005), Ye, Michel and Wang (1995), and Zeng, Wang and Liao (2003).

In this paper, two sufficient conditions are derived to ensure the global asymptotic stability of system (3) via the linear matrix inequality technique; they are independent of the size of the time-varying delays and of the amplification functions. Corollaries are also given for some special cases of system (3). Moreover, a global robust stability criterion is established for system (3) with parameter uncertainties. All the obtained results are easy to verify and take the neurons' excitatory and inhibitory effects on the network into account. A minimal simulation sketch of an instance of model (3) is given at the end of this section.

The rest of the paper is organised as follows. In Section 2, we provide some notations, assumptions and lemmas which will be used later. In Section 3, we establish two sufficient conditions for the global asymptotic stability of system (3) based on the LMI technique; some corollaries are also derived from the main results. In Section 4, a sufficient condition ensuring global robust stability is established for system (3) with parameter uncertainties. Two numerical examples are used to show the effectiveness of the obtained results in Section 5, and conclusions are drawn in Section 6.
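To make the class of systems covered by Equation (3) concrete, the following minimal sketch integrates a small two-neuron instance of (3) with a forward-Euler scheme. Every function and parameter value in it (the amplification, behaved and activation functions, the weight matrices, the delays and the step size) is an illustrative assumption, not data taken from later sections.

```python
# Forward-Euler sketch of a two-neuron instance of model (3).
# All functions and parameters below are illustrative assumptions:
# a_i(u) = 1 + 0.5/(1+u^2), c_i(u) = 2u, g_j(u) = tanh(u),
# one discrete delay (N = 1) and one distributed-delay kernel (r = 1).
import numpy as np

n = 2
W  = np.array([[0.5, -1.0], [0.1, -1.0]])     # w_ij    (assumed)
W1 = np.array([[-0.78, 0.2], [0.9, -0.2]])    # w_ij^1  (assumed)
B1 = np.array([[0.1, 0.0], [0.0, 0.1]])       # b_ij^1  (assumed)
U  = np.array([1.0, -2.0])                    # external inputs (assumed)
tau, d1, h, T = 0.5, 1.0, 0.001, 20.0         # delay, kernel length, step, horizon

a = lambda u: 1.0 + 0.5 / (1.0 + u**2)        # amplification functions
c = lambda u: 2.0 * u                         # behaved functions c_i
g = np.tanh                                   # activation functions

steps = int(T / h)
hist  = int(max(tau, d1) / h)                 # length of the history buffer
u = np.zeros((steps + hist, n))
u[:hist] = 0.1                                # constant initial condition

for k in range(hist, steps + hist - 1):
    u_t   = u[k]
    u_tau = u[k - int(tau / h)]               # u(t - tau)
    # finite-duration distributed delay: integral of g(u(s)) over [t-d1, t]
    dist  = g(u[k - int(d1 / h):k]).sum(axis=0) * h
    rhs   = -a(u_t) * (c(u_t) - W @ g(u_t) - W1 @ g(u_tau) - B1 @ dist - U)
    u[k + 1] = u_t + h * rhs

print("final state:", u[-1])
```

If the chosen parameters correspond to a stable network, the printed trajectory endpoint settles near an equilibrium point of (3).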

2 Preliminaries

Throughout the paper, let $B^T$, $B^{-1}$, $\lambda_m(B)$, $\lambda_M(B)$ and $\|B\| = \sqrt{\lambda_M(B^T B)}$ denote the transpose, the inverse, the smallest eigenvalue, the largest eigenvalue and the Euclidean norm of a square matrix $B$, respectively. Let $B > 0$ ($B < 0$) denote a positive (negative) definite symmetric matrix. The time-varying delays $\tau_{ij}(t)$ are bounded, $0 \le \tau_{ij}(t) \le \rho$, $i = 1, \ldots, N$ and $j = 1, \ldots, n$.


Assumption 2.1. There exist constants $\gamma_i > 0$ such that the function $c_i(\cdot)$ satisfies
$$\frac{c_i(\zeta) - c_i(\xi)}{\zeta - \xi} \ge \gamma_i, \qquad (4)$$
for all $\zeta, \xi \in \mathbb{R}$, $\zeta \ne \xi$, and $i = 1, \ldots, n$.

Assumption 2.2. The bounded activation function $g_i(\cdot)$ satisfies the following condition,
$$0 \le \frac{g_i(\zeta) - g_i(\xi)}{\zeta - \xi} \le \delta_i, \qquad (5)$$
for all $\zeta, \xi \in \mathbb{R}$, $\zeta \ne \xi$, and some $\delta_i > 0$, $i = 1, \ldots, n$. Let $\Gamma = \mathrm{diag}(\gamma_1, \ldots, \gamma_n)$ and $\Delta = \mathrm{diag}(\delta_1, \ldots, \delta_n)$. Obviously, the positive diagonal matrices $\Gamma$ and $\Delta$ are non-singular.

Assumption 2.3. $a_i(u_i(t))$ is a positive and bounded amplification function, $i = 1, \ldots, n$.

Lemma 2.1. Let $X$ and $Y$ be two real vectors with appropriate dimensions, and let $\Pi$ and $Q$ be two matrices with appropriate dimensions, where $Q > 0$. Then, for any two positive constants $m$ and $l$, the following inequality holds,
$$-m X^T Q X + 2 l X^T \Pi Y \le l^2 Y^T \Pi^T (mQ)^{-1} \Pi Y. \qquad (6)$$
This lemma can be proved as follows:
$$-m X^T Q X + 2 l X^T \Pi Y = -\big[(mQ)^{1/2} X - (mQ)^{-1/2} l \Pi Y\big]^T \big[(mQ)^{1/2} X - (mQ)^{-1/2} l \Pi Y\big] + l^2 Y^T \Pi^T (mQ)^{-1} \Pi Y \le l^2 Y^T \Pi^T (mQ)^{-1} \Pi Y.$$
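Because Lemma 2.1 is used repeatedly below, a quick numerical spot-check of inequality (6) may be helpful. The sketch draws random vectors and matrices; the sizes, distributions and the constants $m$ and $l$ are arbitrary choices made only for illustration.

```python
# Numerical spot-check of inequality (6) in Lemma 2.1 on random samples.
import numpy as np

rng = np.random.default_rng(0)
n, m_const, l_const = 4, 1.7, 0.8               # arbitrary illustrative choices

for _ in range(1000):
    X  = rng.standard_normal((n, 1))
    Y  = rng.standard_normal((n, 1))
    Pi = rng.standard_normal((n, n))            # the matrix Pi of the lemma
    A  = rng.standard_normal((n, n))
    Q  = A @ A.T + n * np.eye(n)                # Q > 0
    lhs = -m_const * X.T @ Q @ X + 2 * l_const * X.T @ Pi @ Y
    rhs = l_const**2 * Y.T @ Pi.T @ np.linalg.inv(m_const * Q) @ Pi @ Y
    assert lhs <= rhs + 1e-9                    # inequality (6) should always hold
print("inequality (6) held on all random samples")
```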

According to Wang and Zou (2002), for every external constant input $U_i$, neural network (3) has an equilibrium point $u^* = [u_1^*, \ldots, u_n^*]^T$ if $a_i(\cdot)$, $c_i(\cdot)$ and $g_i(\cdot)$ satisfy the above conditions, $i = 1, \ldots, n$. Let $x_i(t) = u_i(t) - u_i^*$; then model (3) is transformed into the following form,
$$\dot{x}_i(t) = -A_i(x_i(t))\Big[C_i(x_i(t)) - \sum_{j=1}^{n} w_{ij}\, f_j(x_j(t)) - \sum_{k=1}^{N}\sum_{j=1}^{n} w_{ij}^{k}\, f_j(x_j(t-\tau_{kj}(t))) - \sum_{l=1}^{r}\sum_{j=1}^{n} b_{ij}^{l} \int_{t-d_l}^{t} f_j(x_j(s))\,\mathrm{d}s\Big], \qquad (7)$$
or, in vector–matrix form,
$$\dot{x}(t) = -A(x(t))\Big[C(x(t)) - W f(x(t)) - \sum_{k=1}^{N} W_k f(x(t-\tau_k(t))) - \sum_{l=1}^{r} B_l \int_{t-d_l}^{t} f(x(s))\,\mathrm{d}s\Big], \qquad (8)$$
where $x(t) = (x_1(t), \ldots, x_n(t))^T$, $A(x(t)) = \mathrm{diag}(A_1(x_1(t)), \ldots, A_n(x_n(t)))$, $A_i(x_i(t)) = a_i(x_i(t) + u_i^*)$, $f(x(t)) = (f_1(x_1(t)), \ldots, f_n(x_n(t)))^T$, $f_i(x_i(t)) = g_i(x_i(t) + u_i^*) - g_i(u_i^*)$, $C(x(t)) = (C_1(x_1(t)), \ldots, C_n(x_n(t)))^T$, $C_i(x_i(t)) = c_i(x_i(t) + u_i^*) - c_i(u_i^*)$, $W = (w_{ij})_{n\times n}$, $f(x(t-\tau_k(t))) = (f_1(x_1(t-\tau_{k1}(t))), \ldots, f_n(x_n(t-\tau_{kn}(t))))^T$, $\tau_k(t) = (\tau_{k1}(t), \ldots, \tau_{kn}(t))^T$, $W_k = (w_{ij}^k)_{n\times n}$, $B_l = (b_{ij}^l)_{n\times n}$, $i, j = 1, \ldots, n$, $k = 1, \ldots, N$ and $l = 1, \ldots, r$. The initial conditions of Equation (8) are of the form $x(\theta) = \varphi(\theta)$ for $-\rho \le \theta \le 0$, with supremum bound $\|\varphi\| = \sup_{-\rho \le \theta \le 0} \|\varphi(\theta)\|$. By Assumptions 2.1 and 2.2, we can easily see that $C_i(x_i(t))/x_i(t) \ge \gamma_i$ and $f_i(x_i(t))/x_i(t) \le \delta_i$ for any $x_i(t) \ne 0$, $i = 1, \ldots, n$.
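As noted above, for every constant input $U_i$ system (3) admits an equilibrium point $u^*$. At an equilibrium the delayed arguments all equal $u^*$ and each distributed-delay integral reduces to $d_l\, g_j(u_j^*)$, so $u^*$ solves a plain algebraic equation and can be located with a standard root finder. The sketch below does this for an assumed two-neuron instance; the functions and parameter values are illustrative assumptions only.

```python
# Sketch of locating an equilibrium point u* of model (3) numerically.
# At equilibrium: c_i(u_i*) = sum_j (w_ij + sum_k w_ij^k + sum_l d_l b_ij^l) g_j(u_j*) + U_i.
import numpy as np
from scipy.optimize import fsolve

W  = np.array([[0.5, -1.0], [0.1, -1.0]])   # assumed weights
W1 = np.array([[-0.78, 0.2], [0.9, -0.2]])
B1 = np.array([[0.1, 0.0], [0.0, 0.1]])
U  = np.array([1.0, -2.0])
d1 = 1.0
c  = lambda u: 2.0 * u                      # assumed c_i (satisfies Assumption 2.1)
g  = np.tanh                                # assumed activation (satisfies Assumption 2.2)

M = W + W1 + d1 * B1                        # effective connection matrix at equilibrium

def equilibrium_residual(u):
    return c(u) - M @ g(u) - U

u_star = fsolve(equilibrium_residual, np.zeros(2))
print("u* =", u_star, "residual =", equilibrium_residual(u_star))
```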

3 Global asymptotic stability results

We now state and prove our first result of the paper.

Theorem 3.1. Suppose that $\dot{\tau}_{ij}(t) \le \mu_{ij} < 1$. If there exist positive diagonal matrices $P = \mathrm{diag}(p_1, \ldots, p_n)$, $D = \mathrm{diag}(d_1, \ldots, d_n)$ and $Q_i = \mathrm{diag}(q_{i1}, \ldots, q_{in})$, and positive definite symmetric matrices $H_l > 0$ and $Y_l > 0$ with $H_l > Y_l$, such that the following conditions hold,
$$\Omega := \sum_{i=1}^{N}\Big(\frac{1}{\eta_i}\, D W_i Q_i^{-1} W_i^T D + Q_i\Big) - 2D\Gamma\Delta^{-1} + DW + W^T D + \sum_{l=1}^{r} d_l^2 H_l + \sum_{l=1}^{r} D B_l Y_l^{-1} B_l^T D < 0, \qquad (9)$$
$$-2P\Gamma + \sum_{l=1}^{r} P B_l H_l^{-1} B_l^T P < 0, \qquad (10)$$
then the equilibrium point of model (3) is globally asymptotically stable, independently of the amplification functions and of the size of the time-varying delays, where $\eta_i = \min_j(1 - \mu_{ij})$, $j = 1, \ldots, n$, $l = 1, \ldots, r$ and $i = 1, \ldots, N$.
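Before turning to the proof, note that conditions (9) and (10) are not linear matrix inequalities exactly as written, because of the $Q_i^{-1}$, $Y_l^{-1}$ and $H_l^{-1}$ terms, but a routine Schur-complement step turns them into LMIs in the unknowns $(P, D, Q_i, H_l, Y_l)$, so their feasibility can be checked with a semidefinite programming solver. The following sketch does this for the single-delay case $N = r = 1$; the reformulation, the CVXPY modelling choices and every numerical value in it are assumptions made only for illustration, not data from the paper.

```python
# Feasibility check of conditions (9)-(10) of Theorem 3.1 for N = r = 1,
# using Schur complements to turn the Q1^{-1}, Y1^{-1} and H1^{-1} terms
# into linear matrix inequality blocks.  All data below are assumed.
import numpy as np
import cvxpy as cp

n = 2
W     = np.array([[-0.5, 1.0], [0.1, -1.0]])   # w_ij        (assumed)
W1    = np.array([[-0.78, 0.2], [0.9, -0.2]])  # w_ij^1      (assumed)
B1    = np.array([[0.1, 0.0], [0.0, 0.1]])     # b_ij^1      (assumed)
Gamma = np.diag([1.0, 1.0])                    # gamma_i     (assumed)
Delta = np.diag([1.0, 1.0])                    # delta_i     (assumed)
d1, eta1, eps = 1.0, 1.0, 1e-6                 # kernel length, 1 - mu, margin

p  = cp.Variable(n, nonneg=True)
dv = cp.Variable(n, nonneg=True)
q  = cp.Variable(n, nonneg=True)
H1 = cp.Variable((n, n), symmetric=True)
Y1 = cp.Variable((n, n), symmetric=True)
P, D, Q1 = cp.diag(p), cp.diag(dv), cp.diag(q)
Z = np.zeros((n, n))

# Condition (9): A + (1/eta1) D W1 Q1^{-1} W1^T D + D B1 Y1^{-1} B1^T D < 0
A  = Q1 - 2 * D @ Gamma @ np.linalg.inv(Delta) + D @ W + W.T @ D + d1**2 * H1
M9 = cp.bmat([[A,           D @ W1,     D @ B1],
              [(D @ W1).T, -eta1 * Q1,  Z     ],
              [(D @ B1).T,  Z,         -Y1    ]])

# Condition (10): -2 P Gamma + P B1 H1^{-1} B1^T P < 0
M10 = cp.bmat([[-2 * P @ Gamma, P @ B1],
               [(P @ B1).T,    -H1    ]])

constraints = [M9 << -eps * np.eye(3 * n),
               M10 << -eps * np.eye(2 * n),
               H1 - Y1 >> eps * np.eye(n),
               Y1 >> eps * np.eye(n),
               p >= eps, dv >= eps, q >= eps]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("LMI feasibility status:", prob.status)
```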

Proof. Consider the following Lyapunov–Krasovskii functional,
$$V(x(t)) = V_1(x(t)) + V_2(x(t)) + V_3(x(t)), \qquad (11)$$


where
$$V_1(x(t)) = \alpha \sum_{l=1}^{r} \int_{t-d_l}^{t} \Big[\int_{s}^{t} f^T(x(\theta))\,\mathrm{d}\theta\Big] H_l \Big[\int_{s}^{t} f(x(\theta))\,\mathrm{d}\theta\Big]\mathrm{d}s,$$
$$V_2(x(t)) = \alpha \sum_{l=1}^{r} \int_{0}^{d_l}\!\int_{t-s}^{t} (\theta - t + s)\, f^T(x(\theta)) H_l f(x(\theta))\,\mathrm{d}\theta\,\mathrm{d}s,$$
$$V_3(x(t)) = \sum_{i=1}^{N} (\alpha + \beta_i) \sum_{j=1}^{n} \int_{t-\tau_{ij}(t)}^{t} q_{ij} f_j^2(x_j(s))\,\mathrm{d}s + 2(N+1)\sum_{i=1}^{n} p_i \int_{0}^{x_i(t)} \frac{s}{A_i(s)}\,\mathrm{d}s + 2\alpha \sum_{i=1}^{n} d_i \int_{0}^{x_i(t)} \frac{f_i(s)}{A_i(s)}\,\mathrm{d}s,$$
and $\alpha > 0$ and $\beta_i > 0$ are constants to be specified later, $i = 1, \ldots, N$.

Now we give a useful formula to be used in the proof. For a well-defined function $F(y) = \int_{x_1(y)}^{x_2(y)} f(x, y)\,\mathrm{d}x$, we have
$$F'(y) = \frac{\mathrm{d}}{\mathrm{d}y}\int_{x_1(y)}^{x_2(y)} f(x, y)\,\mathrm{d}x = \int_{x_1(y)}^{x_2(y)} \frac{\partial f(x, y)}{\partial y}\,\mathrm{d}x + f(x_2(y), y)\,\frac{\mathrm{d}x_2(y)}{\mathrm{d}y} - f(x_1(y), y)\,\frac{\mathrm{d}x_1(y)}{\mathrm{d}y}. \qquad (12)$$
The proof can be done as follows. Let $G(y, x_2, x_1) = \int_{x_1}^{x_2} f(x, y)\,\mathrm{d}x$, $x_1 = x_1(y)$ and $x_2 = x_2(y)$. Then $F(y)$ can be expressed as the compound function $F(y) = G(y, x_2(y), x_1(y))$. By the chain rule for compound functions and the derivative formula for an integral with a variable upper limit, it yields
$$F'(y) = \frac{\partial G}{\partial y} + \frac{\partial G}{\partial x_2}\frac{\mathrm{d}x_2}{\mathrm{d}y} + \frac{\partial G}{\partial x_1}\frac{\mathrm{d}x_1}{\mathrm{d}y} = \int_{x_1(y)}^{x_2(y)} \frac{\partial f(x, y)}{\partial y}\,\mathrm{d}x + f(x_2(y), y)\,\frac{\mathrm{d}x_2}{\mathrm{d}y} - f(x_1(y), y)\,\frac{\mathrm{d}x_1}{\mathrm{d}y}.$$

By formula (12), the derivative of $V_1(x(t))$ along the trajectories of Equation (8) is as follows,
$$\dot{V}_1(x(t)) = \alpha \sum_{l=1}^{r}\Big[-\Big(\int_{t-d_l}^{t} f^T(x(\theta))\,\mathrm{d}\theta\Big) H_l \Big(\int_{t-d_l}^{t} f(x(\theta))\,\mathrm{d}\theta\Big) + 2\int_{t-d_l}^{t}\Big(\int_{s}^{t} f^T(x(\theta))\,\mathrm{d}\theta\Big) H_l f(x(t))\,\mathrm{d}s\Big] = \alpha \sum_{l=1}^{r}\Big[-\Big(\int_{t-d_l}^{t} f^T(x(\theta))\,\mathrm{d}\theta\Big) H_l \Big(\int_{t-d_l}^{t} f(x(\theta))\,\mathrm{d}\theta\Big) + 2\int_{t-d_l}^{t} (\theta - t + d_l)\, f^T(x(\theta)) H_l f(x(t))\,\mathrm{d}\theta\Big]. \qquad (13)$$

By Lemma 2.1, we have $2 f^T(x(\theta)) H_l f(x(t)) \le f^T(x(\theta)) H_l f(x(\theta)) + f^T(x(t)) H_l f(x(t))$. Then the last term of Equation (13) satisfies
$$2\int_{t-d_l}^{t} (\theta - t + d_l)\, f^T(x(\theta)) H_l f(x(t))\,\mathrm{d}\theta \le \int_{t-d_l}^{t} (\theta - t + d_l)\, f^T(x(\theta)) H_l f(x(\theta))\,\mathrm{d}\theta + \frac{d_l^2}{2}\, f^T(x(t)) H_l f(x(t)). \qquad (14)$$

Substituting (14) into (13) yields
$$\dot{V}_1(x(t)) \le \alpha \sum_{l=1}^{r}\Big[-\Big(\int_{t-d_l}^{t} f^T(x(\theta))\,\mathrm{d}\theta\Big) H_l \Big(\int_{t-d_l}^{t} f(x(\theta))\,\mathrm{d}\theta\Big) + \int_{t-d_l}^{t} (\theta - t + d_l)\, f^T(x(\theta)) H_l f(x(\theta))\,\mathrm{d}\theta + \frac{d_l^2}{2}\, f^T(x(t)) H_l f(x(t))\Big]. \qquad (15)$$

Similarly, the derivatives of $V_i(x(t))$, $i = 2, 3$, along the trajectories of Equation (8) are, respectively,
$$\dot{V}_2(x(t)) = \alpha \sum_{l=1}^{r}\Big[\frac{d_l^2}{2}\, f^T(x(t)) H_l f(x(t)) - \int_{t-d_l}^{t} (d_l - t + \theta)\, f^T(x(\theta)) H_l f(x(\theta))\,\mathrm{d}\theta\Big], \qquad (16)$$
$$\dot{V}_3(x(t)) \le 2(N+1)\, x^T(t) P\Big[-C(x(t)) + W f(x(t)) + \sum_{k=1}^{N} W_k f(x(t-\tau_k(t))) + \sum_{l=1}^{r} B_l \int_{t-d_l}^{t} f(x(s))\,\mathrm{d}s\Big] + \sum_{i=1}^{N} (\alpha + \beta_i)\big[f^T(x(t)) Q_i f(x(t)) - \eta_i f^T(x(t-\tau_i(t))) Q_i f(x(t-\tau_i(t)))\big] + 2\alpha f^T(x(t)) D\Big[-C(x(t)) + W f(x(t)) + \sum_{k=1}^{N} W_k f(x(t-\tau_k(t))) + \sum_{l=1}^{r} B_l \int_{t-d_l}^{t} f(x(s))\,\mathrm{d}s\Big]. \qquad (17)$$

By Assumptions 2.1 and 2.2 and Lemma 2.1, we have
$$-2(N+1)\, x^T(t) P\, C(x(t)) \le -2(N+1)\, x^T(t) P \Gamma x(t), \qquad (18)$$
$$-2\alpha f^T(x(t)) D\, C(x(t)) \le -2\alpha f^T(x(t)) D \Gamma x(t) \le -2\alpha f^T(x(t)) D \Gamma \Delta^{-1} f(x(t)), \qquad (19)$$
$$2\alpha f^T(x(t)) D B_l \int_{t-d_l}^{t} f(x(s))\,\mathrm{d}s \le \alpha f^T(x(t)) D B_l Y_l^{-1} B_l^T D f(x(t)) + \alpha \Big(\int_{t-d_l}^{t} f^T(x(s))\,\mathrm{d}s\Big) Y_l \Big(\int_{t-d_l}^{t} f(x(s))\,\mathrm{d}s\Big), \qquad (20)$$
$$2(N+1)\, x^T(t) P B_l \int_{t-d_l}^{t} f(x(s))\,\mathrm{d}s \le (N+1)\Big[x^T(t) P B_l H_l^{-1} B_l^T P x(t) + \Big(\int_{t-d_l}^{t} f^T(x(s))\,\mathrm{d}s\Big) H_l \Big(\int_{t-d_l}^{t} f(x(s))\,\mathrm{d}s\Big)\Big]. \qquad (21)$$

Substituting (18)–(21) into (17) yields
$$\dot{V}_3(x(t)) \le -(N+1)\, x^T(t)\Big(2P\Gamma - \sum_{l=1}^{r} P B_l H_l^{-1} B_l^T P\Big) x(t) + 2(N+1)\, x^T(t) P W f(x(t)) + 2(N+1)\, x^T(t) P \sum_{k=1}^{N} W_k f(x(t-\tau_k(t))) + (N+1)\sum_{l=1}^{r}\Big(\int_{t-d_l}^{t} f^T(x(s))\,\mathrm{d}s\Big) H_l \Big(\int_{t-d_l}^{t} f(x(s))\,\mathrm{d}s\Big) + \sum_{i=1}^{N} (\alpha + \beta_i)\big[f^T(x(t)) Q_i f(x(t)) - \eta_i f^T(x(t-\tau_i(t))) Q_i f(x(t-\tau_i(t)))\big] - 2\alpha f^T(x(t)) D \Gamma \Delta^{-1} f(x(t)) + 2\alpha f^T(x(t)) D W f(x(t)) + 2\alpha f^T(x(t)) D \sum_{k=1}^{N} W_k f(x(t-\tau_k(t))) + \alpha \sum_{l=1}^{r} f^T(x(t)) D B_l Y_l^{-1} B_l^T D f(x(t)) + \alpha \sum_{l=1}^{r}\Big(\int_{t-d_l}^{t} f^T(x(s))\,\mathrm{d}s\Big) Y_l \Big(\int_{t-d_l}^{t} f(x(s))\,\mathrm{d}s\Big). \qquad (22)$$

By Equation (10) and Lemma 2.1 again, we have, for $k = 1, \ldots, N$,
$$-x^T(t)\Big(2P\Gamma - \sum_{l=1}^{r} P B_l H_l^{-1} B_l^T P\Big) x(t) + 2(N+1)\, x^T(t) P W f(x(t)) \le (N+1)^2 f^T(x(t)) W^T P \Big(2P\Gamma - \sum_{l=1}^{r} P B_l H_l^{-1} B_l^T P\Big)^{-1} P W f(x(t)), \qquad (23)$$
$$-x^T(t)\Big(2P\Gamma - \sum_{l=1}^{r} P B_l H_l^{-1} B_l^T P\Big) x(t) + 2(N+1)\, x^T(t) P W_k f(x(t-\tau_k(t))) \le (N+1)^2 f^T(x(t-\tau_k(t))) W_k^T P \Big(2P\Gamma - \sum_{l=1}^{r} P B_l H_l^{-1} B_l^T P\Big)^{-1} P W_k f(x(t-\tau_k(t))), \qquad (24)$$
$$-\alpha \eta_k f^T(x(t-\tau_k(t))) Q_k f(x(t-\tau_k(t))) + 2\alpha f^T(x(t)) D W_k f(x(t-\tau_k(t))) \le \frac{\alpha}{\eta_k}\, f^T(x(t)) D W_k Q_k^{-1} W_k^T D f(x(t)). \qquad (25)$$

Substituting (23)–(25) into (22), we have
$$\dot{V}_3(x(t)) \le \alpha \sum_{l=1}^{r}\Big(\int_{t-d_l}^{t} f^T(x(\theta))\,\mathrm{d}\theta\Big) Y_l \Big(\int_{t-d_l}^{t} f(x(\theta))\,\mathrm{d}\theta\Big) + (N+1)\sum_{l=1}^{r}\Big(\int_{t-d_l}^{t} f^T(x(s))\,\mathrm{d}s\Big) H_l \Big(\int_{t-d_l}^{t} f(x(s))\,\mathrm{d}s\Big) + f^T(x(t))\Big[(N+1)^2 W^T P\Big(2P\Gamma - \sum_{l=1}^{r} P B_l H_l^{-1} B_l^T P\Big)^{-1} P W + \sum_{k=1}^{N} \beta_k Q_k + \alpha\Big(\sum_{k=1}^{N}\Big(\frac{1}{\eta_k}\, D W_k Q_k^{-1} W_k^T D + Q_k\Big) - 2D\Gamma\Delta^{-1} + 2DW + \sum_{l=1}^{r} D B_l Y_l^{-1} B_l^T D\Big)\Big] f(x(t)) + \sum_{k=1}^{N} f^T(x(t-\tau_k(t)))\Big[(N+1)^2 W_k^T P\Big(2P\Gamma - \sum_{l=1}^{r} P B_l H_l^{-1} B_l^T P\Big)^{-1} P W_k - \beta_k \eta_k Q_k\Big] f(x(t-\tau_k(t))). \qquad (26)$$

Combining (15) and (16) with (26), we have
$$\dot{V}(x(t)) \le -\alpha \sum_{l=1}^{r}\Big(\int_{t-d_l}^{t} f^T(x(\theta))\,\mathrm{d}\theta\Big)(H_l - Y_l)\Big(\int_{t-d_l}^{t} f(x(\theta))\,\mathrm{d}\theta\Big) + (N+1)\sum_{l=1}^{r}\Big(\int_{t-d_l}^{t} f^T(x(s))\,\mathrm{d}s\Big) H_l \Big(\int_{t-d_l}^{t} f(x(s))\,\mathrm{d}s\Big) + f^T(x(t))\Big[(N+1)^2 W^T P\Big(2P\Gamma - \sum_{l=1}^{r} P B_l H_l^{-1} B_l^T P\Big)^{-1} P W + \sum_{k=1}^{N} \beta_k Q_k + \alpha\Big(\sum_{k=1}^{N}\Big(\frac{1}{\eta_k}\, D W_k Q_k^{-1} W_k^T D + Q_k\Big) - 2D\Gamma\Delta^{-1} + 2DW + \sum_{l=1}^{r} d_l^2 H_l + \sum_{l=1}^{r} D B_l Y_l^{-1} B_l^T D\Big)\Big] f(x(t)) + \sum_{k=1}^{N} f^T(x(t-\tau_k(t)))\Big[(N+1)^2 W_k^T P\Big(2P\Gamma - \sum_{l=1}^{r} P B_l H_l^{-1} B_l^T P\Big)^{-1} P W_k - \beta_k \eta_k Q_k\Big] f(x(t-\tau_k(t))). \qquad (27)$$

Now, we choose $\beta_k > 0$ such that
$$\beta_k \ge \frac{(N+1)^2\,\big\|P W_k\big\|^2\,\Big\|\Big(2P\Gamma - \sum_{l=1}^{r} P B_l H_l^{-1} B_l^T P\Big)^{-1}\Big\|}{\eta_k\,\lambda_m(Q_k)}. \qquad (28)$$
Then
$$(N+1)^2\, W_k^T P\Big(2P\Gamma - \sum_{l=1}^{r} P B_l H_l^{-1} B_l^T P\Big)^{-1} P W_k - \beta_k \eta_k Q_k \le 0, \qquad k = 1, \ldots, N. \qquad (29)$$

º  E kK k Qk » f ( x(t  W k (t ))). » ¼

Now, we choose Ek > 0 such that ( N  1) 2 PWk

Ek t

2

r

(2 P* 

¦ PB H l

l 1

K k Om (Qk )

1 T 1 l Bl P )

.

(28)

Global stability of a class of Cohen–Grossberg neural networks with delays 33 Then r

( N  1) 2 WkT P(2 P* 

¦ PB H l

1 T 1 l Bl P ) PWk

 E kK k Qk d 0, k

1, …, N .

(29)

l 1

Meanwhile, ª f T ( x(t )) «( N  1) 2 W T P(2 P*  «¬


g (u (t  W i1 (t ))), …, g (u (t  W in (t )))@T ,

and

by

the

definition of Ei, i = 1, …, n, model (39) can be written in a vector-matrix form as, u (t )

where

ª  A(u (t )) «C (u (t ))  Wg (u (t ))  «¬

A(u (t ))

n

º

¦ E g (u(t  W (t )))  U »» , i

i

j 1

diag a1 (u1 (t ) , …, an (un (t )) and C (u (t ))

(41)

¼

c1 (u1 (t )),…, cn (un (t )) T .

Then similar to the proof of Theorem 3.1, Corollary 3.3 can be proved.

Ƒ

Remark 3.4. In Lu, Shen and Chung (2005), a sufficient condition guaranteeing the global exponential convergence of model (39) with constant delays is derived based on the Young Inequality. Although the conservativeness of the result in Lu, Shen and

38

Z. Wang, J. Feng and G. Chen

Chung (2005) is decreased by involving many unknown parameters, they do not have a systematic approach to tune those unknown parameters in advance. Correspondingly, the result in Lu, Shen and Chung (2005) is difficult to verify. In contrast, Corollary 3.2 of the present paper can be expressed in the form of LMI and is easy to check. Another sufficient condition is also given in Lu (2005) to ensure the global asymptotic stability of system (39) based on the M-matrix theory. Although the result in Lu (2005) is easy to verify, it is generally conservative due to no free parameters to be tuned. On the other hand, because the sign difference of the entries in connection matrices is ignored, the result in Lu (2005) did not consider the neuron’s excitatory and inhibitory effects on neural networks. Compared with the result in Lu (2005), Corollary 3.2 of the present paper considers the sign difference in connection matrix and is less conservative because it involves some suitable free parameters to be tuned. Remark 3.5. Theorem 9 in Guo and Huang (2006) presented a sufficient condition to ensure the global asymptotic stability of system (39) with wij = 0. Although the conservativeness of the result is decreased by involving some unknown parameters, these conditions are not easy to check, especially when the number of neurons in a neural network is increased significantly. Moreover, the condition in Theorem 9 of Guo and Huang (2006) has a strict restriction on the amplification function, which limits the application of Theorem 9 in Guo and Huang (2006). Furthermore, the sign difference of connection matrix is ignored in Guo and Huang (2006). In contrast, Corollary 3.2 of the present paper overcomes the disadvantages of Guo and Huang (2006). For the following neural network model, n

ui (t )

ci ui (t ) 

¦

N

wij g j (u j (t )) 

j 1

n

k ij

¦¦ w g (u (t  W j

j

kj (t )))  U i ,

(42)

k 1 j 1

where ci > 0, i = 1, …, n and other parameters are the same as those defined in Equation (3), we can directly obtain the following result form Theorem 3.1. Corollary 3.4. Suppose that Wkj (t ) d P kj  1. If there exist positive diagonal matrices D diag (d1 , …, d n ) and Qk holds, N

§ 1

¦ ¨© K k 1

k

diag (qk1 , …, qkn ) such that the following condition

· DWk Qk1WkT D  Qk ¸  2 DC ' 1  DW  W T D  0, ¹

(43)

then the equilibrium point of model (42) is globally asymptotically stable, independent of the size of time varying delays, where C =diag(c1, …, cn), Kk = min(1–Pkj), j =1, …, n and k = 1, …, N. When Wkj(t) = Wk(t) in model (42), we have the following results. Corollary 3.5. Suppose that Wk (t ) d P k  1. If there exist positive diagonal matrix D diag (d1 , …, d n ) and positive definite symmetric matrices Qk > 0 such that the following condition holds, N

§

1

¦ ¨© 1  P k 1

k

· DWk Qk1WkT D  Qk ¸  2 DC ' 1  DW  W T D  0, ¹

(44)

Global stability of a class of Cohen–Grossberg neural networks with delays 39 then the equilibrium point of model (42) with Wkj(t) = Wk(t) is globally asymptotically stable, independent of the size of time varying delays, k = 1, …, N.

4

Global robust stability result

In practice, there are inevitably some uncertainties due to the existence of modelling external disturbance and parameter fluctuations, which would lead to complex dynamical behaviours of neural networks Ji, Zhang and Guan (2007). Therefore, a good neural network should have robustness against such uncertainties. In this section, we will establish a sufficient condition to ensure the global robust stability of neural network (7) in the presence of such uncertainties. Consider the Cohen–Grossberg neural network (7) with uncertainties in the following form, xi (t )

ª  Ai ( xi (t )) «Ci ( xi (t ))  «¬ N



n

k ij

¦¦ (w

n

¦ (w

ij

 G wijk (t )) f j ( x j (t  W kj (t ))) 

k 1 j 1

Where

 G wij (t )) f j ( x j (t ))

j 1

G wij (t ), G wijk (t )

r

n

G bijl (t )

l ij

¦¦ (b l 1 j 1

and

t

 G bijl (t ))

³

t  dl

(45) º f j ( x j ( s))ds » , » ¼

denote the unknown connection weight

coefficients representing time varying parameter uncertainties, and other notations are the same as those in Equation (7). For convenience of description, we let G W (t ) (G wij (t ))nun , G Wk (t ) (G wijk (t )) nun and G Bl (t ) (G bijl (t ))nun , i, j = 1, …, n, l =1, …, r and k = 1, …, N. Assumption 4.1. Parameter uncertainties satisfy GW(t) = M0F(t)G0, GWk(t) = MkF(t)Gk and G Bl (t ) M l F (t )Gl , respectively, where M0, G0, Mk, Gk, M l and Gl are the known structural matrices of uncertainties, F (t) is an unknown time varying matrix function satisfying F T(t) F (t) d I, k = 1, …, N and l = 1, …, r. Definition 4.1. The equilibrium point of system (7) is said to be globally robustly stable with respect to the perturbations G wi j (t ), G wikj (t ) and G bil j (t ) if the equilibrium point of system (45) is globally asymptotically stable, i, j = 1, …, r and k = 1, …, N. Lemma 4.1 (see Xie, Fu and de Souza (1992)). If Y, F (t) and Z are real matrices of appropriate dimensions with / satisfying / = /T, then / + Y F(t)Z + (Y F(t)Z)T < 0 for all FT (t)F(t) d I, if and only if there exist a positive constant H > 0 such that / +H –1Y YT + H ZTZ < 0. Theorem 4.1. Suppose that Wij (t ) d Pij  1. If there exist positive diagonal matrices D = diag (d1 , …, dn ) and Qi = diag (qi1 , …, qin ), positive definite symmetric matrices Hl > 0, positive consants H 0 ! 0, H i ! 0 and H l ! 0, such that the following linear matrix inequality holds,

40

;

Z. Wang, J. Feng and G. Chen ª :0 « T « B1 D « # « « BT D « r «W T D « 1 « # « T « WN D « T « M1 D « # « « M rT D « « M 0T D « « # «M T D ¬ N

DB1 " DBr H1 #

" %

0 #

0

"

0 #

" #

0

DW1 " DWN

DM1 " DM r

0 #

" "

0 #

0 #

" "

0 #

Hr

0

"

0

0

"

0

0 #

Q1 #

" %

0 #

0 #

" "

0 #

"

0

0

"

QN

0

"

0

0 #

" #

0 #

0 #

" #

0 #

H1I #

" %

0 #

0

"

0

0

"

0

0

" H r I

0 #

" #

0 #

0 #

" "

0 0

0 #

" "

0 #

0

"

0

0

"

0

0

"

0

DM 0 " DM N º » 0 0 » " # " # »» 0 0 » " » 0 0 » " » # " # » » 0 0 »0 " » 0 0 » " # # # » » 0 0 » " » 0 » H 0 I " » # % # » 0 " H N I »¼

(46)

then system (7) is globally robustly stable with respect to uncertainties GW(t), GWi(t) and GBl(t) satisfying Assumption 4.1, where H l  H l  H l GlT Gl , Qi Ki Qi  H i GiT Gi , N

:0

¦

Qi  2 D*' 1  DW  W T D 

i 1

r

¦d

2 l Hl

 H 0 G0T G0 , Ki = min(1–Pij), l = 1, …, r,

l 1

j = 1, …, n and i = 1, …, N. Proof. Neural network (45) can be written in the following vector-matrix form, x (t )

ª N  ( x(t ))  W f ( x(t  W (t )))   A( x(t )) «C ( x(t ))  Wf k k « k 1 ¬

¦

r

t

¦ ³ Bl

l 1

t  dl

º f ( x( s))ds » , » ¼

(47)

where W W  G W (t ), Wk Wk  G Wk (t ) and Bl Bl  G Bl (t ), l = 1, …, r, k = 1, …, N and other notations are the same as those defined in system (8). Consider the Lyapunov–Krasovskii functional (34). In a similar manner to the proof of Theorem 3.2, the derivative of Equation (34) along the trajectories of Equation (47) is of the form, ª V ( x(t )) d 2 f T ( x(t )) D ««C ( x(t ))  (W  G W (t )) f ( x (t )) «¬ N



¦ i 1 N



r



¦ l 1

¦ l 1

¦ ª¬ f i 1

t

r

(Wi  G Wi (t )) f ( x(t  W i (t )))  T

( Bl  G Bl (t ))

³

t  dl

º f ( x( s))ds » » ¼

( x(t ))Qi f ( x(t ))  Ki f T ( x(t  W i (t )))Qi f ( x(t  W i (t ))) º¼

dl2 f T ( x (t )) H l f ( x(t )) 

r

ª t º « f T ( x ( s ))ds »H l « » 1 ¬ t  dl ¼

¦ ³ l

ª t º « f ( x( s))ds » « » ¬ t  dl ¼

³

Global stability of a class of Cohen–Grossberg neural networks with delays 41 d 2 f T ( x(t )) D*' 1 f ( x(t ))  2 f T ( x(t )) D(W  G W (t )) f ( x(t ))  2 f T ( x(t )) D

N

¦ (W  G W (t )) f ( x(t  W (t ))) i

i

i

i 1

 2 f T ( x(t )) D

t

r

¦

( Bl  G Bl (t ))

l 1

N



¦ i 1 r



¦

³

f ( x( s))ds

t  dl

(48)

ª f T ( x(t ))Qi f ( x(t ))  Ki f T ( x(t  W i (t )))Qi f ( x(t  W i (t ))) º ¬ ¼ dl2 f T ( x(t )) H l f ( x(t )) 

l 1

r

ª t º « f T ( x( s))ds »H l « » 1 ¬t  dl ¼

¦ ³ l

ª t º « f ( x ( s))ds » « » ¬ t  dl ¼

³

I T (t );I (t ), where T

t t ª º I (t ) « f T ( x(t )) f T ( x( s))ds ! f T ( x( s ))ds f T ( x(t  W1 (t )))! f T ( x(t  W N (t ))) » , « » t  d1 t dr ¬ ¼  ª: D ( B1  G B1 (t )) " D( Br  G Br (t )) D (W1  G W1 (t )) " D(WN  G WN (t )) º « »  H1 " " 0 0 0 «* » «# » # % # # " # « » ; «* Hr 0 0 0 " " », «* » K1Q1 0 0 0 " " « » # # # # % # «# » « » K  * 0 0 0 Q " " N N ¬ ¼

³

 ȍ

³

2 D*' 1  D (W  G W (t ))  (W  G W (t ))T D 

r

¦ l 1

dl2 H l 

N

¦Q . i

i 1

Where the symbol * denotes the symmetric part of the corresponding element in a matrix. Obviously, if ;  0, then V ( x(t ))  0, for I (t) z 0. Note that the following inequality holds,

;

DB1 ª :1 « T « B1 D  H1 « # # « T « Br D 0 « T 0 «W1 D « # # « «W T D 0 ¬ N

" DBr " %

0 #

" Hr " #

0 #

"

0

DW1

"

0 #

" "

0

"

K1Q1 " # % 0

"

DWN º » 0 » # » » 0 » » 0 » # »» K N QN »¼

42

Z. Wang, J. Feng and G. Chen ª DM 1F (t )G1 < « T 0 « ( M 1 F (t )G1 ) D « # # « T «  ( M r F (t )Gr ) D 0 « T « ( M 1 F (t )G1 ) D 0 « # # « «( M F (t )G )T D 0 N ¬ N DB1 " DBr DW1 ª :1 « T 0 0 « B1 D  H1 " « # # % # # « T « Br D 0 " Hr 0 « T 0 " 0 K1Q1 «W1 D « # # # # # « «W T D 0 " 0 0 ¬ N

" DM r F (t )Gr " %

0 #

"

0

" #

0 #

"

0

" " " " " % "

DM1 F (t )G1 " DM N F (t )GN º » 0 0 " » » # " # » » 0 0 " » » 0 0 " » # % # » » 0 0 " ¼

DWN º » 0 » # » » 0 » » 0 » # »» K N QN »¼

ªG0T º ª DM 0 º « » « 0 » « 0 » « »  « 0 » F (t ) >G0 0 0 " 0 @  « 0 » F T (t ) ª¬ M 0T D 0 0 " 0 º¼ « » « » « # » « # » « 0 » «¬ 0 »¼ ¬ ¼ ª 0 º ª DM1 º « T» « » «G1 » « 0 »  « 0 » F (t ) ª¬ 0 G1 0 " 0 º¼  « 0 » F T (t ) ª¬ M1T D 0 0 " 0 º¼  ! « » « » « # » « # » « 0 » « 0 » ¬ ¼ ¬ ¼

ª 0 º « 0 » ª DM r º « » « » « 0 » « 0 » « »  « 0 » F (t ) ª¬ 0 0 0" Gr " 0 º¼  « # » F T (t ) ª¬ M rT D 0 0" 0º¼ « » «G T » « # » « r » « 0 » « # » ¬ ¼ « » ¬ 0 ¼

Global stability of a class of Cohen–Grossberg neural networks with delays 43 ª 0 º ª DM 1 º « 0 » « 0 » « » « » « # »  « 0 » F (t ) > 0 0 " G1 " 0@  « T » F T (t ) ¬ª M1T D 0 0 " 0¼º  " « » «G1 » « # » « # » «¬ 0 »¼ « » «¬ 0 »¼ ª DM N º « 0 » « »  « 0 » F (t ) > 0 0 0" GN @  « » « # » «¬ 0 »¼

(49)

ª 0 º « 0 » « » « 0 » F T (t ) ª M NT D 0 0 " 0 º  0, ¬ ¼ « » « # » «G T » ¬ N¼

where )

N

DM 0 F (t )G0  ( M 0 F (t )G0 )T D, :1

¦

Qi  2 D*' 1  DW  W T D 

i 1

r

¦d

2 l H l.

l 1

From Lemma 4.1, (49) holds for all F T(t) F (t) d I if and only if there exist constants H l ! 0, H i ! 0, i = 0, 1, …, N and l = 1, …, r, such that

;

DB1 ª :1 « T « B1 D  H1 « # # « T « Br D 0 « T 0 «W1 D « # # « «W T D 0 ¬ N

" DBr " %

DW1

"

0 #

" "

0

"

0 #

" Hr " #

0 #

"

0

K1Q1 " # % 0

"

DWN º » 0 » # » » 0 » » 0 » # »» K N QN »¼

ªG0T º ª DM 0 º « » « 0 » « 0 » « » H 01 « 0 » ª¬ M 0T D 0 0 " 0 º¼  H 0 « 0 » >G0 0 0 " 0@ « » « » « # » « # » « 0 » «¬ 0 »¼ ¬ ¼ ª DM1 º « » « 0 » H11 « 0 » ª¬ M1T D 0 0 " 0 º¼  H1 « » « # » « 0 » ¬ ¼

ª 0 º « T» «G1 » « 0 » ª 0 G1 0 " 0 º  " ¼ « »¬ « # » « 0 » ¬ ¼

44

Z. Wang, J. Feng and G. Chen ª 0 º « 0 » ª DM r º « » « » « 0 » « 0 » « » 1 « T ª º H r 0 » ¬ M r D 0 0 " 0 ¼  H r « # » ¬ª0 0 0 " Gr " 0 ¼º « » «G T » « # » « r » « 0 » « # » ¬ ¼ « » ¬ 0 ¼ ª 0 º ª DM1 º « 0 » « 0 » « » « » H11 « 0 » ª¬ M1T D 0 0 " 0 º¼  H1 «G1T » > 0 0 G1 " 0@  " « » « » « # » « # » « 0 » «¬ 0 »¼ ¬ ¼ ª DM N º « 0 » « » H N1 « 0 » ¬ª M NT D 0 0 " 0 ¼º  H N « » « # » «¬ 0 »¼

ª 0 º « 0 » « » « 0 » > 0 0 0 " GN @  0. « » « # » «G T » ¬ N¼

(50)

Rearranging (50), it yields

;

DB1 ª :2 « T T « B1 D  H1  H1G1 G1 « # # « « BT D 0 « r «W1T D 0 « # « # « T 0 ¬WN D

where :2

r

¦H

l

1

"

DBr

DW1

"

" %

0 #

0 #

" "

0

"

"  H r  H r GrTGr " #

0 #

K1Q1  H1G1TG1 #

" %

"

0

0

"

DM l M lT D 

l 1

N

¦H

DWN

º » 0 » » # » » 0 » » 0 » # » » T K N Qn  H N GN GN ¼

1 1 1 T T i DM i M i D  H 0 DM 0 M 0 D  H 0 G0 G0

 :1. By Schur

i 1

complement Boyd et al. (1994), (51) is equivalent to condition (46). This completes the proof of Theorem 4.1. Ƒ

Remark 4.1. Theorem 4.1 is a generalisation of Theorem 3.2. However, the generalisation of Theorem 3.1 to the case of robust stability is not a trivial problem.

5

Illustrative examples

In this section, we will use two examples to show the effectiveness of the obtained results.

Global stability of a class of Cohen–Grossberg neural networks with delays 45 Example 5.1. Let us consider a third-order Cohen–Grossberg neural network (8) with constant delays, where N = 1 and r = 1. The network parameters are as follows: ª3 0 0 º ª 1.3 1.8 1.5 º « » * « 0 3 0 » , W «« 2.1 1.5 1.2 »» , «¬ 0 0 2 »¼ «¬ 0.1 0.5 0.7 »¼ ª 0.8 1.2 0.1 º ª 1.5 0.2 0.1º « » W1 « 0.2 0.4 0.6 » and B1 «« 0.3 0.7 0.3»» . «¬ 0.8 0.1 1.2 »¼ «¬ 1.6 1.4 0.5»¼

Applying Theorem 3.1 of the present paper, we have p

Q1

Y1

0 0 º ª 28.9949 « 0 30.7469 0 »» , D « «¬ 0 0 31.0704»¼ 0 0 º ª 28.7337 « 0 88.7774 0 »» , H1 « «¬ 0 0 83.5945»¼ ª 123.5033 8.9425 20.4776 º « 8.9425 106.8143 1.9019 »» . « «¬ 20.4776 1.9019 58.9161 »¼

0 0 º ª 48.5176 « 0 68.8154 0 »» , « «¬ 0 0 37.1483»¼ ª 159.7671 27.2873 38.2528 º « 27.2873 177.5277 1.9119 »» and « «¬ 38.2528 1.9119 97.7935 »¼

Therefore, the Cohen–Grossberg neural network (8) is globally asymptotically stable, independent of amplification functions and time delays. Now we consider Cohen–Grossberg neural network (8) with uncertainties, where

G W (t ) M 0 F (t )G0

G W1 (t )

M1 F (t )G1

G B1 (t ) M1 F (t )G1

ª 0.2 º « 0.1 » F (t ) 0.1 0.1 0.2 , > @ « » «¬ 0.05»¼ ª 0.3 º « 0.1 » F (t ) 0.1 0.5 0.3 , > @ « » «¬ 0.2 »¼ ª 0.1 º « 0.02» F (t ) 0.03 0.3 0.1 , > @ « » ¬« 0.01¼»

F(t) = sin(t), d1 = 1, ǻ = 0.5 diag (1, 1, 1) and Ai(xi(t)) are some kinds of amplification functions satisfying Assumption 2.3, whose exact values of lower and upper bounds may be unknown, i = 1, 2, 3. Applying Theorem 4.1 of the present paper, we have

Z. Wang, J. Feng and G. Chen

46

0 0 º 0 0 º ª1.3904 ª 0.7967 « 0 », Q « 0 2.2668 0 3.0993 0 »» , 1 « » « «¬ 0 «¬ 0 0 1.0499 »¼ 0 2.8060 »¼ 0.8619 0.9641º ª 4.3407 « 0.8619 5.2846 0.2143»» , H  3.2679, H 0.9184 and H « «¬ 0.9641 0.2143 2.0068»¼

D

H1

2.5199.

Therefore, in the case of parameter uncertainties, the Cohen–Grossberg neural network (8) is globally robustly stable, independent of amplification functions and time delays.

Example 5.2. Consider the following Cohen–Grossberg neural networks with two neurons, u1 (t )

a1 (u1 (t )[ u1 (t )  0.5 g1 (u1 (t ))  g 2 (u2 (t ))  0.78 g1 (u1 (t  W11 ))  0.2 g 2 (u2 (t  W12 ))  1],

u2 (t )

a2 (u2 (t )[u2 (t )  0.1g1 (u1 (t ))  g 2 (u2 (t ))  0.9 g1 (u1 (t  W 21 ))

(52)

 0.2 g 2 (u2 (t  W 22 ))  2],

where gi(ui(t)) = 0.5( |ui(t) + 1| – | ui(t) – 1| ), IJ11 = 1, IJ12 = 2, IJ21 = 0.5 and IJ22 = 4, ai(ui(t)) are some kinds of amplification functions satisfying Assumption 2.3, whose exact values of lower and upper bounds may be unknown, i = 1, 2. Obviously, the results in Chen (2006), Guo and Huang (2006), cannot be applied to this example. Pertaining to this example, the result in Lu, Shen and Chung (2005) and Theorem 2 in Chen and Rong (2004) are not satisfied. In this example, '

*

ª1 0 º «0 1» , W ¬ ¼

ª 0.5 1 º « 0.1 1 » , E1 ¬ ¼

ª 0.78 0.2º and E2 « 0 0 »¼ ¬

0 º ª0 « 0.9 0.2 » . ¬ ¼

Applying Corollary 3.3 of the present paper, we have D

0 º ª6.1842 , Q1 « 0 8.0659»¼ ¬

0 º ª 4.7152 and Q2 « 0 3.6839 »¼ ¬

0 º ª 4.2247 . « 0 3.4643»¼ ¬

Therefore, the neural network (52) is globally asymptotically stable. Moreover, if we let ai(ui(t)) Ł 1 in Equation (52), I = 1, 2, Theorems 1 and 2 in Liao and Wang (2003), Theorem 1 in Zeng, Wang and Liao (2003), and Theorem 3 in Chen (2001) are not satisfied, either.

6

Conclusions

Two sufficient conditions are derived to ensure the global asymptotic stability of a general class of Cohen–Grossberg neural networks with both multiple time varying delays and continuously distributed delays based on the LMI technique, which are independent of the size of the time varying delays and amplification functions. The results are extended to some special cases of Cohen–Grossberg neural networks. Moreover, a sufficient criterion guaranteeing the global robust stability is also derived for

Global stability of a class of Cohen–Grossberg neural networks with delays 47 the Cohen–Grossberg neural networks with uncertainties. Two numerical examples are employed to demonstrate the effectiveness of the obtained results. Furthermore, we can easily extend our result to the stabilisation problem of neural system by designing a state feedback controller using the similar way in this paper.

Acknowledgements This work was supported by the National Nature Science Foundation of China (60572070, 60521003, 60774048, 60774093), Open Project Foundation of Key Laboratory of Process Industry Automation, Ministry of Education China (PAL200503) and the China Postdoctoral Science Foundation ( 20060400962)

References Boyd, S., El Ghaoui, L., Feron, E. and Balakrishnan, V. (1994) Linear Matrix Inequalities in System and Control Theory. Studies in Applied Mathematics (pp.7–10). Philadelphia, PA: SIAM. Chen, T. (2001) ‘Global exponential stability of delayed Hopfield neural networks’, Neural Networks, Vol. 14, pp.977–980. Chen, Y. (2002) ‘Global stability of neural networks with distributed delays’, Neural Networks, Vol. 15, pp.867–871. Chen, T. and Rong, L. (2003) ǥDelay-independent stability analysis of Cohen–Grossberg neural networks,’ Physics Letters A, Vol. 317, pp.436–449. Chen, T. and Rong, L. (2004) ‘Robust global exponential stability of Cohen–Grossberg neural networks with time delays’, IEEE Trans. Neural Networks, Vol. 15, pp.203–206. Chen, Y. (2006) ‘Global asymptotic stability of delayed Cohen-Grossberg neural networks’, IEEE Trans. Circuits and Systems-I: Regular Papers, Vol. 53, pp.351–357. Chen, W-H. and Zheng, W. (2006) ‘Global asymptotic stability of a class of neural networks with distributed delays’, IEEE Trans. Circuits and Systems-I: Regular Papers, Vol. 53, pp.644–652. Cohen, M.A. and Grossberg, S. (1983) ‘Absolute stability and global pattern formation and parallel memory storage by competitive neural networks’, IEEE Trans. Systems, Man, and Cybernetics, Vol. SMC-13, pp.815–826. Crocco, L. (1951) ‘Aspect of combustion stability in liquid propellant rocket motors, part I: Fundamentals–low frequency instability with monopropellants’, J. Amer. Rocket Society, Vol. 21, pp.163–178. Fiagbedzi, Y.A. and Pearson, A.E. (1987) ‘A multistage reduction technique for feedback stabilizing distributed time-lag systems’, Automatica, Vol. 23, pp.311–326. Gopalsamy, K. and He, X. (1994) ‘Stability in asymmetric Hopfield nets with transmission delays’, Physica D: Nonlinear Phenimena, Vol. 76, pp.344–358. Gopalsamy, K. and He, X. (1994) ‘Delay-independent stability in bidirectional asociative neural networks’, IEEE Trans. Neural Networks, Vol. 5, pp.998–1002. Guo, S. and Huang, L. (2006) ‘Stability analysis of Cohen-Grossberg neural networks,” IEEE Trans. Neural Networks, Vol. 17, pp.106–116. Hopfield, J.J. (1984) ‘Neurons with graded response have collective computational properties like those of two-stage neurons’, in Proc. Nat. Acad. Sci. USA, Vol. 81, pp.3088–3092.

48

Z. Wang, J. Feng and G. Chen

Huang, H., Ho, D.W.C. and Cao, J.D. (2005) ‘Analysis of global exponential stability and periodic solutions of neural networks with time-varying delays’, Neural Networks, Vol. 18, pp.161–170. Ji, C., Zhang, H. and Guan, H. (2007) ‘New criteria for robust stability of Cohen–Grossberg neural networks with multiple delays’, Acta Electronica Sinica, Vol. 35, pp.135–140. Kolmanovskii, V.B. and Richard, J.P. (1999) ‘Stability of some linear systems with delays’, IEEE Trans. Automatic Control, Vol. 44, pp.984–989. Lam, J., Gao, H. and Wang, C. (2005) ‘+’model reduction of linear systems with distributed delay’, IEE Pro. Control Theory Appl., Vol. 152, pp.662–674. Liao, X., Li, C. and Wong, K.W. (2004) ‘Criteria for exponential stability of modified Cohen–Grossberg neural networks’, Neural Networks, Vol. 17, pp.1401–1414. Liao, X., Wu, Z. and Yu, J. (2002) ‘Stability analysis for cellular neural networks with continuous delay’, Journal of Computational and Applied Mathematics, Vol. 143, pp.29–47. Liao, X.X. and Wang, J. (2003) ‘Algebraic criteria for global exponential stability of cellular neural networks with multiple time delays’, IEEE Trans. Circuits and Systems-I: Fundamental Theory and Applications, Vol. 50, pp.268–275. Liu, P. and Han, Q-L. (2006) ‘On stability of recurrent neural networks–An approach from Volterra integro-differential equations’, IEEE Trans. Neural networks, Vol. 17, pp.264–267. Lu, H. (2005) ‘Global exponential stability analysis of Cohen–Grossberg neural networks’, IEEE Trans. Circuits and Systems-II: Express Brief, Vol. 52, pp.476–479. Lu, H., Shen, R. and Chung, F-L. (2005) ‘Global exponential convergence of Cohen–Grossberg neural networks with time delays’, IEEE Trans. Neural Networks, Vol. 16, pp.1694–1696. Lu, W. and Chen, T. (2003) ‘New conditions on global stability of Cohen–Grossberg neural networks’, Neural Computation, Vol. 15, pp.1173–1189. Marcus, C.M. and Westervelt, R.M. (1989) ‘Stability of analog neural networks with delay’, Physical Review A, Vol. 39, pp.347–359. Niculosu, S.I. (2001) Delay Effects on Stability–A Robust Control Approach. London: Springer-Verlag. Richard, J.P. (2003) ‘Time-delay system: An overview of some recent advances and open problems’, Automatica, Vol. 39, pp.1667–1694. Roska, T., Wu, C.W. and Chua, L.O. (1993) ‘Stability of cellular neural networks with dominant nonlinear and delay-type template’, IEEE Trans. Circuits and Systems-I: Fundamenal Theory and Applications, Vol. 40, pp.270–272. Roska, T., Wu, C.W., Balsi, M. and Chua, L.O. (1992) ‘Stability and dynamics of delay-type general and cellular neural networks’, IEEE Trans. Circuits and Systems-I: Fundamenal Theory and Applications, Vol. 39, pp.487–490. Wang, L. (2005) ‘Stability of Cohen–Grossberg neural networks with distributed delays’, Applied Mathematics and Computation, Vol. 160, pp.93–110. Wang, L. and Zou, X. (2002) ‘Harmless delays in Cohen–Grossberg neural network’, Physica D: Nonlinear Phenomena, Vol. 170, pp.162–173. Wang, L. and Zou, X. (2002) ‘Exponential stability of Cohen–Grossberg neural networks’, Neural Networks, Vol. 15, pp.415–422. Xie, L., Fridman, E. and Shaked, U. (2001) ‘Robust +’control of distributed delay systems with application to combustion control’, IEEE Trans. Automatic Control, Vol. 46, pp.1930–1935. Xie, L., Fu, M. and de Souza, C.E. (1992) ‘+’control and quadratic stabilization of system with parameter uncertainty via output feedback’, IEEE Trans. Automatic Control, Vol. 37, pp.1253–1256. Xiong, W. and Cao, J. 
(2005) ‘Global exponential stability of discrete-time Cohen–Grossberg neural networks’, Neurocomputing, Vol. 64, pp.433–446.

Global stability of a class of Cohen–Grossberg neural networks with delays 49 Xu. S., Lam, J., Chen, T. and Zou, Y. (2005) ‘A delay-dependent approach to robust +’filtering for uncertain distributed delay systems’, IEEE Trans. Signal Processing, Vol. 53, pp.3764–3772. Ye, H., Michel, A.N. and Wang, K. (1995) ‘Qualitative analysis of Cohen–Grossberg neural networks with multiple delays’, Physical Review E, Vol. 51, pp.2611–2618. Zeng, Z., Wang, J. and Liao, X.X. (2003) ‘Global exponential stability of a general class of recurrent neural networks with time-varying delays’, IEEE Trans. Circuits and Systems-I: Fundamenal Theory and Applications, Vol. 50, pp.1353–1358. Zhang, H. and Ji, C. (2005) ‘Delay-Independent globally asymptotic stability of Cohen–Grossberg neural networks’, International Journal of Information and Systems Sciences, Vol. 1, pp.221–228. Zhang, H., Ji, C. and Zhang, T. (2006) ‘Analysis of robust stability of Hopfield neural networks with multiple delays’, Acta Automatica Sinica, Vol. 32, pp.84–90. Zhang, H. and Wang, G. (2007) ‘New criteria of global exponential stability for a class of generalized neural networks with time-varying delays’, Neurocomputing, Vol. 70, pp.2486–2494.

50

Int. J. Intelligent Systems Technologies and Applications, Vol. 6, Nos. 1/2, 2009

A novel Artificial Neural Network training method combined with Quantum Computational Multi-Agent System theory Xiangping Meng* Department of Electrical Engineering, Changchun Institute of Technology, 130012, PR China E-mail: [email protected] *Corresponding author

Jianzhong Wang Department of Information Engineering, Northeast Dianli University, 132012, PR China E-mail: [email protected]

Yuzhen Pi and Quande Yuan Department of Electrical Engineering, Changchun Institute of Technology, 130012, PR China E-mail: [email protected] E-mail: [email protected] Abstract: Artificial Neural Networks (ANNs) are powerful tools that can be used to model and investigate various complex and non-linear phenomena. In this study, we construct a new ANN, which is based on Multi-Agent System (MAS) theory and quantum computing algorithm. All nodes in this new ANN are presented as Quantum Computational (QC) agents, and these agents have learning ability. A novel ANN training method was proposed via implementing QCMAS reinforcement learning. This new ANN has powerful parallel-work ability and its training time is shorter than classic algorithm. Experiment results show that this method is effective. Keywords: ANN; MAS; Q-learning; quantum computing. Reference to this paper should be made as follows: Meng, X., Wang, J., Pi, Y. and Yuan, Q. (2009) ‘A novel Artificial Neural Network training method combined with Quantum Computational Multi-Agent System theory’, Int. J. Intelligent Systems Technologies and Applications, Vol. 6, Nos. 1/2, pp.50–60. Biographical notes: Xiangping Meng received her BS and PhD from the Northeastern University, China, in 1983 and 2000, respectively, and an MS from the Northeastern Institute of Electric Power Engineering, China, in 1986. She has been working as a Postdoctor at the Jilin University from 2000 to 2002. She is working at Changchun Institute of Technology since 1986. Her research interests are in the areas of intelligent control, data mining, intelligent computing and reinforcement learning and so on. Copyright © 2009 Inderscience Enterprises Ltd.

A novel ANN training method combined with QCMAS theory

51

Jianzhong Wang received his Bachelor’s degree in 2004 from Dezhou University and now is a master student at Northeast Dianli University, China. His research interests are in the areas of intelligent control, intelligent computing, reinforcement learning. Yuzhen Pi received her Bachelor’s degree in 2004 and Master’s degree in 2007 from Northeast Dianli University, PR China and is currently a lecturer at Changchun Institute of Technology. Her research interests are in the areas of intelligent control, intelligent computing and reinforcement learning. Quande Yuan received his Bachelor’s degree in 2004 and Master’s degree in 2007 from Northeast Dianli University, PR China and is currently a lecturer at Changchun Institute of Technology. His research interests are in the areas of intelligent control, intelligent computing and reinforcement learning.

1

Introduction

Artificial Neural Networks (ANNs) are powerful computational modelling tools that have recently emerged and found extensive acceptance in many disciplines for modelling complex real-world problems (Ge, Lee and Harris, 1998; Zhang and Je, 2004). ANN may be defined as structures comprised of densely interconnected adaptive simple processingelements (called artificial neurons or nodes), which are capable of performing massively computations for data processing and knowledge representation (Zhang, Zhang and Xiao, 2000; Quan and Zhang, 2001; ønan and Elif, 2005; Sonmez and Gokceoglu, 2006). Although ANNs are drastic abstractions of the biological counterparts, the idea of ANNs is not to replicate the operation of the biological systems but to make use of what is known about the functionality of the biological networks for solving complex problems. The ANNs have gained great success, however, there are still some limitations in ANNs (Ge, Lee and Harris, 1998; Zhang and Je, 2004; Zhang and Meng, 2004). Such as: 1

most of the ANN is not really distribute, so its nodes or neurons cannot work in parallel

2

training time is long

3

the nodes number is limited by the capability of computer.

To solve these problems, we try to reconstruct the ANN using Quantum Computational Multi-Agent System (QCMAS; Klusch, 2003) which consists of quantum computing agents. Multi-agent technology is a hotspot in the recent study on artificial intelligence. The concept of agent is a natural analogy of real world. Quantum computing is capable of processing huge numbers of quantum states simultaneously in parallel (‘quantum parallelism’). In theory, QC is able to process all possible points in a 2N search space (of N-bit strings). If N is large, 2N is gigantic, making a systematic search with a classical computer impossible. Quantum searches can be proven faster compared with classical

52

X. Meng et al.

searches. In certain cases, QCMAS have more powerful computational ability than any MAS by means of properly designed and integrated QC agents. In this paper, we give a new method to construct and train ANN, which based on QCMAS theory and reinforcement learning algorithm. All nodes in this new ANN were presented as QC agents, and these QC agents have a learning ability via implementing reinforcement-learning algorithm. We use QCMAS reinforcement learning algorithm as this new neural network’s learning rules.

2

Related work

2.1 Quantum computing 2.1.1 Quantum bits The unit of quantum information is the quantum bit (qubit). The qubit is a vector in a 2D complex vector space with inner product, represented with quantum state. It is an arbitrary superposition state of two-state quantum system (Aharonov, 1998; Benenti and Casati, 2005): \

2 D 0 E 1 , D  E

2

(1)

1

where D and E are complex coefficients. 0 and 1 correspond to classical bit 0 and 1. 2

2

D and E represent the occurrence probabilities of 0 and 1 , respectively when the project \

is measured, the outcome of the measurement is not deterministic. The value

of classical bit is either Boolean value 0 or value 1, but qubit can simultaneously store 0 and 1, which is the main difference between classical and quantum computation.

2.1.2 State space The Hadamard transform (or Hadamard gate) is one of the most useful quantum gates and can be represented as (Nielsen and Chuang, 2000): H {

1 ª1 « 2 ¬1

1 º  1 »¼

(2)

Through the Hadamard gate, a qubit in the state 0 is transformed into a superposition state of two states, i.e. H 0

1

2 0 1

2 1 . Similarly, a qubit in the state 1 is

transformed into the superposition state H 1

1

2 0 1

2 1 , i.e. the magnitude of

the amplitude in each state is 1 2 but the phase of the amplitude in the state 1 is inverted. In classical probabilistic algorithms, the phase has no analogue since the amplitudes are in general complex numbers in quantum mechanics. Now consider a quantum system described by n qubits, it has 2n possible states. To prepare an equally weighted superposition state, initially let each qubit lie in the state 0 , then we can perform the transformation H on each qubit independently in sequence and thus change the state of the system. The state transition matrix representing this operation will be of

A novel ANN training method combined with QCMAS theory

53

2n u 2n dimensions and can be implemented by n shunt-wound Hadamard gates. This process can be represented into: H …n

n 

 00 " 0

1 2n

n P 11"1

¦

a.

(3)

a 00"0

2.1.3 Grover’s searching algorithm Grover’s searching algorithm is well-known as searching an unstructured database of N = 2n records for exactly one record which has been specifically marked. Classically, this process would involve querying the database, at best once and at worst N times, so on average N/2 times. In other words, the classical search scales O(N) and is dependent upon structuring within the database (Grover, 1996). Grover’s algorithm offers an improvement to ( O( N ) and works just as well in a structured database as a disordered one. As we know, the probability to get a certain vector in a superposition in a measurement is the square of the norm of its amplitude. Grover’s idea was to represent all numbers in {0,1} via a superposition I of n qubits, and shift their amplitudes so that the probability to find an element out of L in a measurement is near 1. To achieve this, a superposition of n qubits in which all the elements having the same amplitude is created. Then the following operations are applied alternately: x

amplitudes of all elements  L are reflected at zero

x

amplitudes of all elements are reflected at the average of all amplitudes.

Those two operators together are called the Grover-operator. The other fundamental operation required is the conditional phase shift operation which is an important element to carry out Grover iteration. According to quantum information theory, this transformation may be efficiently implemented on a quantum computer. The conditional phase shift operation does not change the probability of each state since the square of the absolute value of the amplitude in each state stays the same.

2.2 Quantum algorithm Much of the present excitement about quantum computing is due to the discovery that certain problems appear to be easier to solve (in the computational complexity sense) on a quantum computer than on a classical computer. Some famous quantum algorithms show powerful efficiencies on obtaining solution. The most famous quantum algorithms based upon Fourier transform is FACTORING which Shor’s quantum algorithm solves in time O(n2 log n log log n) (Shor, 1994).The best classical algorithm known for FACTORING is much slower, requiring time O(exp (cn1/3 log2/3 n)). Grover’s search algorithm (Grover, 1997) depends upon the two techniques exploited by Shor and Grover: quantum fast transforms and amplitude amplification (Grover, 1998), respectively, can achieve a quadratic speedup in unsorted database searching over the best known classical algorithms and its experimental implementations have also been demonstrated using Nuclear Magnetic Resonance (NMR) Spectroscopy and quantum optics. Some researchers investigate the combination of quantum mechanics and classical

54

X. Meng et al.

computer, several quantum reinforcement learning algorithms have been proposed (Dong and Chen, 2005), to find a method of tradeoff of exploration and exploitation efficiently through applying Grover operator to reinforce some actions.

2.3 Quantum Computational Multi-Agent System (QCMAS) 2.3.1 QC agent A QC agent̓extends an intelligent agent by its ability to perform both classical agent, and quantum computing to accomplish its goals individually, or in joint interaction with other agents. QC agents are supposed to exploit the power of quantum computing to reduce the computational complexity of certain problems.

2.3.2 QC Multi-Agent System A QCMAS is a MAS that consists of QC agents that can interact to jointly accomplish their goals. QCMAS can be computationalҏmore powerful than any MAS by means of properly designed and integrated QC agents.

2.4 Multi-agent Q-learning Learning behaviours in a multi-agent environment is crucial for developing and adapting MASs. Reinforcement learning has been successful in finding optimal control policies for agents operating in a stationary environment, specifically a Markov decision process (Sutton and Barto, 1998; Shoham and Powers, 2003).Q-learning is a standard reinforcement learning technique (Watkins and Dayan, 1992; Hu and Wellman, 2003).

2.4.1 Single-agent Q-learning

In single-agent systems, Q-learning has a firm foundation in the theory of Markov decision processes. The basic idea behind Q-learning is to determine which actions, taken from which states, lead to rewards for the agent (however these are defined), which actions lead to the states from which those rewards are reachable, and so on. The value of taking an action in a state, i.e. its Q value, is a time-discounted measure of the maximum reward available to the agent along any path through state space of which that action is a part. Q-learning consists of iteratively computing the values of the action-value function using the following update rule:

$Q_{t+1}(s_t, a_t) = (1 - \alpha)\, Q_t(s_t, a_t) + \alpha\, [r_t + \beta\, \max_a Q_t(s_{t+1}, a)]$    (4)

where $Q_t(s_t, a_t)$ is the value of the state-action pair $(s_t, a_t)$ at time $t$, $\alpha$ and $\beta$ are the learning rate and discount factor, respectively, and $r_t$ is the reward received as a result of taking action $a_t$ in state $s_t$. Moreover, as in the case of single-agent Q-learning, we need a policy $\pi_i$ for each agent $i$:

$\pi_i : S \to A_i, \quad i \in [1, N]$    (5)


A policy $\pi$ is a description of the behaviour of an agent: $\pi(s, a)$ is the probability assigned to action $a$ in state $s$. A deterministic policy assigns probability 1 to some action in each state. A policy $\pi$ can be evaluated by computing the long-run value the agent can expect to gain by following it. Q-learning converges to a best response independently of the agent's behaviour as long as the conditions for convergence are satisfied: if $\alpha$ decreases appropriately with time and each state-action pair is visited infinitely often in the limit, then the algorithm converges to a best response for all $s \in S$ and $a \in A(s)$ with probability one.
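A minimal tabular sketch of update rule (4) combined with an epsilon-greedy policy is given below; the state/action encodings, the helper names and the parameter values are our own assumptions for illustration, not part of the paper.

```python
import random
from collections import defaultdict

alpha, beta, epsilon = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate
Q = defaultdict(float)                 # Q[(s, a)] defaults to 0

def choose_action(s, actions):
    # epsilon-greedy policy pi(s): explore with probability epsilon,
    # otherwise pick the action with the highest current Q value.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])

def q_update(s, a, r, s_next, actions):
    # Update rule (4): Q(s,a) <- (1 - alpha)*Q(s,a) + alpha*(r + beta*max_a' Q(s',a'))
    target = r + beta * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * target
```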

2.4.2 Multi-agent Q-learning

Multi-agent environments are inherently non-stationary, since the other agents are free to change their behaviour as they also learn and adapt. In multi-agent Q-learning, the Q-function of agent $i$ is defined over states and joint-action vectors $\vec{a} = (a^1, a^2, \ldots, a^n)$ rather than over state-action pairs. The agents start with arbitrary Q values, and the Q values are updated as follows:

$Q^i_{t+1}(s, \vec{a}) = (1 - \alpha)\, Q^i_t(s, \vec{a}) + \alpha\, [r^i_t + \beta\, V^i(s_{t+1})]$    (6)

where $V^i(s_{t+1})$ is the state value function,

$V^i(s_{t+1}) = \max_{a^i \in A^i} f^i(Q^i_t(s_{t+1}, \vec{a}))$    (7)

In this generic formulation, the key elements are the learning policy, i.e. the method for selecting the action $\vec{a}$, and the computation of the value function $V^i(s_{t+1})$, with $0 \le \alpha < 1$. Different choices of action selection and value-function computation yield different multi-agent learning algorithms. Here we simply let

$\pi^i(s_{t+1}, a^i) = \arg\max_{a} Q^i(s, a)$    (8)
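A hedged sketch of updates (6) and (7) for a single agent $i$ follows; here $f^i$ is taken to be the identity, so $V^i$ reduces to a plain maximum over joint actions, which is only one of the possible choices implied above, and all data structures are assumptions made for illustration.

```python
import itertools
from collections import defaultdict

alpha, beta = 0.1, 0.9

def value_of_state(Q_i, s_next, joint_actions):
    # Formula (7) with f^i = identity: V^i(s') = max over joint actions of Q^i(s', a_vec)
    return max(Q_i[(s_next, a_vec)] for a_vec in joint_actions)

def multiagent_q_update(Q_i, s, a_vec, r_i, s_next, joint_actions):
    # Formula (6): Q^i(s, a_vec) <- (1-alpha) Q^i(s, a_vec) + alpha (r^i + beta V^i(s'))
    v = value_of_state(Q_i, s_next, joint_actions)
    Q_i[(s, a_vec)] = (1 - alpha) * Q_i[(s, a_vec)] + alpha * (r_i + beta * v)

# Example with two agents and two actions each; joint actions are tuples (a1, a2).
joint_actions = list(itertools.product([0, 1], repeat=2))
Q_i = defaultdict(float)
multiagent_q_update(Q_i, s=0, a_vec=(0, 1), r_i=1.0, s_next=1, joint_actions=joint_actions)
```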

3 A novel ANN training method

ANN is a powerful data modelling tool able to capture and represent complex input/output relationships, with characteristics such as associativity, self-organisation, generalisability and noise tolerance (Ge, Lee and Harris, 1998; Cheu and Srinivasan, 2004; Hooshdar and Adeli, 2004). ANNs have been employed successfully to solve complex problems in mathematics, engineering, medicine, economics, meteorology, neurology and many other fields (Basheer and Hajmeer, 2000; Zhang and Meng, 2004). The fundamental processing element of a neural network is a neuron (node). We can now construct an ANN using QCMAS theory: each neuron or node is a QC agent. We constructed a simple three-layer ANN whose topological diagram is shown in Figure 1. There are three types of QC agents: input-node QC agents, output-node QC agents and hidden-layer QC agents. These QC agents may run on the same computer or on different ones, and each QC agent can find the node QC agents it needs to link to. Because the QCMAS can be distributed, i.e. the QC agents can run on different computers, the number of QC agents is not limited, and every agent has learning ability through reinforcement learning.

Figure 1    A new neural network diagram

Inheriting the characteristics of quantum computing and MAS, the new ANN is truly distributed, its nodes (neurons) can organise themselves dynamically, and it can process information in parallel, giving it more powerful computing ability than classic ANNs. In addition to the network topology, an important component of most neural networks is a learning rule. A learning rule allows the network to adjust its connection weights in order to associate given input vectors with corresponding output vectors. During training, the input vectors are presented repeatedly and the weights are adjusted according to the learning rule until the network learns the desired associations. As discussed above, the new ANN's training method can be regarded as QCMAS reinforcement learning.

3.1 Define states and actions

According to the above method, we can define the states and actions in the QC multi-agent reinforcement learning system, whose states may lie in a superposition state:

$|s^{(m)}\rangle = \sum_{s=00\cdots0}^{11\cdots1} C_s\, |s\rangle$    (9)

The mapping from states to actions, $f(s) = \pi : S \to A$, can be defined as follows:

$f(s) = |a_s^{(n)}\rangle = \sum_{a=00\cdots0}^{11\cdots1} C_a\, |a\rangle$    (10)

where $C_s$ and $C_a$ are the probability amplitudes of state $|s\rangle$ and action $|a\rangle$, respectively.
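The amplitudes $C_a$ act as a stochastic action-selection policy: observing the action register collapses the superposition to one basis action $a$ with probability $|C_a|^2$. A classical simulation of that measurement is sketched below; the function name and the NumPy representation are our own assumptions.

```python
import numpy as np

def measure_action(amplitudes):
    """Simulate measuring the action register |a_s^(n)>.

    amplitudes: real or complex vector C_a over the 2^n basis actions.
    Returns the index of the observed basis action.
    """
    probs = np.abs(np.asarray(amplitudes)) ** 2
    probs = probs / probs.sum()          # guard against rounding error
    return np.random.choice(len(probs), p=probs)

# Equal superposition over 4 basis actions (n = 2 qubits)
C_a = np.full(4, 0.5)
action = measure_action(C_a)
```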


3.2 The process of QCMAS Q-learning

The process of QCMAS Q-learning is shown in Table 1.

Table 1    The process of QCMAS Q-learning

Initialise:
1  Select the initial learning rate $\alpha$ and discount factor $\beta$, and let $t = 0$.
2  Initialise the state and action registers to the equal superposition states $|s^{(m)}\rangle = \sum_s C_s |s\rangle$ and $f(s) = |a_s^{(n)}\rangle = \sum_a C_a |a\rangle$, respectively.
3  For all states $|s^{(m)}\rangle$ and actions $|a_s^{(n)}\rangle$, let $Q_0^i(s^{(0)}, a_s^{(1)}, a_s^{(2)}, \ldots, a_s^{(n)}) = 0$ and $\pi_0^i(s^{(0)}, a_s^{(i)}) = 1/n$.

Repeat (for each episode):
1  For all states, observe $|a_s^{(n)}\rangle$ and obtain $a$ using formula (10).
2  Execute action $a$; observe the rewards $(r_t^{(1)}, \ldots, r_t^{(n)})$ and the new states $|s_t^{(m)}\rangle$.
3  Update $Q_t^i$ using formula (6) and the Grover operator.
4  Update the probability amplitudes, i.e. explore the next action.
Until, for all states, $|\Delta V(s)| < \varepsilon$.
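A classical-simulation skeleton of the loop in Table 1 is sketched below to show how the pieces fit together; the environment object, the rule used to choose the number of Grover iterations and all function names are assumptions on our part, not a prescription from the paper.

```python
import numpy as np

def amplify(amplitudes, chosen, k):
    # Step 4 of Table 1: reinforce the chosen action by applying k Grover
    # iterations that mark the chosen basis action (see Section 2.1).
    amps = amplitudes.copy()
    for _ in range(k):
        amps[chosen] *= -1.0
        amps = 2.0 * amps.mean() - amps
    return amps

def qcmas_episode(env, Q, C_a, alpha=0.1, beta=0.9):
    """One episode. Q: 2-D array [state, action]; C_a: amplitudes [state, action]."""
    s = env.reset()          # hypothetical environment interface
    done = False
    while not done:
        # Step 1: observe the action register and obtain a (formula (10))
        probs = C_a[s] ** 2 / (C_a[s] ** 2).sum()
        a = np.random.choice(len(probs), p=probs)
        # Step 2: execute a, observe reward and next state
        s_next, r, done = env.step(a)
        # Step 3: Q update (formula (6), single-agent special case)
        Q[s, a] = (1 - alpha) * Q[s, a] + alpha * (r + beta * Q[s_next].max())
        # Step 4: update probability amplitudes; more Grover iterations for
        # larger value estimates (one plausible choice, not prescribed here)
        k = int(max(0, min(3, round(r + beta * Q[s_next].max()))))
        C_a[s] = amplify(C_a[s], a, k)
        s = s_next
```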

4 Experiment and result

The function to be learned is the Boolean XOR function. This function is simple but difficult to learn because it is not linearly separable and therefore, as pointed out earlier, cannot be learned by a single-layer network. We construct a simple three-layer network (Figure 2) and train it to learn XOR. For the XOR problem, the inputs and outputs are given by the truth table shown in Figure 3. The training proceeds in five stages. First, we create the network with random weights and random biases. Second, we set the activations of the two input nodes from the columns 'x' and 'y' of the table and run the network forward. Third, we compare the output produced by the network to the desired output 'z' and calculate the difference between the actual and desired outputs; this difference is the error signal. Fourth, we change the weights of the connections into the output node and the bias of the output node. Fifth, we pass the error back to the hidden layer and change the biases and weights of those connections. The cycle then repeats with new inputs and outputs. The network trains until the average error (calculated over all four rows of the truth table) approaches zero. We train the ANN to solve the XOR problem both with the classical algorithm and with the novel method; the experimental results are shown in Figure 4.

Figure 2    Topological diagram of the XOR NN

Figure 3    The truth table of the XOR function

Figure 4    Comparison between the classical algorithm and the novel method (see online version for colours)
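For the classical baseline, the five training stages described above amount to standard backpropagation on a small sigmoid network; a compact NumPy sketch is given below, where the hidden-layer width, learning rate, random seed and stopping threshold are our own choices and convergence can depend on the random initialisation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # columns 'x' and 'y'
z = np.array([[0], [1], [1], [0]], dtype=float)               # desired output 'z'

# Stage 1: random weights and biases for a 2-2-1 network
W1, b1 = rng.normal(size=(2, 2)), np.zeros((1, 2))
W2, b2 = rng.normal(size=(2, 1)), np.zeros((1, 1))
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
lr = 0.5

for epoch in range(20000):
    # Stage 2: set the input activations and run the network forward
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Stage 3: error signal = actual output minus desired output
    err = out - z
    if np.mean(np.abs(err)) < 0.05:      # average error over the four rows
        break
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)   # error passed back to the hidden layer
    # Stage 4: update output-layer weights and bias
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    # Stage 5: update hidden-layer weights and biases
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)
```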

5 Conclusions

In this paper, we introduced quantum theory, multi-agent reinforcement learning and ANNs, and proposed a new method for constructing ANNs from QC agents based on the theories of quantum computing and MAS. We regard the new ANN as a QCMAS and use QCMAS Q-learning to train it. We adopt a quantum search algorithm in the agents' action selection policy for multi-agent reinforcement learning, which can speed up the learning of the new ANN. The results show that the method is effective. Combining quantum computing methods and multi-agent reinforcement learning in an ANN is a first attempt; with the development of quantum computing, MAS and ANN, we will continue to pursue this direction in future work.

Acknowledgements

The work in this paper was supported by the Key Project of the Ministry of Education of China for Science and Technology Research (ID: 206035).

References

Aharonov, D. (1998) 'Quantum computation', Annual Review of Computational Physics VI. Singapore: World Scientific.
Basheer, I.A. and Hajmeer, M. (2000) 'Artificial neural networks: fundamentals, computing, design, and application', Journal of Microbiological Methods, Vol. 43, pp.3–31.
Benenti, G. and Casati, G. (2005) Principles of Quantum Computation and Information, Vol. 1. World Scientific, pp.144–150.
Cheu, R. and Srinivasan, D. (2004) 'Training neural networks to detect freeway incidents by using particle swarm optimization', Journal of the Transportation Research Board, Vol. 1867, pp.11–18.
Dong, D.Y. and Chen, C.L. (2005) 'Quantum reinforcement learning', Proceedings of the First International Conference on Advances in Natural Computation (ICNC), Vol. 2, pp.686–689.
Ge, S.S., Lee, T.H. and Harris, C.J. (1998) Adaptive Neural Network Control of Robotic Manipulators. World Scientific.
Grover, L. (1996) 'A fast quantum mechanical algorithm for database search', Proceedings of the 28th Annual ACM Symposium on Theory of Computing, pp.212–219. ACM Press.
Grover, L.K. (1997) 'Quantum mechanics helps in searching for a needle in a haystack', Physical Review Letters, Vol. 79, pp.325–328.
Grover, L.K. (1998) 'A framework for fast quantum mechanical algorithms', Proceedings of the 30th Annual ACM Symposium on Theory of Computing, pp.53–62.
Hooshdar, S. and Adeli, H. (2004) 'Toward intelligent variable message signs in freeway work zones: neural network model', Journal of Transportation Engineering, Vol. 130, pp.83–93.
Hu, J.L. and Wellman, M.P. (2003) 'Nash Q-learning for general-sum stochastic games', Journal of Machine Learning Research, Vol. 1, pp.1–30.
İnan, G. and Elif, D.Ü. (2005) 'Feature saliency using signal-to-noise ratios in automated diagnostic systems developed for ECG beats', Expert Systems with Applications, Vol. 28, pp.295–304.
Klusch, M. (2003) 'Toward quantum computational agents', Agents and Computational Autonomy, Vol. 2969, pp.170–184.
Nielsen, M.A. and Chuang, I.L. (2000) Quantum Computation and Quantum Information. New York, NY: Cambridge University Press.
Quan, Y. and Zhang, H. (2001) 'Modeling and control based on a new neural network model', Proceedings of the 2001 American Control Conference, pp.1928–1929. Arlington, VA, USA.
Shoham, Y. and Powers, R. (2003) 'Multi-agent reinforcement learning: a critical survey', Technical Report, Computer Science Department, Stanford University.
Shor, P.W. (1994) 'Algorithms for quantum computation: discrete logarithms and factoring', Proceedings of the 35th Annual Symposium on Foundations of Computer Science, pp.124–134. IEEE Computer Society Press.
Sonmez, H. and Gokceoglu, C. (2006) 'Estimation of rock modulus: for intact rocks with an artificial neural network and for rock masses with a new empirical equation', International Journal of Rock Mechanics and Mining Sciences, Vol. 43, pp.224–235.
Sutton, R.S. and Barto, A.G. (1998) Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.
Watkins, C.J.C.H. and Dayan, P. (1992) 'Q-learning', Machine Learning, Vol. 8, pp.279–292.
Zhang, H. and Je, C. (2004) Qualitative Analysis and Synthesis of Recurrent Neural Networks. Beijing, China: Science Press.
Zhang, H. and Meng, X. (2004) Theory and Applications of Intelligent Control. Beijing, China: Mechanical Industry Press.
Zhang, S., Zhang, H. and Xiao, W. (2000) 'A kind of neural network adaptive control algorithm', Journal of ShenYang University of Technology, Vol. 22, pp.343–345.

Int. J. Intelligent Systems Technologies and Applications, Vol. 6, Nos. 1/2, 2009

Field Programmable Gate Array based floating point hardware design of recursive k-means clustering algorithm for Radial Basis Function Neural Network

S.P. Joy Vasantha Rani*, P. Kanagasabapathy and L. Suganthi
Madras Institute of Technology, Anna University, Chennai 600 044, India
Fax: +91 44 22232043
E-mail: [email protected]
E-mail: [email protected]
E-mail: [email protected]
*Corresponding author

Abstract: In this paper, we propose a hardware design of a Radial Basis Function Neural Network (RBFNN) capable of floating point arithmetic operations, together with a hardware architecture that calculates the centres of the hidden layer using the k-means algorithm. RBFNNs are very useful in adaptive control applications. Hardware implementation of a neural network gives much faster training than traditional processors and is also relatively inexpensive. The RBFNN architecture is based on a computational model whose main features are the capability to exploit the inherent parallelism of neural networks and to increase or decrease the number of neurons, providing flexibility of the network. The design has been carried out in the Very High Speed Integrated Circuit Hardware Description Language (VHDL), and the results are verified and analysed in the MATLAB environment. In this work, the floating point hardware gives better precision and is well suited to wide dynamic range requirements. The design was tested and synthesised on a Virtex-II Pro device. The simulation and synthesis results show the effectiveness and speed of training.

Keywords: adaptive control; Field Programmable Gate Array; FPGA; floating point hardware; k-means clustering algorithm; local approximation; multilayer neural network; parallel processing; Radial Basis Function; RBF; Very High Speed Integrated Circuit Hardware Description Language; VHDL.

Reference to this paper should be made as follows: Rani, S.P.J.V., Kanagasabapathy, P. and Suganthi, L. (2009) 'Field Programmable Gate Array based floating point hardware design of recursive k-means clustering algorithm for Radial Basis Function Neural Network', Int. J. Intelligent Systems Technologies and Applications, Vol. 6, Nos. 1/2, pp.61–76.

Biographical notes: S.P. Joy Vasantha Rani received her BE degree in Electronics and Communication Engineering from Madurai Kamaraj University in 1993 and her ME degree in Power Electronics from Anna University, Chennai, India, in 1995. She has been teaching since 1996 and is now a Lecturer at the Department of Electronics Engineering, Madras Institute of Technology, Chennai, India. Her research interests are in the hardware implementation of fuzzy and neural networks and their use in adaptive control.

Copyright © 2009 Inderscience Enterprises Ltd.


P. Kanagasabapathy, Dean, received his PhD in the field of Electrical Measurements from IIT Madras, India, in 1980. He has 27 years of teaching experience and has published 30 papers in international journals and 20 in international conference proceedings. His research areas include intelligent control, transducers and measurements.

L. Suganthi received her BE degree in Electronics and Communication Engineering from Manonmaniam Sundaranar University, Tamil Nadu, India, in 2002 and her ME degree in Communication and Networking from Anna University in 2007. She has worked in the areas of embedded systems and network engineering.

1 Introduction

Intelligent systems adopt soft computing techniques, encompassing neural networks, fuzzy logic, genetic algorithms and expert systems, to solve complex problems by mimicking human reasoning. Although soft computing techniques can be extremely accurate on specific tasks, they do not fulfil all the requirements of complex industrial applications, e.g. real-time constraints and computational complexity. The potential usefulness of neural networks in control is due to two main features: learning capability and function approximation capability. Recently, considerable effort has been invested in the use of neural networks for non-linear control and parameter identification (Chen et al., 1990; Ge, Lee and Harris, 1998). The parameters of the identification model are estimated adaptively so that the difference between the actual plant output and the output produced by the model is minimised. The identification process should be capable of producing an accurate model of the non-linear system without any prior knowledge of the system dynamics. One of the most useful approaches to non-linear system identification is based on Non-linear Auto Regressive Moving Average with exogenous inputs (NARMAX) modelling. When the outputs of the system are assumed to be error-free, a NARMAX model reduces to the non-linear auto regressive with exogenous inputs form. Various neural network architectures are used to perform the non-linear mapping task of the identification process. The principal types of neural networks used for identification and control problems are Multilayer Perceptron (MLP) neural networks with sigmoidal units and Radial Basis Function Neural Networks (RBFNN; Ge, Hang and Zhang, 1999; Pereira, Henriques and Dourado, 2000; Youmin and Rong, 1996). In an MLP, each neuron performs a biased weighted sum of its inputs and passes this through an activation function to produce its output; the neurons are arranged in a layered feed-forward topology, and the non-linearity of the model is embedded only in the hidden layer of the network. As outlined by Simon (1994), MLPs construct global approximations to non-linear input-output mappings; they are capable of generalisation in regions of the input space where little or no training data are available. RBF networks, on the other hand, construct local approximations to non-linear input-output mappings, with the result that these networks are capable of fast learning and have reduced sensitivity to the training data. RBFNNs have been applied to a large diversity of applications, including interpolation (Broomhead et al., 1990), chaotic time series modelling, system identification (Mashor, 1995; Pereira, Henriques and Dourado, 2000; Primoz and Grabec, 2000), control engineering, electronic device parameter modelling, image restoration, motion estimation and moving object segmentation, data fusion, etc. The advantage of RBF networks is that when the basis functions are appropriately fixed, the network outputs become linear functions of the output layer weights. Thompson and Kramer (1994) found that a neural network using RBFs can be trained more easily to model a process if some prior knowledge of the process is available. When selecting a network type for function approximation, a compromise between several desired features must be made. Some of these are:

•  local or global approximation
•  accuracy and generalisation capability
•  memory usage, computational load and parallel implementation
•  identification method (speed of convergence, etc.)
•  online-specific features (suitability for online identification, etc.).

Most RBF networks construct a local approximation of a multi-input multi-output function, analogous to fitting least-squares splines through a set of data points using basis functions. Training of the RBFNN is performed by selecting the centres of the hidden layer neurons and then estimating the output layer weights. The centres of the hidden layer neurons of the RBFNN are selected using clustering algorithms such as k-means (Sing et al., 2003) and fuzzy c-means (FCM; Youchan and Yujun, 2006). For RBF networks, the k-means clustering algorithm is the most commonly used; Moody and Darken (1989) describe its use. The weights of the output layer neurons are updated using the recursive Least Mean Squares (LMS) method (Pereira, Henriques and Dourado, 2000) or a gradient descent approach (Mercedes, Joaquin and Carlos, 2006). The massively parallel nature of a neural network makes it potentially fast for the computation of certain tasks; this same feature makes a neural network ideally suited for implementation using Very Large Scale Integration (VLSI) technology. To overcome the shortcomings of traditional processors in control applications, such as sequential processing due to program execution and insufficient computing power, the hardware implementation of neural networks on Field Programmable Gate Arrays (FPGA) was proposed by Hon and Chuan (1993) and Ayala et al. (2002). Hardware implementation of neural networks on FPGA with fixed-point arithmetic was proposed and implemented by Cesare and Meyer (1991) and Ralf, Tim and Ulrich (2006). However, fixed-point arithmetic is a limiting factor for most applications due to two major problems: the limited dynamic range of computation and the inflexibility to customise the hardware circuit once the features of the application are known. These limitations can be overcome by floating point operations. The hardware-based RBF network implemented by Mourad and Habib (1998) is a Dynamic Decay Adjustment (DDA) model complying fully with the mathematical formulation of the algorithm; the design relies on the generator and macro-block concept. The hardware for centroid updating given by Wei-Chuan, Jiun-Long and Ming-Syan (2005) is executed on an Altera Stratix device with a NiosII 50 MHz CPU and 16 MB of SDRAM, and takes approximately two million clock cycles to calculate all centroids in parallel. In our work, the hardware for centre updating takes 6,630 clock cycles with a clock period of 10 ns and occupies 13,642 CLB slices for the RBF network and 15,109 CLB slices for the k-means algorithm.
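For reference, a plain (batch) k-means sketch for selecting hidden-layer centres from training data is shown below; this is the generic software algorithm, not the recursive hardware formulation developed later in the paper, and the initialisation strategy and iteration count are our own choices.

```python
import numpy as np

def kmeans_centres(X, k, n_iter=50, seed=0):
    """Return k cluster centres of the rows of X (candidate RBF centres)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    # Initialise centres with k distinct training samples (one common choice)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each sample to its nearest centre
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of the samples assigned to it
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres
```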

2 Radial Basis Function Neural Network

The RBFNN is a three-layer feed-forward neural network. The input layer is made up of source nodes, and the second layer is composed of non-linear hidden units fully connected to the input layer. The hidden units provide a set of functions that constitute an arbitrary basis for the input vectors when they are expanded into the hidden unit space. The output layer consists of one or more linear units whose weights are the unknown coefficients of the RBF expansion. Figure 1 shows the structure of the RBFNN. The network performs a non-linear mapping from the input space to the hidden space, followed by a linear mapping from the hidden space to the output space.

Figure 1    Structure of the RBFNN
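In software terms, the two mappings can be written directly, as sketched below; the Gaussian basis function and the width parameter are assumptions made here for illustration, and the weights would come from training.

```python
import numpy as np

def rbf_forward(x, centres, weights, sigma=1.0):
    """Output of a single-output RBF network for input vector x.

    Non-linear map: input -> hidden activations phi_i(||x - c_i||)
    Linear map:     hidden activations -> weighted sum (output layer)
    """
    dists = np.linalg.norm(centres - x, axis=1)          # ||x - c_i||
    phi = np.exp(-(dists ** 2) / (2.0 * sigma ** 2))     # Gaussian basis (assumed)
    return float(phi @ weights)

# Example with p = 2 inputs and N = 3 hidden units
centres = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.0]])
weights = np.array([0.2, -0.4, 0.7])
y = rbf_forward(np.array([0.3, 0.1]), centres, weights)
```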

Let $p$ denote the dimension of the input space. The network then represents a map from the $p$-dimensional input space to the single-dimensional output space, written as $s : \mathbb{R}^p \to \mathbb{R}^1$. The RBF technique consists of choosing a function $Y$ of the form

$Y = \sum_{i=1}^{N} W_i\, \varphi_i(\| X - C_i \|)$    (1)

where $\{\varphi_i(\| X - C_i \|) \mid i = 1, 2, 3, \ldots, N\}$ is a set of $N$ arbitrary functions known as RBFs, and $\|\cdot\|$