Proceedings of the 2013 IEEE/SICE International Symposium on System Integration, Kobe International Conference Center, Kobe, Japan, December 15-17, 2013
MP1-J.2
Design of Control Systems Using Quaternion Neural Network and Its Application to Inverse Kinematics of Robot Manipulator

Yunduan Cui*
Graduate School of Science and Engineering, Doshisha University, Kyoto, Japan
[email protected]

Kazuhiko Takahashi**
Department of Information Systems Design, Doshisha University, Kyoto, Japan
[email protected]

Masafumi Hashimoto***
Department of Intelligent Information Engineering, Doshisha University, Kyoto, Japan
[email protected]
Abstract— In this paper, multi-layer quaternion neural networks trained with the quaternion back-propagation algorithm are applied to inverse kinematics control of a 2-link robot manipulator, as a first step toward utilizing the quaternion neural network in control applications. Three architectures of the control system using the quaternion neural network, namely general learning, specialized learning and on-line specialized learning, are presented and their characteristics are investigated. The experimental results show that, in the appropriate architectures, the learning of the quaternion neural network converges in fewer iterations than that of a conventional real-valued neural network, even when the conventional network has a more complex topology and employs more parameters.
I. INTRODUCTION

The quaternion was invented by the Irish mathematician W. R. Hamilton in order to generalize the properties of complex numbers to multi-dimensional space. As an extension of the complex numbers, a quaternion has one real part and three imaginary parts, which allows a significant decrease of computational complexity in three- or four-dimensional problems. The quaternion neural network is therefore able to cope with multi-dimensional problems more efficiently by employing quaternions directly. Quaternion neural networks have been widely used in many fields and have demonstrated better performance than real-valued neural networks in chaotic time series prediction [1], approximation of quaternion-valued functions [2], 3D wind forecasting [3] [4], image processing [5] [6], color-face recognition [7], vector sensor processing [8], etc.

In robotic control systems, artificial neural networks have been widely used to build controllers, as they are capable of learning complex functions and provide accurate approximations. Although the quaternion neural network appears effective for solving problems in this field, since much of the data in robotics is multi-dimensional, the possibility of applying the quaternion neural network to robotic control applications has not been adequately investigated. In this paper, a quaternion neural network using the back-propagation algorithm is applied to inverse kinematics with three different architectures. Inverse kinematics control is a nonlinear, multi-input multi-output problem in robotic control. Much research on inverse kinematics using conventional real-valued neural networks has been conducted [9] [10] [11]. As little research on inverse
kinematics using the quaternion neural network has been conducted, research applying the quaternion neural network to this field is needed. The learning results of the quaternion neural network are compared with those of the real-valued one, and a case study of the quaternion neural network's learning ability in inverse kinematics is carried out.

II. QUATERNION NEURAL NETWORK

A. Quaternion algebra

Quaternions form a class of hypercomplex numbers consisting of one real part and three imaginary parts with units i, j and k. A quaternion q is defined by:

q = q_0 + q_1 i + q_2 j + q_3 k = [q_0, q_1, q_2, q_3]^T    (1)
where q_i (i = 0, ..., 3) are real numbers. The real unit is 1, and the three imaginary units i, j, k behave as orthogonal spatial vectors. The conjugate q^* of a quaternion q is defined by:

q^* = q_0 − q_1 i − q_2 j − q_3 k    (2)
and the multiplication of a quaternion with its conjugate satisfies:

q ⊗ q^* = q_0^2 + q_1^2 + q_2^2 + q_3^2    (3)
Addition and subtraction of two quaternions q and r are defined by:

q ± r = [q_0 ± r_0, q_1 ± r_1, q_2 ± r_2, q_3 ± r_3]^T    (4)
Multiplication is not commutative and satisfies the Hamilton rules. The multiplication of a real number a and a quaternion q is given by:

aq = aq_0 + aq_1 i + aq_2 j + aq_3 k    (5)
while the multiplication of two quaternions q and r is given by:

q ⊗ r = q_0 r_0 − q⃗ · r⃗ + r_0 q⃗ + q_0 r⃗ + q⃗ × r⃗    (6)
where q⃗ = [q_1, q_2, q_3]^T, r⃗ = [r_1, r_2, r_3]^T, and · and × denote the scalar and vector products, respectively. The norm of a quaternion is defined by:

|q| = √(q ⊗ q^*) = √(q_0^2 + q_1^2 + q_2^2 + q_3^2)    (7)
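As a concrete companion to (1)-(7), the following is a minimal NumPy sketch of these quaternion operations; the helper names are our own illustration, not from any quaternion library:

```python
import numpy as np

# A quaternion is stored as a 4-vector [q0, q1, q2, q3]: real part, then i, j, k.

def conjugate(q):
    """Conjugate (2): negate the three imaginary parts."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def qmul(q, r):
    """Hamilton product (6): scalar part q0*r0 - qv.rv, vector part
    r0*qv + q0*rv + qv x rv."""
    q0, qv = q[0], q[1:]
    r0, rv = r[0], r[1:]
    out = np.empty(4)
    out[0] = q0 * r0 - np.dot(qv, rv)
    out[1:] = r0 * qv + q0 * rv + np.cross(qv, rv)
    return out

def norm(q):
    """Norm (7): sqrt of q (x) q*, which by (3) is the Euclidean 4-norm."""
    return np.sqrt(qmul(q, conjugate(q))[0])

q = np.array([1.0, 2.0, 3.0, 4.0])
r = np.array([0.5, -1.0, 0.0, 2.0])
assert np.isclose(norm(q), np.linalg.norm(q))  # (3) and (7) agree
print(qmul(q, r), qmul(r, q))                  # non-commutative, per (6)
```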
B. Back-propagation algorithm for multi-layer quaternion neural network

A multi-layer quaternion neural network has one input layer, one output layer and N hidden layers (N ≥ 0). All signals, weights and biases are quaternions. The back-propagation algorithm [12] is utilized, as it is a common learning algorithm for multi-layer neural networks; applying it to the quaternion neural network requires following the quaternion algebra above. For ease of description, a three-layer quaternion neural network (with only one hidden layer) is considered in this section.

In the input layer of the quaternion neural network, the ith neuron's input x_i is defined by:

x_i = x_{0i} + x_{1i} i + x_{2i} j + x_{3i} k    (8)

The quaternion back-propagation algorithm introduced in this study is a split quaternion algorithm [1], in which the activation function f of the quaternion neural network is split component-wise and defined by:

f(x_0 + x_1 i + x_2 j + x_3 k) = f_0(x_0) + f_1(x_1) i + f_2(x_2) j + f_3(x_3) k    (9)

In the hidden layer, the jth neuron's output u_j is defined by:

u_j = f( Σ_i W1_{ji} ⊗ x_i + φ1_j )    (10)

where W1_{ji} is the weight between the ith input neuron and the jth hidden neuron, and φ1_j is the jth hidden neuron's bias. In the output layer, the kth neuron's output y_k is defined by:

y_k = f( Σ_j W2_{kj} ⊗ u_j + φ2_k )    (11)

where W2_{kj} is the weight between the jth hidden neuron and the kth output neuron, and φ2_k is the kth output neuron's bias. The learning of the quaternion neural network is conducted by the back-propagation algorithm so as to minimize the cost function:

J = (1/2) Σ_k ϵ_k ⊗ ϵ_k^*    (12)

where ϵ_k is the output error defined by:

ϵ_k = t_k − y_k    (13)

and t_k is the desired output of the kth output neuron. According to the quaternion back-propagation algorithm, from the output layer to the hidden layer the updating formulas of the quaternion weights ΔW and biases Δφ are:

ΔW2_{kj} = μ(−∂J/∂W2_{kj}) = μ δ^O_k ⊗ u_j^*    (14)

Δφ2_k = μ(−∂J/∂φ2_k) = μ δ^O_k    (15)

δ^O_k = ϵ_k ⊙ f′( Σ_j W2_{kj} ⊗ u_j + φ2_k )    (16)

where μ ∈ ℜ+ is the learning rate, ⊙ denotes the component-by-component product and f′ is the derivative of the activation function. From the hidden layer to the input layer, the updating formulas are as follows:

ΔW1_{ji} = μ(−∂J/∂W1_{ji}) = μ δ^H_j ⊗ x_i^*    (17)

Δφ1_j = μ(−∂J/∂φ1_j) = μ δ^H_j    (18)

δ^H_j = [ Σ_k W2_{kj} ⊗ δ^O_k ] ⊙ f′( Σ_i W1_{ji} ⊗ x_i + φ1_j )    (19)
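To make the update rules concrete, here is a minimal NumPy sketch of one training step of split quaternion back-propagation, eqs. (10)-(19). It is our own illustration, not the authors' code: quaternions are stored as real 4-vectors, tanh is assumed as the split activation in (9), and qmul follows the Hamilton product sketched earlier.

```python
import numpy as np

def qmul(q, r):
    """Hamilton product (6) of quaternions stored as [q0, q1, q2, q3]."""
    out = np.empty(4)
    out[0] = q[0] * r[0] - np.dot(q[1:], r[1:])
    out[1:] = r[0] * q[1:] + q[0] * r[1:] + np.cross(q[1:], r[1:])
    return out

def conj(q):
    """Quaternion conjugate (2)."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

f = np.tanh                                # split activation (9), per component
fprime = lambda a: 1.0 - np.tanh(a) ** 2   # its derivative

def train_step(x, t, W1, b1, W2, b2, mu=0.025):
    """One split quaternion back-propagation step.
    x: (I, 4) inputs, t: (K, 4) targets,
    W1: (J, I, 4), b1: (J, 4), W2: (K, J, 4), b2: (K, 4)."""
    nJ, nI = W1.shape[:2]
    nK = W2.shape[0]
    # Forward pass: hidden outputs (10) and network outputs (11).
    s1 = np.array([sum(qmul(W1[j, i], x[i]) for i in range(nI)) + b1[j]
                   for j in range(nJ)])
    u = f(s1)
    s2 = np.array([sum(qmul(W2[k, j], u[j]) for j in range(nJ)) + b2[k]
                   for k in range(nK)])
    y = f(s2)
    eps = t - y                            # output error (13)
    delta_o = eps * fprime(s2)             # (16); * is the component product
    # Hidden-layer error (19), computed before the weights change.
    delta_h = np.array([sum(qmul(W2[k, j], delta_o[k]) for k in range(nK))
                        for j in range(nJ)]) * fprime(s1)
    # Output-layer updates (14)-(15), then hidden-layer updates (17)-(18).
    for k in range(nK):
        for j in range(nJ):
            W2[k, j] += mu * qmul(delta_o[k], conj(u[j]))
        b2[k] += mu * delta_o[k]
    for j in range(nJ):
        for i in range(nI):
            W1[j, i] += mu * qmul(delta_h[j], conj(x[i]))
        b1[j] += mu * delta_h[j]
    return 0.5 * float(np.sum(eps ** 2))   # scalar part of the cost (12)
```

A network with two hidden layers, such as the 1-5-5-1 network used in Section V, would repeat the hidden-layer rules (17)-(19) for the additional layer.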
Fig. 1. Overview of the communication robot.

III. INVERSE KINEMATICS OF 2-LINK ROBOT MANIPULATOR

In robotics, forward kinematics is the process of calculating the position in space of the end of a linked structure (e.g. a robot arm) from the angles of all the joints. It is easy to calculate and has only one solution. Inverse kinematics is the opposite of forward kinematics: it is the task of calculating the joint angles that result in a specific position/orientation of the end-effector of a robot manipulator. In this study, the inverse kinematics of the three-degree-of-freedom robot manipulator shown in Fig. 1 is considered as a practical application of the quaternion neural network to controlling robots. In Fig. 1, θ0, θ1 and θ2 are the joint angles, px, py and pz give the position of the end-effector, and L1, L2 are the lengths of the arm links. The relationship between the end-effector's position and the joint angles is defined as:

px = (L1 cos θ1 + L2 cos(θ1 + θ2)) cos θ0
py = L1 sin θ1 + L2 sin(θ1 + θ2)    (20)
pz = −(L1 cos θ1 + L2 cos(θ1 + θ2)) sin θ0

One common method of solving inverse kinematics is to derive the inverse functions and calculate the angles from them. However, the inverse functions are nonlinear; a solution may not exist, or there may be multiple or even infinitely many solutions. This is why inverse kinematics is a more complex problem than forward kinematics.
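As a concrete reference for the later sections, here is a direct Python transcription of (20). The link lengths are our assumption (the paper does not state L1 and L2); 0.15 each gives a maximum reach of 0.3, consistent with the position ranges used in Section V.

```python
import numpy as np

def forward_kinematics(theta, L1=0.15, L2=0.15):
    """Eq. (20): joint angles [theta0, theta1, theta2] -> end-effector [px, py, pz].
    L1, L2 are assumed link lengths, not values given in the paper."""
    t0, t1, t2 = theta
    reach = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)  # extension in the x-z plane
    px = reach * np.cos(t0)
    py = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    pz = -reach * np.sin(t0)
    return np.array([px, py, pz])

print(forward_kinematics([0.0, np.pi / 4, -np.pi / 4]))
```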
Fig. 2. General learning architecture.

Fig. 3. Specialized learning architecture.
IV. CONTROL SYSTEM USING QUATERNION NEURAL NETWORKS

In this section, the multi-layer quaternion neural network is applied to inverse kinematics control of the robot manipulator. In order to investigate the capability of utilizing the network in robot control, three different architectures of the control system are considered.
Fig. 4. On-line specialized learning architecture.
A. General learning architecture

The general learning architecture is a common architecture in control systems using neural networks [13]. Fig. 2 shows a schematic of the general learning architecture, where the plant is the forward kinematics of the 2-link robot manipulator defined by (20). The input of the quaternion neural network is the end-effector's position p = px i + py j + pz k, generated from the joint angles θ = θ0 i + θ1 j + θ2 k through the plant, and the output of the quaternion neural network is the prediction of the joint angles, θ̃ = θ̃0 i + θ̃1 j + θ̃2 k. The cost function J in the learning of the quaternion neural network is defined by (12) with ϵ_k = θ_k − θ̃_k.

With the general learning architecture, the quaternion neural network can work without modification: after its learning converges, it connects directly to the plant as a controller. However, this architecture has one shortcoming: learning samples that are regular in joint-angle space become irregular after being translated into Cartesian coordinates by the plant, which makes them poorly suited for learning.
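As an illustration of the data flow in Fig. 2, training pairs for general learning could be generated as sketched below. This is our own illustration, reusing the forward_kinematics sketch from Section III and the joint-angle grid reported later in Section V-A:

```python
import numpy as np

# Regular grid in joint-angle space (7 x 7 x 9 = 441 samples, Section V-A).
theta0_grid = np.linspace(-np.pi / 3, np.pi / 3, 7)
theta1_grid = np.linspace(0, np.pi / 2, 7)
theta2_grid = np.linspace(-17 * np.pi / 18, 0, 9)

# The plant maps angles to positions; the network learns the reverse mapping
# p -> theta, so its inputs are the (now irregular) Cartesian points.
samples = [(forward_kinematics([t0, t1, t2]), np.array([t0, t1, t2]))
           for t0 in theta0_grid
           for t1 in theta1_grid
           for t2 in theta2_grid]
print(len(samples))  # 441 (position, angle) training pairs
```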
B. Specialized learning architecture

To avoid the shortcoming above, the specialized learning architecture has been proposed [13] [14] [15]. Fig. 3 shows a schematic of the specialized learning architecture. The input of the quaternion neural network is the desired end-effector position p_d = p_dx i + p_dy j + p_dz k, and the output of the quaternion neural network is the prediction of the joint angles, θ̃. θ̃ is fed into the plant to obtain the end-effector position p, which is compared with p_d. In this architecture the learning samples p_d remain regular in Cartesian coordinates. In the learning of the quaternion neural network, the cost function J is defined by (12) with ϵ_k = p_dk − p_k. According to the quaternion back-propagation algorithm, the updating formula for ΔW is:

ΔW = μ(−∂J/∂W) = Σ_{n=x,y,z} μ(−∂J/∂p_n)(∂p_n/∂Y_k)(∂Y_k/∂W)    (21)

where Y_k is the kth output of the quaternion neural network. ∂J/∂p_n can be calculated through the cost function, and ∂Y_k/∂W follows (15)-(19). The term ∂p_n/∂Y_k is called the plant Jacobian, and knowledge of it is required to apply this architecture. When the plant functions are known, the plant Jacobian can be obtained as the derivative of the plant functions with respect to each input. When the plant is unknown, some method of estimating the plant functions is needed; one solution is to apply an artificial neural network to obtain an approximation of the plant. As the architecture in this subsection is trained off-line, a pretrained neural network (NN2) can be used to estimate the plant, and the plant Jacobian can then be calculated as the partial derivative of NN2 with respect to its input vector.
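For the known-plant case, the plant Jacobian ∂p_n/∂θ_m of (20) can be written in closed form. A sketch, paired with the forward_kinematics function above (our own helper, with a finite-difference sanity check):

```python
import numpy as np

def plant_jacobian(theta, L1=0.15, L2=0.15):
    """Analytic plant Jacobian d p_n / d theta_m of eq. (20), shape (3, 3).
    Rows: px, py, pz; columns: theta0, theta1, theta2."""
    t0, t1, t2 = theta
    r = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)         # reach in the x-z plane
    dr_dt1 = -L1 * np.sin(t1) - L2 * np.sin(t1 + t2)
    dr_dt2 = -L2 * np.sin(t1 + t2)
    return np.array([
        [-r * np.sin(t0), dr_dt1 * np.cos(t0), dr_dt2 * np.cos(t0)],  # d px
        [0.0,             r,                   L2 * np.cos(t1 + t2)], # d py
        [-r * np.cos(t0), -dr_dt1 * np.sin(t0), -dr_dt2 * np.sin(t0)] # d pz
    ])

# Sanity check against central finite differences of forward_kinematics.
th = np.array([0.1, 0.7, -0.5])
eps = 1e-6
num = np.column_stack([
    (forward_kinematics(th + eps * e) - forward_kinematics(th - eps * e)) / (2 * eps)
    for e in np.eye(3)])
assert np.allclose(plant_jacobian(th), num, atol=1e-6)
```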
C. On-line specialized learning architecture

In the application of the specialized learning architecture, prior knowledge of the plant Jacobian, or an approximation of the Jacobian from a pretrained neural network, is required. If knowledge of the plant is not available and pretraining of the plant is impossible, specialized learning is not feasible. To overcome this problem, the on-line specialized learning architecture shown in Fig. 4 is employed. Here two quaternion neural networks are used: quaternion neural network 1 is the controller that approximates the inverse kinematics, and quaternion neural network 2 is the identifier that estimates the plant functions. This architecture is able to work without modification when it is connected to different plants, because quaternion neural network 2 can obtain an approximation of the plant by learning.

V. COMPUTATIONAL EXPERIMENTS

A. General learning architecture

In the experiment of the general learning architecture, 441 learning samples were selected: the range of θ0 is [−π/3, π/3] with an interval of π/9, the range of θ1 is [0, π/2] with an interval of π/12, and the range of θ2 is [−17π/18, 0] with an interval of π/9. One quaternion neural network (QNN) and two conventional real-valued neural networks (NN) were trained in this experiment; the learning rate was 0.025 and the stop criterion was J ≤ 0.02 or more than 10^6 iterations. The learning result is shown in Table I, where the iteration and cost function are average values over ten learning runs.

TABLE I
TRAINING RESULT OF GENERAL LEARNING ARCHITECTURE.

                1-10-10-1 QNN   3-20-20-3 NN   3-21-21-3 NN
Parameter       564             563            612
Iteration       1000000         165501.2       159415.1
Cost function   0.0318          0.02           0.02

The result shows that the quaternion neural network is able to provide an approximation of inverse kinematics in the general learning architecture. However, the conventional neural networks performed better, with a lower cost function and fewer iterations, and both types of neural network need a large number of iterations. This poor performance is caused by the shortcoming of the architecture mentioned in Section IV-A: the irregular learning samples in Cartesian space make the learning difficult, as the Euclidean distances between learning samples are disorganized.

B. Specialized learning architecture

In the experiment of the specialized learning architecture, 486 learning samples were selected in Cartesian space: the range of px is [0.15, 0.3] with an interval of 0.01875, the range of py is [0, 0.3] with an interval of 0.0375, and the range of pz is [−0.125, 0.125] with an interval of 0.05. The quaternion neural network and the conventional neural networks were trained with a learning rate of 0.025 and a stop criterion of J ≤ 0.02 or more than 5 × 10^4 iterations. Two types of specialized learning architecture were considered. For the known-plant type, the Jacobian was the derivative of (20) with respect to θ0, θ1 and θ2. For the unknown-plant type, the NN2 used in the experiment of the quaternion neural network was a pretrained 1-4-4-1 quaternion neural network (132 parameters, cost function 0.01), while the NN2 used in the experiments of the conventional neural networks was a pretrained 3-8-8-3 conventional neural network (131 parameters, cost function 0.01).

The learning results under these two conditions are shown in Table II and Table III, where the iteration and cost function are average values over 10 successful learning runs and the success rate is calculated over 100 learning runs.

TABLE II
TRAINING RESULT OF SPECIALIZED LEARNING ARCHITECTURE (KNOWN PLANT).

                1-5-5-1 QNN   3-15-15-3 NN   3-12-12-3 NN   3-10-10-3 NN
Parameter       184           348            243            183
Iteration       8092.1        19566.8        24514.4        26696.2
Cost function   0.02          0.02           0.02           0.02

TABLE III
TRAINING RESULT OF SPECIALIZED LEARNING ARCHITECTURE (UNKNOWN PLANT).

                1-5-5-1 QNN   3-15-15-3 NN   3-12-12-3 NN   3-10-10-3 NN
Parameter       184           348            243            183
Iteration       2396.8        7665.2         7654.6         8293.8
Cost function   0.02          0.02           0.02           0.02
Success rate    95%           87%            89%            85%

The results show that the quaternion neural network learns faster than the conventional one, with fewer parameters, in both types of specialized learning architecture. Compared with the 1-5-5-1 quaternion neural network, even the 3-15-15-3 conventional neural network cannot perform better despite its more complex network structure. In the architecture with an unknown plant, failed learning is a problem for both the quaternion and the conventional neural network. When the quaternion or conventional neural network learns with the pretrained NN2, the output value of a neuron sometimes exceeds the limits of the activation function; the derivative of the activation function is then near zero, so the changes of the weights become very small. In this condition the cost function decreases very slowly; this phenomenon is referred to here as a locked neural network. On the other hand, wrong learning with the pretrained NN2 also causes failed learning: the output of NN2 is close to the desired signal after learning, but the output shows a large error when the learned neural network is used to control the real plant. Both conditions make the success rates in this experiment low. The success rate is highly related to the pretrained NN2, because different pretrained NN2s with the same cost function value produce different success rates.

C. On-line specialized learning architecture

In the experiment of the on-line specialized learning architecture, the same learning samples were selected in Cartesian space. A 1-5-5-1 quaternion neural network and a 3-15-15-3 conventional neural network were used as controllers in this experiment, with a learning rate of 0.005. As the identifier (NN2), a 1-4-4-1 quaternion neural network was used in the experiment of the quaternion neural network, while a 3-8-8-3 conventional neural network was used in the conventional neural network's experiment. The learning rate of the NN2 was set to 0.05. The stop criterion was J ≤ 0.02 or more than 5 × 10^4 iterations. The learning result is shown in Table IV.

TABLE IV
TRAINING RESULT OF ON-LINE SPECIALIZED LEARNING ARCHITECTURE.

                1-5-5-1 QNN   3-12-12-3 NN   3-15-15-3 NN
Parameter       184           243            348
Iteration       15023.6       39945.6        38359.6
Cost function   0.02          0.02           0.02
Success rate    23%           20%            21%

In this architecture, failed learning is very common, as the NN2 keeps changing, which makes locking of the neural network easy to happen. Over 100 learning runs, only 20 and 21 runs were successful for the 3-12-12-3 and 3-15-15-3 conventional neural networks, with average iterations over 10 successful runs of 39945.6 and 38359.6, respectively. The quaternion neural network works better: its success rate is 23% over 100 runs, and its average iteration over 10 successful runs is 15023.6, fewer than all conventional neural networks in this experiment, with fewer parameters employed.

To investigate the generalization ability of the quaternion neural network, open tests on three trajectories were conducted: a line, a circle and an ellipse test. In the line test, the test samples are 66 points following the trajectory [0.3, 0, −0.1] → [0.3, 0, 0] → [0.3, 0.3, 0] → [0.15, 0.3, 0] → [0.15, 0.3, 0.1]. In the circle test, the 66 test points form a circle with center [0.2, 0.15, 0] and radius 0.1. All test points in these open tests lie within the range of the learning samples but are not included in them, so the quaternion neural network must use its learned knowledge to solve a new problem. Fig. 5 and Fig. 6 show the results of one learning run in the on-line specialized learning architecture for these two test trajectories. However, the trajectories of the line and circle tests are within the range of the learning samples, where px ∈ [0.15, 0.3], py ∈ [0, 0.3] and pz ∈ [−0.125, 0.125], which means that all the test signals are within the knowledge of the trained quaternion neural network. An ellipse test was therefore conducted to investigate the generalization ability on signals that are outside this knowledge. In this test, the 66 test points form an ellipse:

px = 0.2
py = 0.15 + 0.2 cos ψ    (22)
pz = 0.12 sin ψ

where some points along the y axis are outside the learning samples' range. The ellipse test's result of one learning run in the on-line specialized learning architecture is shown in Fig. 7.
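For reference, the circle and ellipse test points can be reproduced as below. The paper states only the point count, the circle's center and radius, and (22), so the uniform sampling in ψ and the plane of the circle are our assumptions:

```python
import numpy as np

psi = np.linspace(0, 2 * np.pi, 66, endpoint=False)

# Circle test: 66 points, center [0.2, 0.15, 0], radius 0.1; placing the
# circle in the x-y plane is our assumption.
circle = np.column_stack([0.2 + 0.1 * np.cos(psi),
                          0.15 + 0.1 * np.sin(psi),
                          np.zeros_like(psi)])

# Ellipse test, eq. (22): some py values exceed the training range [0, 0.3].
ellipse = np.column_stack([np.full_like(psi, 0.2),
                           0.15 + 0.2 * np.cos(psi),
                           0.12 * np.sin(psi)])
print(ellipse[:, 1].max())  # 0.35 > 0.3: outside the learned region
```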
Fig. 5. Open test result of the quaternion neural network in the on-line specialized learning architecture with the line trajectory, where the error sums are: (1/2) Σ ∥p_d − p̃∥ = 0.212, (1/2) Σ ∥p_d − p∥ = 0.198 and (1/2) Σ ∥p − p̃∥ = 0.102.

Fig. 6. Open test result of the quaternion neural network in the on-line specialized learning architecture with the circle trajectory, where the error sums are: (1/2) Σ ∥p_d − p̃∥ = 0.190, (1/2) Σ ∥p_d − p∥ = 0.146 and (1/2) Σ ∥p − p̃∥ = 0.085.

The results on the three trajectories show that the learned quaternion neural network has a good generalization ability and is able to provide a good solution to a new problem inside or outside its knowledge, as the sum of error between the desired response and the output of quaternion neural network 2 is acceptable.

Fig. 7. Open test result of the quaternion neural network in the on-line specialized learning architecture with the ellipse trajectory, where the error sums are: (1/2) Σ ∥p_d − p̃∥ = 0.259, (1/2) Σ ∥p_d − p∥ = 0.156 and (1/2) Σ ∥p − p̃∥ = 0.169.

VI. CONCLUSIONS

In this paper, experiments on inverse kinematics using the quaternion neural network were conducted to investigate the feasibility and characteristics of the quaternion neural network in the application of controlling robotic systems. The results showed that, in the specialized learning architecture and the on-line specialized learning architecture, the quaternion neural network provides better performance (i.e., fewer learning iterations) than the conventional neural network while employing fewer parameters. In inverse kinematics, the quaternion neural network thus showed a clear advantage over the conventional one.

In the experiments of the specialized learning architecture with an unknown plant and of the on-line specialized learning architecture, for both the quaternion and the conventional neural networks, failed learning is a problem that makes the success rate of learning low. In the specialized learning architecture with an unknown plant, the success rate was highly related to the pretrained neural network that estimates the plant output. In the on-line specialized learning architecture, the continuously updated quaternion or conventional neural network 2 made the success rate lower. Future research is needed to investigate why failed learning occurs and how to improve the success rate when two quaternion neural networks work together in these architectures. On the other hand, the quaternion back-propagation algorithm used in this paper is a split algorithm, in which the activation function of the quaternion neural network is divided into four separate functions. This prohibits a rigorous treatment of the cross-information across the data channels and does not exploit the full potential of processing in the quaternion domain [4]. In [17], an improved quaternion back-propagation algorithm with a fully quaternion function based on local analyticity [18] was discussed. Further research on the quaternion neural network using this new algorithm for inverse kinematics is needed.
REFERENCES
[1] P. Arena, R. Caponetto, L. Fortuna, G. Muscato and M. G. Xibilia, "Quaternionic Multilayer Perceptrons for Chaotic Time Series Prediction", IEICE Trans. Fundamentals, Vol. E79-A, No. 10, 1996, pp. 1682–1688.
[2] P. Arena, L. Fortuna, G. Muscato and M. Xibilia, "Multilayer Perceptrons to Approximate Quaternion Valued Functions", Neural Networks, No. 10, 1997, pp. 335–342.
[3] C. Jahanchahi, C. C. Took and D. P. Mandic, "On HR Calculus, Quaternion Valued Stochastic Gradient, and Adaptive Three Dimensional Wind Forecasting", in Proceedings of the 2010 International Joint Conference on Neural Networks, 2010, pp. 1–5.
[4] B. C. Ujang, C. C. Took and D. P. Mandic, "Quaternion-Valued Nonlinear Adaptive Filtering", IEEE Transactions on Neural Networks, Vol. 22, No. 8, 2011, pp. 1193–1206.
[5] H. Kusamichi, T. Isokawa, N. Matsui, Y. Ogawa and K. Maeda, "A New Scheme for Color Night Vision by Quaternion Neural Network", in Proceedings of the 2nd International Conference on Autonomous Robots and Agents, 2004, pp. 101–106.
[6] L. Lincong, F. Hao and D. Lijun, "Color Image Compression Based on Quaternion Neural Network Principal Component Analysis", in Proceedings of the 2010 International Conference on Multimedia Technology, 2010, pp. 1–4.
[7] D. Lijun and H. Feng, "Quaternion K-L Transform and Biomimetic Pattern Recognition Approaches for Color-face Recognition", in Proceedings of the 2009 IEEE International Conference on Intelligent Computing and Intelligent Systems, 2009, pp. 165–169.
[8] W. Mingxuan, C. C. Took and D. P. Mandic, "A Class of Fast Quaternion Valued Variable Stepsize Stochastic Gradient Learning Algorithms for Vector Sensor Processes", in Proceedings of the 2011 International Joint Conference on Neural Networks, 2011, pp. 2783–2786.
[9] Y. Kuroe, Y. Nakai and T. Mori, "A Neural Network Learning of Nonlinear Mappings with Considering Their Smoothness and Its Application to Inverse Kinematics", in Proceedings of the 20th International Conference on Industrial Electronics, Control and Instrumentation, 1994, pp. 1381–1386.
[10] E. Oyama, T. Maeda, J. Q. Gan, E. M. Rosales, K. F. MacDorman, S. Tachi and A. Agah, "Inverse Kinematics Learning for Robotic Arms with Fewer Degrees of Freedom by Modular Neural Network Systems", in Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005, pp. 1791–1798.
[11] E. Sariyildiz, K. Ucak, G. Oke, H. Temeltas and K. Ohnishi, "Support Vector Regression Based Inverse Kinematics Modeling for a 7-DOF Redundant Robot Arm", in Proceedings of the 2012 International Symposium on Innovations in Intelligent Systems and Applications, 2012, pp. 1–5.
[12] S. Haykin, Neural Networks: A Comprehensive Foundation (Second Edition), 2001, pp. 161–171.
[13] D. Psaltis, A. Sideris and A. A. Yamamura, "A Multilayered Neural Network Controller", IEEE Control Systems Magazine, Vol. 8, 1988, pp. 17–21.
[14] M. I. Jordan, "Generic Constraints on Underspecified Target Trajectories", in Proceedings of the International Joint Conference on Neural Networks, 1989, pp. 217–225.
[15] V. Kecman, "Learning in an Adaptive Backthrough Control Structure", in Proceedings of the 1997 3rd International Conference on Algorithms and Architectures for Parallel Processing, 1997, pp. 611–624.
[16] G. W. Ng and P. A. Cook, "On-line Adaptive Control of Non-linear Plants Using Neural Networks with Application to Liquid Level Control", International Journal of Adaptive Control and Signal Processing, Vol. 12, 1998, pp. 13–28.
[17] T. Isokawa, H. Nishimura and N. Matsui, "Quaternionic Multilayer Perceptron with Local Analyticity", Information, Vol. 3, 2012, pp. 756–770.
[18] S. De Leo and P. P. Rotelli, "Quaternionic Analyticity", Applied Mathematics Letters, Vol. 16, 2003, pp. 1077–1081.