2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Daejeon Convention Center October 9-14, 2016, Daejeon, Korea

Proof-of-Concept of a Robotic Apple Harvester*

Joseph R. Davidson, Abhisesh Silwal, Cameron J. Hohimer, Manoj Karkee, Changki Mo, Qin Zhang

Abstract— There are no mechanical harvesters commercially available for the fresh market apple industry. The absence of automated harvesting technology is a critical problem because of rising production costs and increasing uncertainty about future labor availability. This paper presents the preliminary design of a robotic apple harvester. The approach adopted was to develop a low-cost, ‘undersensed’ system for modern orchard systems with fruiting wall architectures. A machine vision system fuses the Circular Hough Transform and blob analysis to detect clustered and occluded fruit. The design includes a custom, six degree of freedom manipulator with an underactuated, passively compliant end-effector. After fruit localization, the system makes a linear approach to the apple and replicates the human picking process. Integrated testing of the robotic harvesting system has been completed in a laboratory environment with a replica apple tree for proof-of-concept demonstration. Experimental results show that the system picked 95 of the 100 fruit attempted, with average localization and picking times of 1.2 and 6.8 seconds per fruit, respectively. Additional work planned in preparation for field evaluation in a commercial orchard is also described.

I. INTRODUCTION

Despite extensive mechanization of agriculture in the twentieth century, production of high-value specialty crops such as fresh market apples still depends on manual labor. Every fresh market apple is picked by the human hand. The most time- and labor-intensive task in fresh market tree fruit production is harvesting. In Washington State alone, the apple and pear harvest requires seasonal employment of approximately 30,000 additional workers [1, 2]. Like many agricultural sectors around the world, the U.S. Pacific Northwest tree fruit industry faces economic pressure from rising labor costs and increasing uncertainty about the future availability of farm labor [3, 4].

Significant research has been devoted to selective tree fruit harvesting with robotics technology [5]. Some of the earliest fruit picking robots were developed in the 1980s [6, 7]. Over the past decade, robotic apple harvesters have been proposed by Baeten et al. [8], Bulanon and Kataoka [9], and Zhao et al. [10]. Reported harvest efficiencies for these systems were approximately 80%, with picking times varying from 8 to 15 sec per fruit. However, despite numerous attempts to transfer robotics technology to the orchard, there are no known commercial implementations of robotic harvesting systems for fresh market tree fruit [5]. The highly unstructured orchard environment has proven very challenging for existing technology. Environmental challenges include dynamic outdoor conditions, complex tree structures, delicate products, and variable fruit shape and size. Commercial implementation of robotic apple harvesting systems requires performance improvements in both speed and robustness.

This paper presents the preliminary design of a robotic apple harvesting system. The proposed system has been developed for modern apple orchard systems with fruiting wall architectures. This working environment, which is briefly described in Section II, reduces fruit detection challenges. A system robust to perception error is proposed in order to reduce sensing requirements and improve manipulation speed. The implemented fruit picking method also replicates human picking patterns in order to minimize fruit bruising. Section III presents an overview of the harvester’s integrated design. Replicated apple picking in a laboratory environment was conducted to evaluate the proof-of-concept. Results of this integrated lab testing are presented in Section IV. Additional work planned in preparation for field trials in a commercial orchard is also described.

II. ROBOTIC APPLE HARVESTER DESIGN

The fundamental objective of an apple harvesting robot is to efficiently detach a fruit from the tree without damaging it or the plant. In order to achieve this goal, the system must meet the following basic functional requirements:

1. Detection of the fruit in the scene, identification of its properties such as color and geometry, and localization of the fruit in 3-dimensional space

2. Approach to the fruit

3. Detachment of the fruit from the tree

4. Guiding the harvested fruit to the storage container

The final working environment for the robotic harvesting system is a highly maintained commercial apple orchard in Washington State employing the V-trellis fruiting wall architecture (Fig. 1a). This is a modern orchard system with tree branches laterally trained along trellis wires, where orchard parameters such as tree and branch spacing are relatively structured. The resulting two-dimensional, planar canopy enhances visibility and accessibility of the fruit compared to conventional, 3D tree canopies. Also, as shown in the highlighted region of Fig. 1b, thinning and pruning practices have created a workspace between the trellis wires that is relatively obstacle free.

*Research supported in part by the National Robotics Initiative, USDA National Institute of Food and Agriculture (NIFA), Project #1000339, and by USDA NIFA Hatch project #1005756 and project #1001246 received from the Washington State University Agricultural Research Center.
J. R. Davidson, C. J. Hohimer, and C. Mo are with the School of Mechanical and Materials Engineering, Washington State University, Richland, WA 99354 USA (email: [email protected], [email protected], [email protected]).
A. Silwal, M. Karkee, and Q. Zhang are with the Center for Precision and Automated Agricultural Systems, Biological Systems Engineering Department, Washington State University, Prosser, WA 99350 USA (email: [email protected], [email protected], [email protected]).
978-1-5090-3762-9/16/$31.00 ©2016 IEEE


Figure 1. (a) Well maintained V-trellis orchard system. (b) The region between the trellis wires is relatively free of obstacles.

Considering the advantages of this orchard environment for robotic harvesting, the approach adopted for the preliminary design was to:

• Decrease harvesting cycle time by visually detecting fruit only and avoiding computationally expensive obstacle detection and motion planning calculations

• Reduce sensing requirements and system cost with ‘look’ and ‘move’ apple picking using an end-effector mechanically robust to perception error

• Minimize fruit damage by replicating the human picking process

Because of the adopted open-loop picking technique and use of minimal sensing, the proposed system is described as ‘undersensed’. The harvester’s design is presented in the following sections.

A. Manipulator and End-Effector

A custom manipulator and end-effector were fabricated for the robotic apple harvester. The manipulator is a six degree-of-freedom (DOF) serial link design incorporating modular Dynamixel Pro actuators (Robotis Inc., Irvine, CA). All joints are revolute. Structural frames were fabricated from aluminum sheet metal using a water jet cutter and sheet metal bender. Fig. 2 shows the CAD model of the integrated end-effector and manipulator.

In order to minimize fruit bruising and improve harvest efficiencies, the end-effector design is based on the human hand’s manipulation patterns during apple picking. In brief, the standard grasp used by professional pickers is a power grasp with the index finger placed on the stem/abscission joint. To separate the fruit from the branch, the picker applies a combined pulling/pendulum motion. Bending of the stem joint produces shear forces and reduces the normal contact forces required to detach the fruit [11]. Stem retention is desired for increased market value. It is also important that the spur is not detached from the tree, because doing so removes a fruiting position for the following year’s crop.

The end-effector has three tendon-driven fingers that produce a spherical power grasp. Underactuation provides a shape-adaptive grasp of apples with variable geometries. Each finger has two links and two flexure joints. Passively compliant joints have been incorporated to increase grasping robustness to position error [12] and to reduce the likelihood of damage during unintended collisions. The end-effector also includes an underactuated gripper that applies pressure against the apple’s stem while the fruit is grasped. To minimize manipulator payload, all end-effector components were fabricated with additive manufacturing. More detailed information about the design and fabrication of both the manipulator and end-effector is provided in the recent conference paper by Davidson and Mo [13].
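The shape-adaptive behavior of a single-tendon, two-joint underactuated finger can be illustrated with a simple quasi-static model. This is a generic sketch under assumed pulley radii and flexure stiffnesses, not the authors' fabricated design parameters:

```python
def finger_angles(d, r=(0.01, 0.01), k=(0.5, 0.5), stop1=None):
    """Quasi-static joint angles (rad) of a single-tendon, two-flexure finger.

    The tendon excursion d (m) is shared by the joints: r1*t1 + r2*t2 = d,
    where r_i are pulley radii (m) and k_i are flexure stiffnesses (N*m/rad).
    Free closing follows the torque balance t_i = T*r_i/k_i for tendon
    tension T. If the proximal link contacts the fruit, t1 is clamped at
    stop1 and the remaining excursion flexes the distal joint -- the
    shape-adaptive behavior of underactuation. All values are illustrative.
    """
    r1, r2 = r
    k1, k2 = k
    # unconstrained closing: eliminate T from the torque balance
    scale = d / (r1**2 / k1 + r2**2 / k2)
    t1, t2 = scale * r1 / k1, scale * r2 / k2
    if stop1 is not None and t1 > stop1:
        t1 = stop1                   # proximal joint stopped by contact
        t2 = (d - r1 * t1) / r2      # distal joint takes the rest
    return t1, t2

free = finger_angles(0.004)                 # no contact: joints close equally
wrapped = finger_angles(0.004, stop1=0.05)  # proximal link blocked early
```

With equal radii and stiffnesses, the free grasp closes both joints equally; once the proximal link is stopped at 0.05 rad, the distal joint continues flexing and wraps around the fruit.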

Figure 2. Six DOF manipulator and underactuated end-effector fabricated for the robotic apple harvester.

B. Vision System

Visual sensing is an essential and primary task for an autonomous robotic harvesting system. However, vision is often considered a bottleneck in developing commercially applicable robotic harvesting systems. Variable lighting conditions, fruit clustering, and occlusion are some of the significant challenges that limit the performance of a machine vision system in an orchard environment. Prior to harvesting, the robotic system must identify and accurately locate the fruit. The machine vision algorithm developed by Silwal et al. [14] has been adopted for apple identification. This algorithm uses the Circular Hough Transform (CHT) to identify clearly visible fruit as well as individual apples in clusters. CHT was selected because of an apple’s generally circular shape and its robustness in detecting circles in the presence of noise and occlusion. However, apples partially visible in images have random and fragmented shapes that are not detectable with CHT. These apple fragments were identified using blob analysis. To minimize multiple detections, a clustering algorithm was used to merge fragmented parts of the same apple. Finally, CHT and blob analysis results were fused in order to improve the accuracy of apple identification. The algorithm has been previously tested in an orchard environment with 90% fruit identification accuracy. A sample output image of the machine vision algorithm is displayed in Fig. 3, where apples identified by CHT are marked with green circles and those identified by blob analysis are marked with yellow rectangles.

Figure 3. Output of machine vision system.

The global camera system consists of a single CCD (Charge-Coupled Device) color camera (Prosilica GC1290C, Allied Vision Technologies, Exton, PA) mounted on top of a time-of-flight (ToF) based 3D camera (Camcube 3.0, PMD Technologies, Siegen, Germany) with a similar field of view. Spatial resolutions of the color and 3D cameras are 1280 x 960 pixels and 200 x 200 pixels, respectively. The goal of this configuration is to acquire color images with the CCD camera to identify the apples and then obtain their 3D coordinates from the 3D camera to localize their positions in space. Unlike other fruit harvesting vision systems that attach a camera to the manipulator or end-effector [10, 15], the use of a single set of global cameras does not require computationally expensive visual-servoing techniques that may constrain manipulation speeds. The machine vision system is used once at the beginning of each harvest cycle to identify and localize the apples.

To obtain the 3D coordinates of every apple identified in the color image (Fig. 4a), the 3D image consisting of the 3D coordinates at each pixel (Fig. 4b) is mapped to the appropriate location in the 2D color image (Fig. 4c). Such co-registration requires multiple parameters, often described as intrinsic and extrinsic camera parameters. The intrinsic parameters include focal length, principal point, and distortion coefficients (skew, radial, and tangential). These intrinsic parameters were estimated using a camera calibration toolbox in Matlab [16] and a checkerboard (9 x 7, with each square measuring 0.05 m x 0.05 m). The checkerboard was placed one meter in front of the camera. Twenty images of the checkerboard at varying orientations were then captured by both the 2D and 3D cameras. To estimate all intrinsic parameters, the intensity image from the 3D camera and the color image from the 2D camera were used to extract the grid corners of the checkerboard using the calibration toolbox. These parameters were used to remove any lens distortions in the 2D and 3D images. Because the cameras have different fields of view (40° vertical x 40° horizontal for the 3D camera and 33.4° vertical x 43.6° horizontal for the CCD) and different camera centers, a transformation matrix (rotation and translation vectors combined as extrinsic parameters) was required to bring them to a common plane. This transformation was computed using the stereo calibration toolbox in Matlab [16], with the color camera used as the left-hand reference camera. Once the intrinsic and extrinsic parameters were known, the 3D image acquired from the 3D camera was inverse mapped to the color image by

URGB = fx,RGB · X′/Z′ + Cx,RGB
VRGB = fy,RGB · Y′/Z′ + Cy,RGB          (1)

where URGB and VRGB are the coordinates of the 3D pixels in the RGB image, X′, Y′, and Z′ are 3D coordinates with respect to the RGB camera, fx,RGB and fy,RGB are the focal lengths along the x and y axes of the color camera, and Cx,RGB and Cy,RGB are the camera centers along the x and y axes, respectively. A similar fusion of a low-resolution 3D image and a high-resolution 2D color image was used by Van den Bergh and Van Gool [17] for 3D hand gesture interaction. Fig. 4c shows an overlaid image after mapping 3D coordinates onto a color image. Only those 3D coordinates with distances close to the apples were mapped onto the color image; this step was necessary to ensure the accuracy of the averaging function. The 3D coordinates mapped inside the green circles were then averaged to calculate the mean x, y, and z coordinates of the respective apples. On average, the 3D coordinates obtained from the ToF camera registered measurement error within ±6 mm at a distance of 1.5 m, which is representative of the camera-to-canopy distance expected during field studies.

Figure 4. Inverse mapping of 3D pixels onto the 2D color image. (a) RGB image, (b) 3D depth image, (c) fusion of the 2D and 3D images.
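The inverse mapping in (1) is a standard pinhole projection. A minimal Python sketch follows; the intrinsic values below are illustrative placeholders, not the calibrated parameters from this study:

```python
import numpy as np

def project_to_rgb(points_3d, fx, fy, cx, cy):
    """Map 3D points (in the RGB camera frame, meters) to RGB pixel
    coordinates using the pinhole model of Eq. (1)."""
    X, Y, Z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return np.stack([u, v], axis=1)

# Illustrative intrinsics for a 1280 x 960 image (placeholders)
fx, fy = 1000.0, 1000.0   # focal lengths in pixels
cx, cy = 640.0, 480.0     # principal point

# A ToF point already transformed into the RGB frame:
# 1.5 m ahead of the camera, 0.15 m right of the optical axis
pts = np.array([[0.15, 0.0, 1.5]])
print(project_to_rgb(pts, fx, fy, cx, cy))  # approximately [[740., 480.]]
```

In the full pipeline, each ToF point would first be transformed by the extrinsic rotation and translation into the RGB camera frame before this projection is applied.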

III. INTEGRATED HARVESTING SEQUENCE

Autonomous apple harvesting requires integration of visual sensing and mechanical manipulation. The logical sequence of picking activities adopted for the preliminary design is shown in the flow chart in Fig. 5. Key steps in the picking process, including apple prioritization, path planning, and fruit detachment, are discussed in more detail in the following sections.

Figure 5. Flow chart describing the logical sequence of activities during apple harvesting (hide arm outside field of view; acquire image and localize fruit; call IK solver for all fruit; send arm to home position; find nearest apple; if the apple is in the reachable workspace, approach it along the azimuth vector, grasp, pick, and drop it; repeat while unpicked apples remain).

A. Apple Prioritization

Apple prioritization is treated as the travelling salesman problem (TSP). The TSP is a well-known optimization problem in the class of Non-Deterministic Polynomial-Time hard (NP-hard) problems [18]. In summary, the TSP seeks a route that starts from a known location, visits a prescribed set of locations (each exactly once), and returns to the starting location such that the total travelling distance is minimized [18]. For the preliminary design, the Nearest Neighbor (NN) algorithm has been selected for the TSP. Before capturing an image, the manipulator is moved out of the camera’s field of view. Then, fruit identification and startup positioning of the manipulator are executed in parallel. Prior to harvesting, the robotic manipulator waits in its home position for prioritization; this location is considered the starting location in the TSP. All distances to and between apples are calculated using the 3D Euclidean distance formula with respect to the home location, and a lookup table is created consisting of all distances used to find the solution. Once the lookup table is formulated, the algorithm prioritizes the nearest apple and the manipulator is directed to pick that particular fruit.

B. Path Planning

Because of the planar apple distribution with minimal branch obstructions (Fig. 1b), the workspace is assumed to be obstacle free. The world reference frame O (a right-hand coordinate frame) with unit vectors i, j, k is located at the manipulator’s base, with the x axis directed toward the plane of the tree and the z axis vertical. Using pure translation, the fruit’s position vector is transformed from the camera’s reference frame to frame O. Let p = ai + bj + ck be the vector of coordinates of the fruit’s center with respect to frame O and Φ = [φ ψ α]T be the roll-pitch-yaw Euler angles describing rotations around the x-, y-, and z-axes, respectively. During a single picking sequence, the manipulator guides the end-effector to an approach position pA located 10 cm away from the fruit using point-to-point motion with trapezoidal velocity profiles. The 3 x 3 rotation matrix RE relating the orientation of the end-effector frame E to frame O at the approach position is RE = RZ(α), where α = tan⁻¹(b/a) is the azimuth angle (Fig. 6). The end-effector’s motion from the approach position to the fruit follows a linear path defined in Cartesian space. As the pitch ψ is zero, the approach is horizontal. Using differential kinematics, the end-effector approaches the fruit along its normal vector n with a translational velocity vT = RE [0.2 0 0]T (m/sec). There is no angular end-effector velocity during the approach.

Figure 6. The azimuth angle partially defines the orientation of the end-effector’s normal vector at the approach position.
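The approach-pose computation described in Section III-B can be sketched in Python. NumPy is assumed, and the sample coordinates are illustrative; atan2 is used in place of tan⁻¹(b/a) so the azimuth stays well defined in all quadrants:

```python
import numpy as np

def rot_z(alpha):
    """Rotation matrix for a rotation of alpha radians about the z axis."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def approach_pose(p, standoff=0.10, speed=0.2):
    """For a fruit center p = [a, b, c] in base frame O, return the
    end-effector rotation R_E = R_z(alpha), the approach position 10 cm
    back along the horizontal azimuth vector, and the linear approach
    velocity v_T = R_E [speed 0 0]^T."""
    a, b, _ = p
    alpha = np.arctan2(b, a)             # azimuth angle of the fruit
    R_E = rot_z(alpha)
    n = R_E @ np.array([1.0, 0.0, 0.0])  # end-effector normal, horizontal
    p_A = np.asarray(p, dtype=float) - standoff * n
    v_T = R_E @ np.array([speed, 0.0, 0.0])
    return R_E, p_A, v_T

# Fruit 0.8 m ahead of the base at 1.2 m height (illustrative coordinates)
R_E, p_A, v_T = approach_pose([0.8, 0.0, 1.2])
print(p_A)   # approximately [0.7 0.  1.2]
print(v_T)   # approximately [0.2 0.  0. ]
```

The linear approach from pA to the fruit then follows the fixed velocity v_T until the standoff distance is covered.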

C. Fruit Detachment

To grasp the fruit, the end-effector’s actuators are set in torque mode with a limiting current and driven to their stall point. After 1.5 sec, assuming that quasi-static equilibrium has been reached and the apple is securely grasped, the manipulator simultaneously rotates the apple 30° around the end-effector normal n and removes it horizontally to a storage position 13 cm away from the tree canopy, so that at the storage position RE = RZ(α)RX(φ). The end-effector is then opened and the fruit released. The inverse kinematics solver developed for the system uses a numerical algorithm [19] to obtain joint solutions for three positions: the approach position, the fruit position, and the storage position. If any of the positions requires an infeasible joint solution (e.g., violates a joint limit or lies outside the manipulator’s workspace), the respective fruit is excluded from the harvesting cycle. After the fruit is removed, the lookup table is updated to remove the harvested apple from the travel path. This process repeats until all prioritized apples are harvested. As the number of apples was limited to a few during experimental testing in the laboratory (Section IV), the NN algorithm always provided a globally optimal solution at negligible computational cost.
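The nearest-neighbor prioritization and lookup-table update described above can be sketched as follows (a minimal Python illustration; the function and variable names are my own, not the authors' implementation):

```python
import numpy as np

def nearest_neighbor_order(home, apples):
    """Greedy nearest-neighbor ordering of apple coordinates, starting from
    the manipulator's home position. Mirrors the NN heuristic for the TSP:
    repeatedly visit the closest unharvested apple, then drop it from the
    candidate list (the lookup-table update)."""
    apples = [np.asarray(a, dtype=float) for a in apples]
    remaining = list(range(len(apples)))
    order = []
    current = np.asarray(home, dtype=float)
    while remaining:
        # 3D Euclidean distance from the current position to each candidate
        i = min(remaining, key=lambda j: np.linalg.norm(apples[j] - current))
        order.append(i)
        current = apples[i]
        remaining.remove(i)
    return order

home = [0.0, 0.0, 0.0]
apples = [[1.0, 1.0, 0.0], [0.5, 0.0, 0.0], [2.0, 0.0, 0.0]]
print(nearest_neighbor_order(home, apples))  # [1, 0, 2]
```

For the handful of fruit per cycle reported in Section IV, this greedy ordering coincides with the optimal tour at negligible cost; for larger scenes it is only a heuristic.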

IV. LABORATORY PICKING EXPERIMENT

A. Experimental Set-Up

To evaluate the performance of the preliminary design, integrated system testing was conducted with a laboratory mock-up. The experimental set-up (Fig. 7a) included a replica apple tree with foliage, branches, and suspended fruit. The replica tree was created with a planar design to represent the workspace shown in Fig. 1b. A black curtain was suspended behind the tree; this curtain is representative of the uniform background present inside the over-the-row structure designed and implemented during an earlier study by Silwal et al. [14]. The over-the-row structure provided a controlled lighting environment, prevented fruit motion from wind during image capture, and protected instruments from precipitation. Future field evaluations of the robotic harvesting system will use the over-the-row platform. During lab studies, the vision system was mounted 0.5 m behind and 0.5 m above the base of the manipulator. The distance from the tree to the vision system varied from approximately 0.9 to 1.1 m.

B. Results

The goal of lab testing was to measure harvesting effectiveness and speed in order to evaluate the proof-of-concept. To gauge performance, the following metrics were defined:

• Localization time (sec) – the time required to identify and localize a single fruit

• Path planning time (sec) – the average computation time required to complete inverse kinematics and path planning calculations for a single fruit position

• Picking time (sec) – the time required for the manipulator and end-effector to approach, detach, and store a single fruit

A total of twenty-four picking cycles were completed. During each picking cycle the number of fruit present in the tree varied from two to five, and the number and location of the fruit were randomized. Of the 109 apples localized, 100 were in the system’s reachable workspace. Of these 100 fruit the system attempted to pick, 95% were successfully detached. Fifty-six of the 100 fruit had their stems gripped during the picking sequence. A picture of the end-effector approaching an apple is shown in Fig. 7b. The mean localization time, motion planning time, and picking time are shown in Table 1. The time required to detect and localize a single fruit was approximately 1.18 sec, and the mean picking time was just under 7 sec per apple.

Table 1. Results of Integrated System Testing in Laboratory Setting

Metric               Mean (sec)   Standard Deviation
Localization time    1.18         0.25
Path planning time   0.06         0.01
Picking time         6.82         0.28

Figure 7. (a) Experimental set-up used for integrated system testing. (b) End-effector approaching a replica apple during lab testing.

C. Discussion

Open-loop, look-and-move fruit picking produced acceptable harvesting efficiencies. The system was able to pick most fruit using visual detection performed once at the beginning of a harvest cycle. Detachment of a fruit from the tree did cause some vibration of the remaining apples, but the end-effector was still able to complete successful grasps. However, the percentage of fruit stems successfully gripped (56%) during laboratory testing represents an opportunity for improvement. Not applying stem pressure increases the likelihood of a stem pull or spur detachment. In all cases where the stem was missed, the approach azimuth was not coincident with the center of the fruit. The most likely cause is insufficient calibration between the machine vision system and the manipulator. Another indicator of calibration problems is a significant discrepancy in stem gripping correlated with the fruit’s y coordinate: fruit with negative y-coordinates had their stems gripped nearly 100% of the time, whereas fruit with positive y-coordinates rarely had a successful stem grip. Improved system calibration will be addressed in future work.

There were several limitations of the laboratory set-up that restricted performance assessment. First, because of the relatively few apples present during each picking cycle, it was difficult to explore the tradeoff between fruit position error tolerance and interference from adjacent fruit and/or stems. While passive compliance of the end-effector may help ensure a successful grasp despite imprecise positioning, position error may still lead to picking failures if an obstruction blocks an end-effector component during grasping. Second, in the laboratory set-up each stem had a vertical orientation; the stem orientation of apples on a tree is variable and not always vertical. Studies to determine whether the vision system described in this paper can determine stem orientation are ongoing.

The cycle time of roughly eight sec per fruit is significantly slower than human picking speed: a professional farm laborer picks an apple in approximately two sec. The small number of apples picked each cycle in the laboratory, compared to realistic field conditions, may have inflated this figure. Picking time might decrease with more apples per scene than the three to four used in these experiments, because the time taken to move from home to the first apple and from the last apple back to home would have less effect on the overall average. Additionally, during laboratory testing the apple was simply dropped at the storage point, as the initial focus of the evaluation was fruit detection and manipulation performance; total cycle time should also include the time required for fruit storage. Future work includes the development of a secondary robotic system that will follow the end-effector, catch the apple at the end-effector’s release position, and store the fruit in the storage container.

V. CONCLUSION

This paper presented the preliminary design of a low-cost, undersensed robotic system designed to harvest fresh market apples in modern orchard systems with planar architectures. Results from picking experiments using a laboratory mock-up provide confidence in the approach and validate the proof-of-concept. Several design modifications are planned in preparation for field studies in a commercial orchard. The harvesting system’s workspace will be expanded to include larger portions of the tree canopy by mounting the manipulator on a prismatic base. Also, for mobility within the orchard, the robotic system will be mounted on an electric utility vehicle driven by a human operator. A potential configuration being considered for field trials is shown in Fig. 8.

Figure 8. Harvesting system mounted on the back of an electric utility vehicle (John Deere, Moline, IL, USA).

REFERENCES

[1] S. Galinato and R. K. Gallardo, "2010 Estimated Cost of Producing Pears in North Central Washington," Washington State University Fact Sheet (FS031E), 2011.
[2] R. K. Gallardo, M. Taylor and H. Hinman, "2009 Cost Estimates of Establishing and Producing Gala Apples in Washington," Washington State University Fact Sheet (FS005E), 2010.
[3] S. A. Fennimore and D. J. Doohan, "The Challenges of Specialty Crop Weed Control," Weed Technology, pp. 364-372, 2008.
[4] L. Calvin and P. Martin, "The U.S. Produce Industry and Labor: Facing the Future in a Global Economy," Economic Research Service, United States Department of Agriculture (USDA), 2010.
[5] C. Wouter Bac, E. J. van Henten, J. Hemming and Y. Edan, "Harvesting Robots for High-value Crops: State-of-the-art Review and Challenges Ahead," Journal of Field Robotics, vol. 31, no. 6, pp. 888-911, 2014.
[6] A. Grand D'Esnon, "Robotic Harvesting of Apples," in Proceedings of Agri-Mation 1, ASAE Paper 1-85, St. Joseph, MI, 1985.
[7] R. C. Harrell and P. Levi, "Vision Controlled Robots for Automatic Harvesting of Citrus," in International Conference on Agricultural Engineering, Paper No. 88.426, Paris, France, 1988.
[8] J. Baeten, K. Donne, S. Boedrij, W. Beckers and E. Claesen, "Autonomous Fruit Picking Machine: A Robotic Apple Harvester," Field and Service Robotics, vol. 42, pp. 531-539, 2008.
[9] D. M. Bulanon and T. Kataoka, "Fruit Detection System and an End Effector for Robotic Harvesting of Fuji Apples," Agricultural Engineering International: CIGR Journal, vol. 12, no. 1, pp. 203-210, 2010.
[10] D. Zhao, J. Lu, W. Ji, Y. Zhang and Y. Chen, "Design and Control of an Apple Harvesting Robot," Biosystems Engineering, vol. 110, pp. 112-122, 2011.
[11] J. Tong, Q. Zhang, M. Karkee, H. Jiang and J. Zhou, "Understanding the Dynamics of Hand Picking Patterns of Fresh Market Apples," in ASABE and CSBE/SCGAB Annual International Meeting, Montreal, Canada, 2014.
[12] A. M. Dollar and R. D. Howe, "The Highly Adaptive SDM Hand: Design and Performance Evaluation," The International Journal of Robotics Research, vol. 29, no. 5, pp. 585-597, April 2010.
[13] J. R. Davidson and C. Mo, "Mechanical Design and Initial Performance Testing of an Apple-Picking End-Effector," in ASME International Mechanical Engineering Congress and Exposition, Houston, TX, USA, November 13-19, 2015.
[14] A. Silwal, A. Gongal and M. Karkee, "Identification of Red Apples in Field Environment with Over-the-Row Machine Vision System," Agricultural Engineering International: CIGR Journal, vol. 16, no. 4, pp. 66-75, 2014.
[15] D. Font, T. Palleja, M. Tresanchez, D. Runcan, J. Moreno, D. Martinez, M. Teixido and J. Palacin, "A Proposal for Automatic Fruit Harvesting by Combining a Low Cost Stereovision Camera and a Robotic Arm," Sensors, vol. 14, pp. 11557-11579, 2014.
[16] J. Y. Bouguet, "Camera Calibration Toolbox for Matlab," 2013. [Online]. Available: http://www.vision.caltech.edu/bouguetj/calib_doc.
[17] M. Van den Bergh and L. Van Gool, "Combining RGB and ToF Cameras for Real-time 3D Hand Gesture Interaction," in 2011 IEEE Workshop on Applications of Computer Vision (WACV), Kona, HI, USA, 2011.
[18] D. Nagamalai, A. Kumar and A. Annamalai, Advances in Computational Science, Engineering, and Information Technology, Volume 1, Konya, Turkey: Proceedings of the Third International Conference on Computational Science, Engineering, and Information Technology, 2013.
[19] L. T. Wang and C. C. Chen, "A Combined Optimization Method for Solving the Inverse Kinematics Problem of Mechanical Manipulators," IEEE Transactions on Robotics and Automation, vol. 7, no. 4, pp. 489-499, August 1991.