A Feature Learning Based Approach for Automated Fruit Yield Estimation

Calvin Hung, James Underwood, Juan Nieto and Salah Sukkarieh

Australian Centre for Field Robotics, The University of Sydney, e-mail: [email protected]
Abstract This paper demonstrates a generalised multi-scale feature learning approach to multi-class segmentation, applied to the estimation of fruit yield on tree crops. The learning approach makes the algorithm flexible and adaptable to different classification problems, and hence applicable to a wide variety of tree-crop applications. Extensive experiments were performed on a dataset consisting of 8000 colour images collected in an apple orchard. This paper shows that the algorithm was able to segment apples of different sizes and colours in an outdoor environment with natural lighting conditions, using a single model obtained from images captured with a monocular colour camera. The segmentation results are applied to the problem of fruit counting, and the results are compared against manual counting, showing a squared correlation coefficient of R² = 0.81.
1 Introduction

A continuously growing population has generated an unprecedented demand for agricultural production. Several problems with traditional farming will need to be addressed in order to achieve the required increase in production. First of all, there is a shortage of labour in rural areas. Migration and urbanisation have left rural areas with a reduced and ageing population, and have also reduced the available farmland for production.
Secondly, the impact of current practices on climate change creates a need for innovation in order to make agriculture a sustainable practice. Reducing emissions, increasing efficiency in the use of resources such as water, and reducing the use of chemical products are some of the actions that will be needed to reduce the environmental footprint of farming.

Robotics and automation are expected to have a significant impact on the farms of the future by increasing production efficiency, improving the use of resources and reducing labour costs. Information gathering systems are already being used on many farms. Geographic Information Systems (GIS) based on remote sensing data have been shown to provide good methods for solving spatial problems and supporting decision making [1]. The main limitation of GIS is that it provides coarse spatial and temporal information. Manual sampling suffers from the same two shortfalls, in addition to increased labour cost. On the other hand, field data from ground mobile sensor platforms can increase spatial and temporal resolution with improved operational and cost benefits. The operational benefit of an autonomous system is to provide accurate yield estimation, which supports better decision making, such as fertiliser application across a farm or production forecasting later in the supply chain. In addition, the associated image processing research can provide an understanding of the health and nutritional value of individual fruit.

This paper demonstrates a feature learning based approach for image based fruit segmentation. We present results for apple segmentation using a camera mounted on an unmanned ground vehicle, shown in Fig. 1. Outdoor fruit classification using vision systems is a challenging task: an autonomous segmentation approach has to deal with illumination changes, occlusion, a variety of object sizes and, in our case, a variety of colours. The algorithm presented here performs pixel classification using feature learning to automatically select the most important attributes of the target (fruit). We show that the approach can handle different illumination conditions and different fruit sizes and colours. In particular, this paper presents the following contributions:

• an extension to apple tree segmentation of the algorithm presented in [2] for almonds;
• extensive experiments using data collected in an apple orchard with a robotic platform, where the algorithm detects both red and green apples with a common model;
• a pipeline for fruit counting, validated against manual fruit counting performed by the farmer.
2 Related Work

There has been a relatively large amount of work on fruit segmentation using vision. The literature covers a variety of fruits, such as grapes [3, 4], mangoes [5], oranges [6] and apples [7, 8].
Fig. 1 The perception research ground vehicle Shrimp, operating at the apple orchard: (a) sensor configuration; (b) scanning a row.
A common trend in orchard tree classification is the use of manually designed features, such as intensity or colour ranges or shape features [9]. For example, the work presented in [8] uses a simple intensity threshold to segment flowers, while the work presented in [5] uses a colour threshold. Another example is the work proposed in [4], where shape based detection followed by a colour and texture classifier is applied to grape segmentation. The main drawback of these approaches is that they are specifically designed for a particular fruit variety, and need to be re-designed for different fruits or to cope with variations such as seasonal changes.

Apart from the ever-present occlusion and illumination challenges, fruit classification becomes a simpler task when the fruit has a colour distinct from the foliage. Green apples, however, do not belong to that category. A pipeline for apple segmentation is presented in [7]; in order to control the illumination, the authors collected data at night with an artificial light source. Their algorithm detects red apples using colour features and green apples using specular reflection features.

The fruit classification framework presented here has been designed with the aim of automatically adapting the feature sets to different fruits; the feature extraction and classification rules are all obtained via learning. Our approach does not require domain specific assumptions and can therefore be applied to different types of trees.
3 Fruit Classification

Green apple detection using vision is challenging due to the low contrast between the fruit and the background foliage. In addition, outdoor environments present extra difficulties due to variations in illumination caused by occlusion and changing weather conditions. This paper applies the fruit segmentation algorithm previously demonstrated on almond trees [2] to apple segmentation and counting. The learning based approach allows the algorithm to be applied to fruit of different appearance. This section provides a brief description of the algorithm; the interested reader is referred to [2] for details.
3.1 Image Modelling using Conditional Random Fields

This paper models the image data using a Conditional Random Field (CRF) framework [10]. The graphical model for an image consists of a two dimensional lattice G = ⟨V, E⟩, where V is the set of pixels representing the vertices of the graph and E is the set of edges modelling the relationships between neighbouring pixels. Image segmentation is performed by assigning to every pixel x_i ∈ V a meaningful label l_i ∈ L. For multi-class image segmentation the label set may contain up to k classes, L = {1, ..., k}. The optimal labelling l* is obtained via energy minimisation on the graph structure G, with the energy function defined as in Eq. 1:

E(l) = ∑_{i∈V} ψ_i(l_i, x_i) + ∑_{(i,j)∈E} ψ_{ij}(l_i, l_j, x_i, x_j) − log Z(x)    (1)

where ψ_i(l_i, x_i) is the unary potential, which models the likelihood of a pixel taking a certain label; ψ_{ij}(l_i, l_j, x_i, x_j) is the pairwise potential, which models the assumption that neighbouring pixels should take the same label; and Z(x) is the partition function. Conventionally the unary potential is computed using features in the image, for example grey level intensity [11], colour [12] or texture [13]. In our approach the unary potential is generated using the multi-scale features learnt from the image dataset.
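As an illustration of Eq. 1, the sketch below evaluates the energy of a given labelling on a 4-connected lattice, assuming a simple Potts pairwise potential that charges a constant penalty β whenever neighbouring labels disagree; the paper does not commit to this exact pairwise form, and the function and parameter names are hypothetical.

```python
import numpy as np

def crf_energy(unary, labels, beta=1.0):
    """Evaluate the CRF energy of Eq. 1 (up to the constant log Z(x)).

    unary  : (H, W, K) array of unary potentials psi_i(l_i, x_i),
             e.g. negative log class probabilities from a classifier.
    labels : (H, W) integer label image l.
    beta   : weight of an assumed Potts pairwise term, which charges a
             constant cost whenever 4-connected neighbours disagree.
    """
    H, W, K = unary.shape
    rows, cols = np.indices((H, W))
    # Sum of unary potentials for the chosen labelling.
    e_unary = unary[rows, cols, labels].sum()
    # Potts pairwise term over horizontal and vertical lattice edges.
    e_pair = beta * (np.sum(labels[:, 1:] != labels[:, :-1]) +
                     np.sum(labels[1:, :] != labels[:-1, :]))
    return e_unary + e_pair
```

In practice it is the minimiser of this energy, rather than the energy value itself, that is sought, typically via graph cuts or message passing [10].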
3.2 Multi-Scale Feature Learning

This section provides an overview of our feature learning approach. We first apply a sparse autoencoder [14] at different scales to obtain multi-scale feature dictionaries. A logistic regression classifier is then used to learn the label association to the multi-scale responses. The classifier output is then passed into the CRF as the unary term described in Eq. 1 for multi-class image segmentation.

The first step uses unsupervised feature learning to capture features from an unlabelled dataset. In this paper a sparse autoencoder is used to learn the dictionaries from randomly sampled image patches. This process is then repeated on images sub-sampled at different scales. Given the low dimensional dictionary code obtained by unsupervised feature learning, an additional supervised label assignment step can be used to train a classifier. In this paper a softmax regression classifier is used, because it can be formulated as a single layer perceptron and trained using back-propagation. Lastly, the sparse autoencoder used for unsupervised feature learning and the softmax regression classifier are joined into a single network to fine-tune the network parameters.
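As a rough sketch of the encoding stage, the fragment below computes dense multi-scale features by encoding the patch around each pixel at several scales and concatenating the codes. It assumes a sigmoid encoder whose weights W and b would come from sparse autoencoder training (here they are stand-ins), and all names are illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import zoom

def encode(patch_vec, W, b):
    # Encoder half of the autoencoder: sigmoid(W x + b). In the real
    # pipeline W and b come from sparse autoencoder training.
    return 1.0 / (1.0 + np.exp(-(W @ patch_vec + b)))

def multiscale_features(image, W, b, patch=8, scales=(1.0, 0.5, 0.25)):
    # Dense per-pixel features: encode the patch around each pixel at
    # several image scales, then upsample and concatenate the codes.
    H, W_img = image.shape[:2]
    feats = []
    for s in scales:
        small = zoom(image, (s, s, 1), order=1)       # sub-sampled image
        pad = patch // 2
        padded = np.pad(small, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
        code = np.zeros((small.shape[0], small.shape[1], b.size))
        for y in range(small.shape[0]):
            for x in range(small.shape[1]):
                code[y, x] = encode(padded[y:y+patch, x:x+patch].ravel(), W, b)
        # Upsample the code map back to the full image resolution.
        feats.append(zoom(code, (H / code.shape[0], W_img / code.shape[1], 1), order=1))
    return np.concatenate(feats, axis=-1)             # (H, W, n_scales * n_hidden)
```

The concatenated per-pixel codes would then be fed to the softmax classifier, whose class probabilities supply the CRF unary term of Eq. 1.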
3.3 Fruit Counting

With the apple segmentation output, two approaches are used to estimate the fruit count. The first is to count the total number of fruit pixels per side of a row and use the pixel count to infer the actual fruit count; the second is to perform circle detection using the circular Hough transform [15] to estimate the actual fruit count directly. Prior to detection, image erosion is applied to remove noise and dilation is applied to join partially occluded fruit.
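A minimal sketch of this counting step, using OpenCV and assuming a per-pixel apple probability map from the classifier; the threshold, kernel sizes and Hough parameters are illustrative, not the values tuned for this dataset.

```python
import cv2
import numpy as np

def count_apples(prob_map, threshold=0.95):
    """Count fruit in a per-pixel apple-probability map (values in [0, 1])."""
    # Threshold the class probability map to a binary mask.
    mask = (prob_map > threshold).astype(np.uint8) * 255
    # Erode to remove small false-positive specks, then dilate to
    # re-join fragments of partially occluded fruit.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.erode(mask, kernel)
    mask = cv2.dilate(mask, kernel, iterations=2)
    # Circular Hough transform; the radius bounds should bracket the
    # expected fruit size in pixels (illustrative values only).
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1, minDist=15,
                               param1=50, param2=10, minRadius=5, maxRadius=40)
    return 0 if circles is None else circles.shape[1]
```

The erosion/dilation pair trades the removal of small false positives against re-joining fruit fragments, which matches the pre-processing order described above.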
4 Experimental Setup

The experiments were conducted using Shrimp, a general purpose perception research ground vehicle. The system is equipped with a variety of localisation, ranging and imaging sensors that are commonly used in machine perception, together with soil specific sensors for agricultural applications; see Fig. 1(a). Data from all sensors were gathered from a single 0.5 ha block at a commercial apple orchard near Melbourne, Victoria, Australia, shown in Fig. 3. The data collection was performed one week before harvest, with apple diameters ranging from 70 to 76 mm.

The orchard employs a modern Güttingen V trellis structure, whereby the crop is arranged in pairs of 2D planar trellises forming a 'V' shape. The structure was designed to maximise sunlight on the leaves and to promote efficient manual harvesting, and would potentially be an appropriate structure for future robotic harvesters. The Shrimp robot acquired data while driving between the rows, as shown in Fig. 1(b). All fifteen rows in Fig. 3 were scanned, with a trajectory length of 3.1 km and a total of 0.5 TB of data. The data collection took approximately 3 hours. The experiments in this paper were conducted using the single front facing
camera of the Ladybug3 panospheric camera, seen at the top of the robot, because this camera has a sufficiently wide field of view to see the entire trellis face while scanning from the centre of a row, as shown in Fig. 1(b). The data was acquired continuously at 5 Hz, with an individual image resolution of 1232 × 1616 pixels. The other sensors are used as part of research into tree segmentation and farm modelling.
4.1 Image Analysis

Two sets of experiments were performed in this study. The experiments were designed to test the generalisation of the algorithm for pixel classification, and to verify that the resulting fruit count correlated with the ground truth. Multi-class segmentation was used to test the generalisation of the algorithm, and binary apple/non-apple segmentation was used to evaluate the fruit counting performance.

The dataset consists of over 8000 images with a resolution of 1232 × 1616 pixels. To simplify the labelling process, each image was divided into 32 sub-images of 308 × 202 pixels. Overall, 90 sub-images were hand labelled to pixel accuracy with multi-class labels, and 800 sub-images were labelled to pixel accuracy with binary apple/non-apple classes. These labelled sub-images were used for training the segmentation algorithm and for 2-fold cross validation. Examples of the labelled images are shown in Fig. 2.
Fig. 2 Examples of image labels (sub-images with their binary and multi-class ground truth): 90 sub-images were hand labelled to pixel accuracy with multi-class labels, and 800 sub-images were labelled to pixel accuracy with binary apple/non-apple classes.
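For concreteness, the tiling used for labelling can be reproduced in a few lines; the helper below is a hypothetical sketch that splits each frame into the grid of 308 × 202 pixel sub-images described above (the row/column orientation is an assumption).

```python
def split_subimages(image, sub_h=202, sub_w=308):
    # Split a frame into non-overlapping sub-images for hand labelling.
    # A 1616 x 1232 frame yields 8 x 4 = 32 sub-images of 202 x 308.
    h, w = image.shape[:2]
    return [image[y:y + sub_h, x:x + sub_w]
            for y in range(0, h - sub_h + 1, sub_h)
            for x in range(0, w - sub_w + 1, sub_w)]
```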
Apart from the individual per-class pixel classification accuracy, three evaluation metrics were applied: global accuracy, average accuracy and the F-measure. The global accuracy measures the fraction of correctly classified pixels over the entire dataset, whereas the average accuracy measures the average performance over all classes. The F-measure was computed by averaging the F-measures of the individual classes.
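These three metrics follow directly from a confusion matrix; a minimal sketch, assuming every class occurs at least once in the ground truth so no division by zero arises:

```python
import numpy as np

def segmentation_metrics(cm):
    """Metrics from a confusion matrix cm where cm[i, j] counts pixels
    of true class i predicted as class j."""
    tp = np.diag(cm).astype(float)
    # Global accuracy: fraction of all pixels correctly classified.
    global_acc = tp.sum() / cm.sum()
    # Average accuracy: per-class recall, averaged over classes.
    recall = tp / cm.sum(axis=1)
    average_acc = recall.mean()
    # F measure: harmonic mean of precision and recall per class, averaged.
    precision = tp / cm.sum(axis=0)
    f1 = 2 * precision * recall / (precision + recall)
    return global_acc, average_acc, f1.mean()
```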
4.1.1 Multi-class Fruit Segmentation

The first experiment performed multi-class segmentation with the same settings as [2], classifying fruit, leaves, branches, sky and ground. The main difference was that the fruit in this paper were apples instead of almonds. The objective of this experiment was to test the generalisation of the multi-class segmentation algorithm on different fruits, as well as on the same fruit (apple) with different colours (red and green).
4.1.2 Apple Counting

The second experiment was designed to evaluate the fruit counting performance. With this objective in mind, the algorithm was re-trained to specialise in apple/non-apple binary classification. This was done because, in general, classification accuracy improves with a decreasing number of classes.

In addition to the binary class segmentation evaluation using the hand labelled image data, the apple farmer provided us with ground truth count and weight over several rows of the apple farm shown in Fig. 3, obtained using an automated post-harvest weighing and counting machine. The apple trees are grown on a V shaped trellis known as Güttingen V. Each row has two sides; the rows are numbered sequentially, and the A-sides are west facing whereas the B-sides are east facing. The ground truth counts were provided as totals for each side of each row, and were used to evaluate the correlation between the algorithm count and the ground truth.

To normalise the pixel counts provided by the algorithm, all the images and pixel class probabilities were first undistorted using the camera calibration data. Secondly, using the navigation solution, the algorithm sampled images every 0.5 metres along each row to minimise overlap. Finally, the total pixel count per side of a row was normalised by dividing by the number of images sampled.
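A sketch of the sampling and normalisation, with hypothetical names: `distances` is the along-row travelled distance of each image from the navigation solution, and `apple_masks` the thresholded per-pixel apple maps of the sampled images.

```python
import numpy as np

def sample_every(distances, spacing=0.5):
    """Pick image indices so that consecutive picks are at least
    `spacing` metres apart along the row, using the (assumed monotonic)
    travelled distance of each image from the navigation solution."""
    picks, last = [], -np.inf
    for i, d in enumerate(distances):
        if d - last >= spacing:
            picks.append(i)
            last = d
    return picks

def normalised_pixel_count(apple_masks):
    """Total apple pixels across the sampled images of one row side,
    divided by the number of images sampled."""
    return sum(int(m.sum()) for m in apple_masks) / len(apple_masks)
```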
Fig. 3 The map of the farm: the apple trees are grown on a V shaped trellis (Güttingen V). The rows are numbered sequentially; each row has two sides, and the A-sides face west whereas the B-sides face east. The robot surveyed the entire labelled area, and the farmer provided the ground truth apple count and weight for each row face for the majority of the surveyed area.
5 Results

5.1 Multi-class Fruit Segmentation

The multi-class segmentation results are shown in Fig. 4 and Table 1. The dataset consists of leaves, apples and tree trunks under different lighting conditions and at different scales. The algorithm was able to segment the various objects as well as the background scene (sky and ground). The confusion matrix shows that the majority of mis-classification occurs between the apple and leaf classes; this is largely due to the colour and texture similarity between leaves and green apples.

Table 2 also shows the overall performance in apple segmentation compared to the almond segmentation in [2], using the same multi-scale feature learning algorithm. The segmentation algorithm performed slightly better on the apple dataset (F score 87.3) than on the almond dataset (F score 84.8). The algorithm generalised well on two different fruit applications, and both show that fruit and leaves are the hardest to classify.
Fig. 4 Apple orchard multi-class segmentation qualitative results: This dataset was collected during an orchard surveying mission aiming to automate the yield estimation and harvesting process. The class colour code: Apples are red, leaves are green, branches are brown, sky is blue and the ground is yellow.
Table 1 Confusion matrix for pixel multi-class classification (%). Columns are the true classes and rows the predicted classes (each column sums to 100%). The classifier performs well on the leaves and sky classes; the majority of mis-classification occurs between leaves and green apples.

          leaves  apples  trunk  ground   sky
leaves      96.7    25.2   17.4     7.4   0.6
apples       2.0    71.0    7.7     0.1   0.1
trunk        0.5     2.2   71.8     1.9   0.1
ground       0.3     0.6    2.6    86.9   0.0
sky          0.4     1.1    0.5     3.6  99.1
Table 2 Multi-class segmentation performance (%) comparison between previous work on almonds [2] and this paper on apples. The learning algorithm performed well across different types of fruit with different appearances.

             Leaves  Fruits  Trunk  Ground   Sky  Global  Average  F Measure
Almond [2]     94.0    69.8   72.8    89.1  95.8    86.9     84.3       84.8
Apple          96.7    71.0   71.9    86.9  99.1    92.5     85.1       87.3
5.2 Apple Fruit Counting

Fig. 5 shows the binary apple/non-apple pixel classification results. The classification algorithm took an image collected by the robotic platform, shown in (a), and returned the probability of each pixel belonging to the apple class, shown in (b). The class probability map was then thresholded at 95%, with erosion applied to clean up the noise and dilation to join partially occluded apples, shown in (c), followed by apple detection using the circular Hough transform, shown in (d).

As expected, the classification accuracy improved as the number of classes was reduced from five down to only apple and non-apple. For the apple counting application the algorithm was re-trained to perform only binary classification between the apple and non-apple classes. The classification accuracy was 93.3% for the apple class and 87.7% for the non-apple class, compared to the multi-class experiment where the apple classification accuracy was 71.0%.

The normalised pixel count was computed by summing the apple pixels in the images for each side of the rows and dividing by the number of images. Separately, the apple count was generated by detecting circular regions in the class map using the circular Hough transform. Overall the algorithm undercounted the fruit due to shading and occlusion; however, the undercounting was consistent. This is demonstrated in the comparison between the pixel and fruit counts produced by the algorithm and the ground truth fruit count and weight, shown in Fig. 6. Positive correlations can be observed in all plots, with the strongest correlation (R² = 0.810) between the algorithm fruit count and the ground truth fruit count. Overall, higher correlation was observed between the algorithm and the ground truth fruit count than the ground truth weight.

Finally, the algorithm fruit count can be calibrated using the linear equation relating the ground truth to the algorithm output. The calibrated fruit count is shown in Fig. 7. The algorithm and ground truth match well for the first 10 sides, with a poor match for three sides: 8a, 7b and 6a in particular.
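The calibration step amounts to a least-squares line fit between algorithm output and ground truth on the surveyed row sides; a minimal sketch, fitting ground truth as a function of algorithm count, with purely illustrative numbers:

```python
import numpy as np

def calibrate(algorithm_counts, ground_truth_counts):
    """Fit the linear relation between algorithm and ground truth counts
    on row sides with known yield, returning a calibration function."""
    # Least-squares line: ground_truth ~ a * algorithm + b.
    a, b = np.polyfit(algorithm_counts, ground_truth_counts, 1)
    return lambda counts: a * np.asarray(counts) + b

# Example: fit on surveyed row sides, then map raw algorithm counts
# to calibrated yield estimates (numbers are illustrative).
cal = calibrate([3000, 5000, 7000], [4000, 7500, 11000])
print(cal([4200, 6100]))
```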
6 Discussion

The results showed that the algorithm performed well on the apple fruit segmentation tasks. In this paper the multi-class segmentation achieved 71.0% accuracy on the apple class, and when re-trained to specialise in apple/non-apple classification the accuracy improved to 93.3%. This paper also showed that the apple count can be estimated using the pixel classification output. Although the algorithm undercounted the apples due to factors such as occlusion and shading, the algorithm output was consistent. The results showed a high correlation between the algorithm fruit count and the ground truth fruit count provided by the apple farmer.

There were several outliers, and it is ongoing work to determine the root cause in these cases in order to make the algorithm more robust. The farmer also reported logistical problems with the harvest weighing procedure, as it is not standard practice to
Fig. 5 Apple pixel classification: a) original image; b) classifier confidence output, with higher confidence shown in red and lower confidence in blue; c) confidence thresholded at 0.95, with morphological operations applied to clean up the noise; d) apple detection with the circular Hough transform.
Fig. 6 Algorithm counts versus ground truth counts, one point per row side: normalised pixel count vs ground truth fruit count (R² = 0.806); normalised pixel count vs ground truth fruit weight (R² = 0.742); algorithm fruit count vs ground truth fruit count (R² = 0.810); algorithm fruit count vs ground truth fruit weight (R² = 0.726). Positive correlation is observed in all plots. The strongest correlation occurred between the algorithm fruit count and the ground truth fruit count, demonstrating the algorithm's ability to infer actual fruit count. The row naming convention follows the map shown in Fig. 3.
Fig. 7 Calibrated yield estimation and ground truth for green and red apples, per row side: the algorithm and ground truth fruit counts matched well for the first 10 sides. The row naming convention follows the map shown in Fig. 3.
measure the yield per row, and we are working together to produce the most accurate ground truth for the next season.

This work focused on fruit segmentation and counting in outdoor natural lighting conditions. The same method could be applied under natural or controlled lighting, as long as training data were supplied for the chosen condition. We expect the performance of this method to be higher in controlled conditions than under natural lighting, due to the increased stability of the appearance of the fruit, but this remains to be verified experimentally.

One limitation of the algorithm is that it does not currently run in real time; instead it runs at 30 seconds per frame. This is acceptable for yield estimation tasks but will need further optimisation for future autonomous harvesting applications. There are two potential solutions. The first is to down-sample the images: the current image resolution of 1232 × 1616 pixels is more than enough to resolve large fruit such as apples. The second is to process a smaller subset of frames with reduced overlap, while still maintaining full coverage.
7 Conclusion

This paper presented an approach to multi-class image segmentation, applied to apple fruit segmentation. The algorithm had previously been applied successfully to almond segmentation, and this paper showed that, given new training data, it generalises to apple segmentation without modification. In addition, the experiments showed that by sampling the classification output with the aid of the navigation data, the algorithm is able to provide reliable apple yield estimation. Compared to existing work, the algorithm developed in this paper works in natural lighting conditions and detects both red and green apples using images from a monocular camera.

Acknowledgements We would like to thank Kevin Sanders for his support during the field trials at his apple orchard. This work is supported in part by the Australian Centre for Field Robotics at the University of Sydney and Horticulture Australia Limited through project AH11009 Autonomous Perception Systems for Horticulture Tree Crops.
References

1. Q. Zhang and F. J. Pierce, Agricultural Automation: Fundamentals and Practices. CRC Press, 2013.
2. C. Hung, J. Nieto, Z. Taylor, J. Underwood, and S. Sukkarieh, "Orchard fruit segmentation using multi-spectral feature learning," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013.
3. D. Dey, L. Mummert, and R. Sukthankar, "Classification of plant structures from uncalibrated image sequences," in IEEE Workshop on Applications of Computer Vision (WACV), 2012, pp. 329–336.
4. S. Nuske, S. Achar, T. Bates, S. Narasimhan, and S. Singh, "Yield estimation in vineyards by visual grape detection," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2011, pp. 2352–2358.
5. A. Payne, K. Walsh, P. Subedi, and D. Jarvis, "Estimation of mango crop yield using image analysis–segmentation method," Computers and Electronics in Agriculture, vol. 91, pp. 57–64, 2013.
6. D. Bulanon, T. Burks, and V. Alchanatis, "Image fusion of visible and thermal images for fruit detection," Biosystems Engineering, vol. 103, no. 1, pp. 12–22, 2009.
7. Q. Wang, S. Nuske, M. Bergerman, and S. Singh, "Automated crop yield estimation for apple orchards," in 13th International Symposium on Experimental Robotics, 2012.
8. A. Aggelopoulou, D. Bochtis, S. Fountas, K. C. Swain, T. Gemtos, and G. Nanos, "Yield prediction in apple orchards based on image processing," Precision Agriculture, vol. 12, no. 3, pp. 448–456, 2011.
9. A. Jimenez, R. Ceres, J. Pons et al., "A survey of computer vision methods for locating fruit on trees," Transactions of the ASAE, vol. 43, no. 6, pp. 1911–1920, 2000.
10. R. Szeliski, R. Zabih, D. Scharstein, O. Veksler, V. Kolmogorov, A. Agarwala, M. Tappen, and C. Rother, "A comparative study of energy minimization methods for Markov random fields with smoothness-based priors," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1068–1080, 2007.
11. Y. Boykov and M. Jolly, "Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images," in Eighth IEEE International Conference on Computer Vision (ICCV), vol. 1, 2001, pp. 105–112.
12. C. Rother, V. Kolmogorov, and A. Blake, "GrabCut: interactive foreground extraction using iterated graph cuts," ACM Transactions on Graphics, vol. 23, no. 3, pp. 309–314, 2004.
13. L. Ladicky, C. Russell, P. Kohli, and P. Torr, "Associative hierarchical CRFs for object class image segmentation," in IEEE 12th International Conference on Computer Vision (ICCV), 2009, pp. 739–746.
14. H. Lee, C. Ekanadham, and A. Ng, "Sparse deep belief net model for visual area V2," Advances in Neural Information Processing Systems, vol. 19, 2007.
15. T. J. Atherton and D. J. Kerbyson, "Size invariant circle detection," Image and Vision Computing, vol. 17, no. 11, pp. 795–803, 1999.