Web Image Mining Age Estimation Framework

ICGST-GVIP Journal, Volume 11, Issue 1, March 2011

Web Image Mining Age Estimation Framework Mohamed Y. Eldib and Hoda M. Onsi Faculty of Computers and Information, Cairo University, Egypt [email protected], [email protected]


Abstract
In this paper, we introduce a fully automated age estimation engine that collects images from the Flickr photo sharing website using human age related text queries; the collection covers various ancestry groups and image qualities, and 37,000 images were downloaded in this step. We use the Active Shape Model for robust face detection; it also acts as a removal step for non-face images. After that, we use the bio-inspired features (BIF) to extract the facial aging information. We introduce a universal labeler algorithm to label the Flickr images automatically. Finally, we use the web image collection as a training dataset and the standard databases as testing datasets, showing the superiority of our proposed web image mining algorithm over the state-of-the-art methods.

In this paper, we demonstrate the following contributions: (1) an automatic web image collector that is capable of downloading thousands of images from the Flickr photo sharing website using human age related text queries; (2) noise removal and face detection using the Active Shape Model (ASM) [4]; (3) the bio-inspired features (BIF) [1] applied to non-standard unlabeled datasets for age feature extraction; (4) automatic face cropping; and (5) a universal labeler algorithm for the task of labeling the unlabeled images. The paper is organized as follows. In section 2, we present the related work. Section 3 gives an overview of the proposed web image mining age estimation algorithm. Section 4 presents the main components of the preprocessing stage in detail. Section 5 introduces the robust universal labeler algorithm. Section 6 describes our results on the publicly available and commonly used FG-NET [5] and MORPH [6] face aging databases. Finally, section 7 contains the conclusion.

Keywords: Human age estimation, Active Shape Model, Facial feature extraction.

1. Introduction
Internet images are not captured or tagged at random: the photographer captures images to remember or document moments with family members or friends, and shares them on the internet with descriptive titles or tags. These images form a rich repository for many computer vision problems such as gender recognition, face recognition and age estimation. There has been relatively little work, to date, on automatic age estimation, despite potentially useful applications such as human computer interaction age estimation (HCI-AE) systems [1][2] or electronic customer relationship management (ECRM) [3]. Hence, there is a need for a huge human age estimation database with true age labels to build a better age estimator.

2. Related work
Existing age estimation frameworks consist of two stages: image representation and age estimation. For aging image representation, the anthropometric model [7], active appearance model (AAM) [8], AGing pattErn Subspace (AGES) [9][10], age manifold [11][12], patch-based appearance model [13][14], tensorial representation from Gaussian receptive fields [22], and bio-inspired features (BIF) [1] are the most popular


ones. Given an aging feature representation, the next step is to estimate ages. Age estimation can be viewed as classification-based [7][9][11][12][19][23] or regression-based [1][12][15][16][20][22][23][24]. Xin Geng et al. [26] give an in-depth tutorial on human age estimation techniques and the performance evaluation of each of them. These approaches have achieved satisfactory human age estimation results on certain benchmark aging datasets such as the FG-NET [5], MORPH [6] and YGA [7] databases, but these datasets lack generalization: age estimation approaches are evaluated on small datasets, because it is usually hard, or even impractical, to collect a large database with a large number of subjects who can each provide a series of personal images at different ages. Moreover, most age estimation approaches depend on manual annotation of each image in the dataset; for example, FG-NET provides each face image with 68 labelled points characterizing shape features. This prevents cropping of non-annotated images, which leads to the so-called face misalignment issue. Ni et al. [18] were the first to address the limitations of the standard datasets. They proposed robust multi-instance regression, which can deal with label outliers when building a universal age estimator from a large web-collected database; this system achieves reasonable age estimation accuracy with automatic online training. Metric learning has also been considered to discover the latent semantic similarity between age data, which can potentially improve the regression results. Their web-collected database contains 219,892 faces (in 77,021 images) gathered from the Flickr.com and Google.com image search engines, with age labels distributed from 1 to 80 based on age related queries; it is the largest database ever reported for age estimation from faces.

A further challenge is the presence of multiple faces in the same image, combined with possibly incorrect image labels. This motivated us to design the universal labeler algorithm for efficient and effective image labeling.

Figure 1. Web image mining

3. System overview
A system overview is illustrated in figure 1. The framework is divided into two main stages: a) the preprocessing stage, which includes four blocks: 1) a web image collector that harnesses a photo sharing website such as Flickr for web image collection with possibly correct age labels; 2) noise filtering and face detection using the Active Shape Model (ASM); 3) precise face cropping; and 4) age feature extraction using the bio-inspired features (BIF); and b) the processing stage, which includes two blocks: 1) the universal labeler algorithm, which labels images with nearly-correct age labels; and 2) a web image mining age estimator trained on the web image collection using classification-based and regression-based models, requiring no manual human interaction.

4. Preprocessing stage
4.1. Web image collector
The internet aging image collection is performed by automatically crawling images from photo sharing websites such as Flickr based on a set of age related text queries. In this step, we limit the results by taking the first 500 images for each queried age; generally, more than 5,000 images can be downloaded for each age group. The total number of downloaded images is 37,000, for ages ranging from 0 to 70 years old. We have noticed that a large number of images do not contain any faces, because the web image collector may return images of places, objects and animals based on the uploaded personal media data with informative titles and tags. Thus, the age information is often attached to such irrelevant images, as shown in figure 2.
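The collection step above (age related text queries, first 500 results per queried age) can be sketched as follows. The `fetch` function is a hypothetical stand-in for a real photo-sharing API call, so only the query and limit logic is meant literally:

```python
# Sketch of the age-related query generation used by the web image
# collector (section 4.1). `fetch(query)` is a hypothetical stand-in
# for a Flickr API call returning an iterable of image URLs.

def age_queries(max_age=70):
    """Build age-related text queries such as '31 years old'."""
    return ["%d years old" % age for age in range(0, max_age + 1)]

def collect(fetch, max_age=70, per_age_limit=500):
    """Download up to `per_age_limit` image URLs per queried age."""
    collected = {}
    for query in age_queries(max_age):
        # Keep only the first 500 results for each queried age.
        collected[query] = list(fetch(query))[:per_age_limit]
    return collected

# Toy fetch that pretends every query matches 600 images.
demo = collect(lambda q: ("img_%s_%d.jpg" % (q, i) for i in range(600)))
```

With 71 queries (ages 0 to 70) capped at 500 results each, this bounds the raw collection at roughly 35,500 URLs, in line with the 37,000 images reported.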

There are some challenging issues that we have addressed in this work:
- The limited number of pictures in the existing databases opened the door to using the internet to collect images with correct true labels. Many sites such as Flickr host a large set of images that can be retrieved with human age queries across various ancestry groups.
- The misaligned faces can be handled by using the Active Shape Model (ASM) to locate the correct facial landmarks in the face images.


The BIF have two layers: 1) the S layer; and 2) the C layer, as shown in figure 4.

Figure 4. Age feature extraction

4.3.1. S1 Layer
The S layer is created by Gabor filtering the cropped image output from the Active Shape Model (ASM) block [4], with 8 orientations and 16 scales. The Gabor function for a particular scale (σ) and orientation (θ) is described by the following equation:

Figure 2. An illustration of relevant/irrelevant images resulting from an age related text query (query: “31 years old”)

4.2. Noise filtering and face detection
In this step, we filter out the noisy images (those without faces) and keep the ones that contain human faces; those faces must be detected robustly. We use the Active Shape Model for noise removal and face detection, and the detected faces are then cropped. We aim to accurately localize the facial region and extract faces only from the relevant parts of the input image using the Active Shape Model [4], which has two main stages, namely training and fitting. In the training stage, we manually locate facial landmark points for hundreds of images [17], such that each landmark represents a distinguishable point present in every example image; we use the 75 points provided by [17], which cover the whole face. We then build a statistical shape model from these annotated images. Having built the shape model, points on an incoming face image are fitted: first, we filter out the noisy images by detecting the face(s) in the input image; second, we initialize the shape points and perform image alignment fitting to obtain the detected shape features automatically; finally, the input image is cropped to the area covered by the ASM-fitted landmark points, as shown in Figure 3.
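The final cropping step can be illustrated with a minimal bounding-box crop over fitted landmark points. This is a simplified sketch, not the paper's exact cropping rule:

```python
# Minimal sketch: crop a face to the region covered by fitted ASM
# landmark points (a simple bounding-box crop over the landmarks).

def crop_to_landmarks(image, points):
    """image: 2D list of rows; points: list of (x, y) landmark tuples."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, x1 = min(xs), max(xs) + 1   # inclusive column range
    y0, y1 = min(ys), max(ys) + 1   # inclusive row range
    return [row[x0:x1] for row in image[y0:y1]]

# Toy 10x10 "image" whose pixels record their own (row, col) position.
img = [[(r, c) for c in range(10)] for r in range(10)]
crop = crop_to_landmarks(img, [(2, 3), (6, 4), (4, 8)])
```

In practice the ASM fit yields 75 landmark points per face, and the same bounding-box idea extends directly.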

 

G(x, y) = exp(−(X² + γ²Y²) / (2σ²)) · cos(2πX / λ)    (1)

where X = x cos θ + y sin θ and Y = −x sin θ + y cos θ.
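For illustration, the Gabor function of Eq. (1) can be evaluated directly. The numpy sketch below uses the standard parameterization; the example values of σ and λ are assumptions for the demonstration, not the paper's tuned settings:

```python
# Minimal numpy sketch of the Gabor function of Eq. (1), assuming the
# standard parameterization (sigma: effective width, theta: orientation,
# lambda_: wavelength, gamma: aspect ratio). Not the authors' code.
import numpy as np

def gabor(x, y, sigma, theta, lambda_, gamma=0.3):
    X = x * np.cos(theta) + y * np.sin(theta)
    Y = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(X**2 + (gamma * Y)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * X / lambda_)
    return envelope * carrier

# Build one 3x3 filter (the starting filter size used in the paper);
# sigma=1.0 and lambda_=4.0 are illustrative values only.
ys, xs = np.mgrid[-1:2, -1:2]
kernel = gabor(xs, ys, sigma=1.0, theta=0.0, lambda_=4.0)
```

At the filter origin X = Y = 0, both the Gaussian envelope and the cosine carrier equal 1, so the kernel center is exactly 1.0.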

We adjust θ to vary between 0 and π. The parameters λ (wavelength), σ (effective width), and γ (aspect ratio) = 0.3 are based on the work in [1]. The starting filter size of 3×3 was chosen through experimental selection as best revealing facial characteristics.

4.3.2. C1 Layer
The Gabor filtered outputs can serve as candidate features for the age estimation problem. However, they are of very high dimension, leading to difficulties in training, and there are redundancies among the Gabor filter outputs. Hence, a usually adopted scheme is to summarize the outputs of the Gabor filters using some statistical measures. Here, we adopt the second layer, C, that is used in [1] and proven to work quite well. It has two operations, namely the maximum “MAX” and standard deviation “STD”, with a variation on the MAX definition that avoids image subsampling in order to keep local variations, which might be important for characterizing facial details (e.g., wrinkles, creases and muscle drop).

F^max = max(x_j, x_{j+1})

Figure 3. An exemplary result from the Active Shape Model (ASM) block, where row (a) shows the original images and row (b) the corresponding cropped results.

(2)

where F^max corresponds to the maximum of the filtered values of two adjacent scales within the same band of the S layer, taken at each pixel.

4.3. Age feature extraction
For the automatically collected and cropped face images, it is important to have a robust feature representation for good final age estimation performance. We use the bio-inspired features (BIF) for their promising performance in the task of age estimation.

std = sqrt( (1 / (N·N − 1)) · Σ_{i=1}^{N·N} (F_i − F̄)² )    (3)


where x_j and x_{j+1} are the filtered values at scales j and j+1 at a given pixel, and F̄ is the mean of the filtered values within a pooling grid of size N×N. The two nonlinear operations “MAX” and “STD” are applied for each orientation and each scale band independently. For instance, consider the first band, S_1: for each orientation, it contains two S maps, one obtained using a filter of size 3×3 and the other using a filter of size 5×5. The two maps have the same width and height but different values, because different filters are applied. First, we take a max over the two scales by recording only the maximum value from the two maps, leading to a maximum map through the “MAX” operation. Then, the “STD” operation is applied, as suggested in [1], on the maximum map using a cell grid of size N×N = 6×6; from each grid cell, a single measurement is obtained from its 36 elements. The C responses are computed with an interval size of ∆ in one direction; in our case, there is just one overlap between two neighboring grid cells in the vertical direction and zero overlap in the horizontal direction.
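The two pooling operations can be sketched in numpy as below (MAX over two adjacent scales, then one STD measurement per 6×6 cell). For simplicity, this sketch uses non-overlapping cells, whereas the paper overlaps neighboring cells in the vertical direction; it is illustrative, not the authors' implementation:

```python
# Sketch of the C layer pooling: elementwise MAX over two same-band
# S maps, then per-cell STD with the (N*N - 1) denominator of Eq. (3).
import numpy as np

def c_layer(s_a, s_b, cell=6):
    """Pool two same-band S maps into a grid of STD measurements."""
    m = np.maximum(s_a, s_b)            # "MAX" over the two scales
    h = (m.shape[0] // cell) * cell     # trim to a multiple of the cell
    w = (m.shape[1] // cell) * cell
    cells = m[:h, :w].reshape(h // cell, cell, w // cell, cell)
    # "STD": one measurement per cell (36 elements each for 6x6 cells);
    # ddof=1 matches the N*N - 1 denominator in Eq. (3).
    return cells.std(axis=(1, 3), ddof=1)

rng = np.random.default_rng(0)
a = rng.random((12, 12))
b = rng.random((12, 12))
out = c_layer(a, b)   # a 12x12 map yields a 2x2 grid of STD values
```

This turns each high-dimensional filtered map into a small grid of summary statistics, which addresses the dimensionality and redundancy issues noted above.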

5. Processing stage
5.1. Universal labeler algorithm
Nabil et al. [25] proposed a system to estimate age within certain ranges based on neural networks; it is the work most closely related to ours. Their system is composed of two steps: 1) age is classified into broad groups such as babies, young adults and senior adults; 2) each group is then classified into more specific age ranges. This approach does not consider the unlabeled-picture issue, and therefore its robustness cannot be guaranteed. A large portion of images contain multiple faces in the same image, so those faces cannot be labeled with confidence using the age related queries alone. That is the reason we present a universal labeler capable of labeling both multiple-face and single-face images. The main components of the proposed universal labeler (UL) are: 1) the Universal Generic Labeler (UGL); 2) the Universal Range Labeler (URL); 3) the Universal Assigning Labeler (UAL); and 4) the Universal Specific Labeler (USL). In the UL components, we group the unlabeled face images and the training images into a LabelGroup (LG) structure, where each LabelGroup holds three vectors: 1) a training data vector; 2) an unlabeled data vector; and 3) a processed data vector. For the training data vector, we combine the FG-NET [5] and MORPH [6] datasets using the selection criteria in [21] to build six SVR models and one SVM model using the experimentally selected parameters provided in Table 1. There are seven age groups with different age ranges, as illustrated in Table 2, so the training data vector may contain one or more age groups used for building the training models; Table 3 shows the relation between the training data vector and the age groups in the different LabelGroups (LG). The unlabeled data vector contains the cropped face images that are used as input to the trained models. Having built the training models, the processed data vector is the output: it contains the cropped face images labeled with the average of the ages estimated by the trained models of each LabelGroup (LG).

SVR: C = 1, 5, 30, 60, 240, 1000; γ = 0.000308; ε = 0.1
SVM: C = 1000; γ = 0.002

Table 1. SVR and SVM model parameters

Figure 5 gives a detailed explanation of the labeling process for each component. In the UGL, the cropped face dataset images (the unlabeled data vector) are grouped in LG1, and the LG1 models are trained on all age groups (the training data vector). Running the unlabeled data vector through the trained models produces two LabelGroups (the processed data vector): 1) LG2 contains the face images estimated to be less than 30 years old, with age groups (0-9, 10-19 and 20-29); and 2) LG3 contains the remaining face images, greater than 30 years old, with age groups (30-39, 40-49, 50-59 and 60-69).

Range Level (RL)   Age Group
1                  0-9
2                  10-19
3                  20-29
4                  30-39
5                  40-49
6                  50-59
7                  60-69

Table 2. Range levels with their corresponding age groups

After this stage, we use two instances of the URL, one for the grouped images in LG2 (unlabeled data vector), where the models are trained on range levels 1, 2 and 3 (training


data vector). The images in LG2 are grouped further into two LabelGroups (processed data vector): 1) LG4 contains the face images less than 15 years old, with age groups (0-9 and 10-19); and 2) LG5 contains the other grouped face images, greater than 15 years old, with the remaining age groups (10-19 and 20-29). In the second instance of the URL, the images are grouped in LG3 (unlabeled data vector) and the models are trained on range levels 4, 5, 6 and 7 (training data vector). The images in LG3 are separated into two more LabelGroups (processed data vector): 1) LG6 contains the face images less than 50 years old, with age groups (30-39 and 40-49); and 2) LG7 contains the other face images, greater than 50 years old, with the remaining age groups (50-59 and 60-69).

Figure 5. Detailed labeling

LabelGroup (LG)   Range Level (RL)
LG1               1,2,3,4,5,6,7
LG2               1,2,3
LG3               4,5,6,7
LG4               1,2
LG5               2,3
LG6               4,5
LG7               6,7
LG8               1
LG9               2
LG10              3
LG11              4
LG12              5
LG13              6
LG14              7

Table 3. LabelGroup (LG) with their corresponding range levels (RL)

Next, we use four instances of the UAL, one for each of the grouped image sets LG4 to LG7 (unlabeled data vectors). In the first instance, the models are trained on range levels 1 and 2 (training data vector), so the LG4 images are separated into two LabelGroups (processed data vector): 1) LG8 contains the face images less than 10 years old, with age group (0-9); and 2) LG9 contains the other face images, greater than 10 years old, with age group (10-19). In the second instance of the UAL, the images are grouped in LG5 (unlabeled data vector) and the models are trained on range levels 2 and 3 (training data vector), so the LG5 cropped face images are separated into: 1) LG9, containing the face images less than 20 years old, with age group (10-19); and 2) LG10, containing the other face images, greater than 20 years old, with age group (20-29). In the third instance of the UAL, the images are grouped in LG6 (unlabeled data vector) and the models are trained on range levels 4 and 5 (training data vector), so the LG6 images are separated into: 1) LG11, containing the face images less than 40 years old, with age group (30-39); and 2) LG12, containing the other face images, greater than 40 years old, with age group (40-49). In the last instance of the UAL, the images are grouped in LG7 (unlabeled data vector) and the models are trained on range levels 6 and 7 (training data vector), so the LG7 images are separated into: 1) LG13, containing the face images less than 60 years old, with age group (50-59); and 2) LG14, containing the other face images, greater than 60 years old, with age group (60-69).

Finally, we use seven instances of the USL, one for each LabelGroup LG8, LG9, LG10, LG11, LG12, LG13 and LG14, where the images grouped in each of them form the unlabeled data vectors. The models of each LabelGroup are trained on each range level separately, forming the training data vectors:

LG8:RL1, LG9:RL2, LG10:RL3, LG11:RL4, LG12:RL5, LG13:RL6, LG14:RL7    (4)

where LG:RL denotes the relation between each LabelGroup and range level. The cropped face images in each LG from the USL are finally labeled by taking the average over the estimated ages.
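The final labeling step, averaging the per-model age estimates within a LabelGroup, can be sketched with a hypothetical `estimate_age` regressor standing in for the trained SVR/SVM models:

```python
# Toy sketch of the universal labeler's final averaging step. The
# `estimate_age(model, face)` callable is a hypothetical stand-in for
# a trained per-LabelGroup regressor; only the averaging is literal.

def final_label(face, models, estimate_age):
    """Average the estimated ages from the models of the face's LG."""
    estimates = [estimate_age(m, face) for m in models]
    return sum(estimates) / len(estimates)

# Toy models that each return a fixed estimate for any face.
models = ["m1", "m2", "m3"]
est = {"m1": 24.0, "m2": 26.0, "m3": 28.0}
label = final_label("face.jpg", models, lambda m, f: est[m])
```

Averaging over several models trained on neighboring range levels dampens the effect of any single mislabeled training group.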

5.2. Web image mining age estimator
We crawled 37,000 images from the Flickr.com photo sharing website based on a set of age related


queries for ages ranging from 0 to 70. We detected a total of 9,250 faces using the Active Shape Model (ASM) block; about 3/4 of the images (the irrelevant ones) are removed. We labeled these faces using our universal labeler (UL). To build the final web image mining age estimator, we use classification-based and regression-based methods.
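The combination of classification-based and regression-based methods can be sketched schematically as a cascade (classify into a broad age group, then regress within the group). The `classify` and `regressors` stand-ins below are hypothetical placeholders for the trained SVM and SVR models:

```python
# Schematic sketch of a classification-then-regression cascade for age
# estimation. The classifier and regressors are toy stand-ins for the
# trained SVM/SVR models; only the two-stage structure is literal.

def cascade_estimate(features, classify, regressors):
    """classify(features) -> group id; regressors[group](features) -> age."""
    group = classify(features)          # stage 1: broad age group
    return regressors[group](features)  # stage 2: group-specific age

# Toy models: group 0 is "under 30", group 1 is "30 and over"; the
# 'wrinkle_score' feature name is invented for this illustration.
classify = lambda f: 0 if f["wrinkle_score"] < 0.5 else 1
regressors = {0: lambda f: 25.0, 1: lambda f: 45.0}
young = cascade_estimate({"wrinkle_score": 0.2}, classify, regressors)
old = cascade_estimate({"wrinkle_score": 0.9}, classify, regressors)
```

Restricting each regressor to one broad group lets it specialize in the aging cues typical of that range instead of fitting the whole 0-70 span at once.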

6. Experiments
6.1. Evaluation measures
We used two measures to evaluate age estimation performance: (1) the Mean Absolute Error (MAE) and (2) the Cumulative Score (CS). The MAE is defined as the average of the absolute errors between the estimated ages and the ground truth ages:

MAE = (1/N) · Σ_{k=1}^{N} | l̂_k − l_k |    (5)

Range Level  Range   #img.  UL     RMIR [18]  GKR [18]
1            0-9     372    1.59   10.98      21.98
2            10-19   338    2.45   8.15       20.69
3            20-29   144    5.30   6.05       15.69
4            30-39   79     10.15  7.92       8.96
5            40-49   46     14.93  13.42      6.40
6            50-59   15     13.93  22.75      9.54
7            60-69   8      18.62  29.96      14.69
Total                1002   3.97   9.49       18.46

Table 4. MAE (years) at different age groups for UL, RMIR and GKR on FG-NET


where l_k is the ground truth age for test image k, l̂_k is the estimated age, and N is the total number of test images. The cumulative score is defined as CS(j) = (N_{e≤j} / N) × 100%, where N_{e≤j} is the number of test images on which the age estimation makes an absolute error no higher than j years.

Range Level  Range   #img.  UL     RMIR [18]  GKR [18]
2            10-19   343    6.49   9.52       21.98
3            20-29   763    4.30   6.62       18.03
4            30-39   428    6.26   5.32       12.21
5            40-49   124    11.91  10.74      7.28
6            50-59   25     16.78  17.49      5.70
7            60-69   7      26.6   34.77      10.67
Total                1690   5.99   7.42       16.60

Table 5. MAE (years) at different age groups for UL, RMIR and GKR on MORPH

6.2. Datasets
We use the publicly available FG-NET [5] and MORPH [6] aging databases to evaluate the performance of our approach. FG-NET contains 1,002 face images of 82 subjects with ages ranging from 0 to 69, with large variations in lighting, pose, and expression. The MORPH database contains 1,690 face images of 515 subjects ranging in age from 15 to 68 years, covering men and women of various ancestry groups.

6.3. Results
Our proposed algorithm is trained on the constructed web image collection database (the cropped faces after labeling them with our universal labeler). The aging features are extracted using the bio-inspired features (BIF), with the cropped images resized to 59×80 gray-level images. We build six SVR models and one SVM model based on the experimentally selected parameters provided in Table 1, and train them on the web image collection database using a cascade of classification and regression as suggested in [21]. We measure the performance of the trained models using the FG-NET (1,002 images) and MORPH (1,690 images) aging databases as test sets. Experimental results are shown in Tables 4 and 5 for FG-NET and MORPH, respectively: the MAEs of our proposed method are 3.97 and 5.99, and our proposed universal labeler (UL) significantly outperforms the robust multi-instance learning based regressor (RMIR) [18] and the direct Gaussian kernel regression (GKR) [18]. The poor performance of RMIR is mainly caused by the absence of trained datasets in its learning process.

7. Conclusion
We used the internet as a rich repository for collecting images to build a global dataset that contains various ancestries with ages ranging from 0 to 70. First, we downloaded 37,000 images via a set of age related queries. Then, we filtered them using the Active Shape Model (ASM), ending up with a clean cropped database of 9,250 faces. Next, we extracted the aging features of the cropped faces using the bio-inspired features (BIF). Finally, a robust universal labeler was presented for labeling the cropped images. We showed experimentally the superiority of the proposed contributions: evaluated on the FG-NET and MORPH benchmark databases, our algorithm achieves high accuracy in estimating human ages compared to the published methods.

8. References
[1] G. Guo, G. Mu, Y. Fu, and T. S. Huang, "Human age estimation using bio-inspired features", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
[2] G. Guo, G. Mu, Y. Fu, C. R. Dyer, and T. S. Huang, "A study on automatic age estimation using a large database", IEEE International Conference on Computer Vision (ICCV), 2009.
[3] G. Guo, G. Mu, Y. Fu, C. R. Dyer, and T. S. Huang, "Is gender recognition influenced by age?", IEEE ICCV Workshop on Human Computer Interaction (HCI), 2009.
[4] T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, "Active Shape Models - their Training and Application", Computer Vision and Image Understanding, vol. 61, no. 1, pp. 38-59, Jan. 1995.
[5] FG-NET aging database, http://www.fgnet.rsunit.com/.
[6] K. Ricanek and T. Tesafaye, "MORPH: A longitudinal image database of normal adult age-progression", IEEE 7th International Conference on Automatic Face and Gesture Recognition, Southampton, UK, pp. 341-345, April 2006.
[7] Y. Kwon and N. Lobo, "Age classification from facial images", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 762-767, 1994.
[8] Y. Fu, Y. Xu, and T. S. Huang, "Estimating human ages by manifold analysis of face pictures and regression on aging features", IEEE International Conference on Multimedia & Expo (ICME), pp. 1383-1386, 2007.
[9] X. Geng, Z.-H. Zhou, and K. Smith-Miles, "Automatic age estimation based on facial aging patterns", IEEE Trans. Pattern Anal. Machine Intell., 29(12):2234-2240, 2007.
[10] X. Geng, Z.-H. Zhou, Y. Zhang, G. Li, and H. Dai, "Learning from facial aging patterns for automatic age estimation", Proc. 14th ACM Int'l Conf. Multimedia, pp. 307-316, 2006.
[11] A. Lanitis, C. Draganova, and C. Christodoulou, "Comparing different classifiers for automatic age estimation", IEEE Trans. on SMC-B, 34(1):621-628, 2004.
[12] G. Guo, G. Mu, Y. Fu, and C. R. Dyer, "Locally adjusted robust regression for human age estimation", IEEE Workshop on Applications of Computer Vision (WACV), 2008.
[13] S. Yan, X. Zhou, M. Liu, M. Hasegawa-Johnson, and T. Huang, "Regression from patch-kernel", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.
[14] S. Yan, M. Liu, and T. Huang, "Extracting age information from local spatially flexible patches", IEEE Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 737-740, 2008.
[15] J. Suo, S. Zhu, S. Shan, and X. Chen, "A Compositional and Dynamic Model for Face Aging", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009.
[16] A. Lanitis, C. Taylor, and T. Cootes, "Toward automatic simulation of aging effects on face images", IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 4, pp. 442-455, Apr. 2002.
[17] Y. Wei, "Research on Facial Expression Recognition and Synthesis", Master Thesis, Nanjing University, 2009.
[18] B. Ni, Z. Song, and S. Yan, "Web Image Mining Towards Universal Age Estimator", Proc. ACM Multimedia, 2009.
[19] M. Chandra Mohan, V. Vijaya Kumar, and A. Damodaram, "Adulthood Classification based on Geometrical Facial Features", ICGST International Journal on Graphics, Vision and Image Processing (GVIP), vol. 10, issue II, pp. 39-44, June 2010.
[20] S. Yan, H. Wang, T. S. Huang, and X. Tang, "Ranking with uncertain labels", IEEE Conference on Multimedia and Expo, pp. 96-99, 2007.
[21] M. Y. Eldib and H. M. Onsi, "Human age estimation framework using different facial parts", 2010.
[22] J. Ruiz Hernandez, J. Crowley, and A. Lux, "How old are you?: Age Estimation with Tensors of Binary Gaussian Receptive Maps", Proceedings of the British Machine Vision Conference, pp. 6.1-6.11, BMVA Press, September 2010. doi:10.5244/C.24.6.
[23] G.-D. Guo, Y. Fu, C. Dyer, and T. Huang, "Image-based human age estimation by manifold learning and locally adjusted robust regression", IEEE Trans. on Image Processing, vol. 17, no. 7, pp. 1178-1188, July 2008.
[24] L. Pan, "Human Age Estimation by Metric Learning for Regression Problems", Proceedings of the 7th International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR), pp. 455-465, 2009.
[25] N. Hewahi, A. Olwan, N. Tubeel, S. El-Asar, and Z. Abu-Sultan, "Age Estimation based on Neural Networks using Face Features", Journal of Emerging Trends in Computing and Information Sciences, vol. 1, issue II, pp. 61-67, October 2010.
[26] X. Geng, Y. Fu, and K. Smith-Miles, "Automatic Facial Age Estimation", Tutorials: The Pacific Rim International Conference on Artificial Intelligence (PRICAI), 2010.


Biographies
Eng. Mohamed Yehia Eldib received the B.E. degree from the information technology department of the faculty of computers and information (FCI), Cairo University, Egypt, in 2008. He is currently pursuing his M.S. degree in the information technology department, FCI. He has been working at Giza Systems Integration Company since May 2010 as a software testing engineer. He has published three research publications in various national and international journals and conferences.

Prof. Dr. Hoda Mohamed Onsi is Vice Dean for Higher Studies and Research, Faculty of Computers and Information, Cairo University (2008), and was Chairperson of the Information Technology Department, Faculty of Computers and Information, Cairo University (2006-2008). She received her Ph.D. degree in computer engineering from the Electronics and Communications Department, Faculty of Engineering, Cairo University, in 1994, and served as Acting Head of the Department of Data Reception, Analysis, and Receiving Station Affairs, National Authority for Remote Sensing and Space Sciences (NARSS), Cairo, Egypt (1994-2000).
