Appl. Comput. Math., V.14, N.2, 2015, pp.194-203

FITTING A RECTANGULAR FUNCTION BY GAUSSIANS AND APPLICATION TO THE MULTIVARIATE NORMAL INTEGRALS

HATEM A. FAYED1, AMIR F. ATIYA2, ASHRAF H. BADAWI3

Abstract. This article introduces a new scheme to express a rectangular function as a linear combination of Gaussian functions. The main idea of this scheme is to fit samples of the rectangular function by adapting the well-known clustering algorithm, Gaussian mixture models (GMM). This method has several advantages over existing fitting algorithms. First, it incorporates an efficient algorithm that can fit a larger number of Gaussian functions. Second, the weights of the linear combination are constrained by the algorithm to lie in the interval [0,1], which avoids the large/small values that cause numerical instability. Third, almost all of the fitted Gaussian functions lie within the interval of the rectangular function, which can be exploited to approximate difficult definite integrals such as the multivariate normal integral. Experiments show that the scheme is efficient when low accuracy is required (error of order 10^{-4}), especially for small values of the correlation coefficients.

Keywords: Function Approximation, Gaussian Functions, Gaussian Mixture Models, Multivariate Normal Integrals.

AMS Subject Classification: 41Axx, 65D15, 65D30.

1. Introduction

The problem of function approximation arises in many areas of science and engineering where numerical techniques are employed. Common approaches involve Taylor series, orthogonal polynomials (Chebyshev, Hermite, Legendre, etc.), Gaussian functions, and Fourier series. An important application of function approximation is the numerical approximation of integrals that have no closed form. The multivariate normal complementary integral is one of the significant integrals that appear in many engineering and statistics computations. Much research has been devoted to approximating it; however, no single approximation is suitable for all dimensions and accuracy requirements. The integral is defined by

$$L(\mathbf{h}, \Sigma) = \frac{1}{\sqrt{(2\pi)^d |\Sigma|}} \int_{h_1}^{\infty} \cdots \int_{h_d}^{\infty} \exp\left\{-\frac{1}{2}\mathbf{x}^T \Sigma^{-1} \mathbf{x}\right\} d\mathbf{x},$$

where x = (x_1, x_2, ..., x_d) and Σ is a d × d symmetric positive definite covariance matrix. The problem has received considerable attention in the literature [13,14,16]. For d = 2, there are some series expressions [8,18,20] and efficient numerical techniques [4-6,11]. For the multivariate case, d > 2, there exist several powerful numerical methods based on multivariate integration techniques that rely on ordinary Monte Carlo methods, along with some common variance reduction techniques [2,3].

1 Department of Engineering Mathematics and Physics, Cairo University, 12613, Cairo, Egypt & University of Science and Technology, Zewail City of Science and Technology, 12588, Cairo, Egypt, e-mail: h [email protected], [email protected]
2 Department of Computer Engineering, Cairo University, 12613, Cairo, Egypt, e-mail: [email protected]
3 University of Science and Technology, Zewail City of Science and Technology, 12588, Cairo, Egypt, e-mail: [email protected]
Manuscript received 12 November 2014.


Another group of algorithms is based on computing upper and lower bounds on the probability (see [10,12] for a survey of these methods). Recently, Miwa [17] developed an algorithm that evaluates the multiple integral by transforming it into a recursive evaluation of one-dimensional integrations over a fine grid of points. This method is considered among the most efficient for d ≤ 10. Fayed and Atiya [9] derived a series expansion based on Fourier series that is more efficient up to d = 7. In most studies, the case when the components of x are equicorrelated, that is, ρ_{ij} = ρ for all i ≠ j and ρ_{ii} = 1, is often used as a benchmark. This case can be evaluated as ([23], p. 192):

$$L(\mathbf{h}, \Sigma) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp\left(-\frac{t^2}{2}\right) \prod_{i=1}^{d} \Phi\left(\frac{-h_i + t\sqrt{\rho}}{\sqrt{1-\rho}}\right) dt.$$
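This benchmark reduces the d-dimensional problem to a single one-dimensional quadrature. The following is a minimal sketch of its evaluation (our illustrative code, not the authors' implementation; SciPy's adaptive quadrature stands in for the Gauss-Kronrod (7,15) pair used in the experiments):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def equicorrelated_L(h, rho):
    """L(h, Sigma) for unit variances and constant correlation rho, 0 < rho < 1."""
    h = np.asarray(h, dtype=float)
    sr, s1r = np.sqrt(rho), np.sqrt(1.0 - rho)

    def integrand(t):
        # exp(-t^2/2)/sqrt(2*pi) * prod_i Phi((-h_i + t*sqrt(rho)) / sqrt(1-rho))
        return norm.pdf(t) * np.prod(norm.cdf((-h + t * sr) / s1r))

    value, _ = quad(integrand, -np.inf, np.inf)
    return value

# Check for d = 3, rho = 0.5: the orthant probability has the known closed
# form 1/8 + 3*arcsin(0.5)/(4*pi) = 0.25.
print(equicorrelated_L(np.zeros(3), 0.5))
```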

2. Gaussian mixture models as an approximation

One approach to approximating a function by a linear combination of Gaussians is to sample the function and then use radial basis function (RBF) networks in which the radial basis functions are chosen to be Gaussian [15]. However, to achieve good accuracy in approximating a rectangular function, a large network is often required. Another approach is the method proposed in [1]; however, this method fails to obtain good accuracy for the rectangular function due to the abrupt changes at the beginning and end of the function. Nonlinear regression is yet another alternative, but it becomes considerably slow if the number of components is moderately large, and it requires bounding the combination weights to avoid very small/large values that often deteriorate the approximation in the multivariate case. We therefore propose a simple method that circumvents the above problems, leading to a good approximation of a rectangular function. Consider the one-dimensional problem of approximating the following rectangular function:

$$R(x, T) = \begin{cases} 1 & 0 \le x \le T \\ 0 & \text{otherwise.} \end{cases}$$
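To fix notation for the code sketches that follow, here is an illustrative definition of R(x, T), the uniform sample grid described next, and an evaluation routine for the mixture form introduced below (all names are ours, not from the authors' code):

```python
import numpy as np

def rect(x, T):
    """Rectangular function R(x, T): 1 on [0, T], 0 elsewhere."""
    return ((x >= 0.0) & (x <= T)).astype(float)

def gmm_rect(x, T, w, mu, var):
    """Mixture approximation T * sum_i w_i N(x; mu_i, var_i) (model defined next)."""
    x = np.asarray(x, dtype=float)[:, None]                      # shape (M, 1)
    dens = np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return T * (dens * w).sum(axis=1)

T, step = 4.0, 0.001
xs = np.arange(0.0, T + step, step)   # M uniform samples of [0, T], as used below
```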

The interval [0, T] is sampled uniformly to generate M data points. These data points are used to fit the rectangular function by the following Gaussian mixture model:

$$R(x, T) \cong T \sum_{i=1}^{K} \omega_i\, p(x|\lambda_i),$$

where K is the number of mixture components, λ_i = {µ_i, σ_i²}, p(x|λ_i) ~ N(µ_i, σ_i²) is the normal distribution with mean µ_i and variance σ_i², and ω_i is the component weight in the mixture. For a predetermined number of mixtures K, the mixture parameters can be estimated iteratively using the expectation-maximization (EM) algorithm [7,22], as follows. Let X = {x_m ∈ R; m = 1, ..., M} be the sample sequence, and let θ_i = {ω_i, µ_i, σ_i²}, i = 1, ..., K, denote, respectively, the component weight, the mean, and the variance of the i-th normal component. To find the optimum values of the normal components using the EM algorithm, the following likelihood function is maximized:

$$f(\Theta) = \sum_{m=1}^{M} \ln\{p(x_m, i_m|\Theta)\} = \sum_{m=1}^{M} \ln\{p(x_m|i_m, \Theta)\, P_{i_m}\},$$

where Θ = {θ_1, ..., θ_K} and i_m ∈ {1, ..., K} denotes that x_m was generated from component i_m. At each iteration j of the EM algorithm, two steps are performed: the expectation step (E-step) and the maximization step (M-step), as described below.

196

APPL. COMPUT. MATH., V.14, N.2, 2015

E-step: Taking the expectation of f(Θ) based on the current estimate Θ_{j−1},

$$Q(\Theta, \Theta_{j-1}) = E\left[\ln\{p(x_m|i_m, \Theta)\, P_{i_m}\}\right] = \sum_{m=1}^{M} \sum_{i_m=1}^{K} P(i_m|x_m, \Theta_{j-1})\, \ln\{p(x_m|i_m, \Theta)\, P_{i_m}\},$$

where Q(Θ, Θ_{j−1}) is a function of Θ, assuming that Θ_{j−1} is fixed. The notation can now be simplified by dropping the index m from i_m, because for each m we sum over all possible values of i_m, and these are the same for all m. Note also that P_i becomes simply the component weight ω_i. For the GMM we have

$$p(x_m|i, \Theta) = \frac{1}{\sqrt{(2\pi\sigma_i^2)^d}} \exp\left\{-\frac{(x_m - \mu_i)^2}{2\sigma_i^2}\right\},$$

so

$$Q(\Theta, \Theta_{j-1}) = \sum_{m=1}^{M} \sum_{i=1}^{K} P(i|x_m, \Theta_{j-1}) \left\{-\frac{d}{2}\ln\left(2\pi\sigma_i^2\right) - \frac{1}{2\sigma_i^2}(x_m - \mu_i)^2 + \ln\omega_i\right\}.$$
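The responsibilities P(i|x_m, Θ_{j−1}) appearing in Q (their explicit formula is given at the end of the M-step below) can be computed as in the following sketch, assuming the one-dimensional case d = 1:

```python
import numpy as np

def e_step(x, w, mu, var):
    """Posterior responsibilities P(i | x_m, Theta) as an (M, K) matrix."""
    x = np.asarray(x, dtype=float)[:, None]                      # shape (M, 1)
    dens = np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    joint = w * dens                                             # w_i * p(x_m | i)
    return joint / joint.sum(axis=1, keepdims=True)
```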

M-step: By maximizing Q(Θ, Θ_{j−1}) with respect to ω_i, µ_i, σ_i², we get:

$$\omega_i^{(j)} = \frac{1}{M}\sum_{m=1}^{M} P(i|x_m, \Theta_{j-1}),$$

$$\mu_i^{(j)} = \frac{\sum_{m=1}^{M} P(i|x_m, \Theta_{j-1})\, x_m}{\sum_{m=1}^{M} P(i|x_m, \Theta_{j-1})},$$

$$\sigma_i^{2\,(j)} = \frac{\sum_{m=1}^{M} P(i|x_m, \Theta_{j-1})\left(x_m - \mu_i^{(j)}\right)^2}{\sum_{m=1}^{M} P(i|x_m, \Theta_{j-1})},$$

$$P(i|x_m, \Theta_{j-1}) = \frac{\omega_i^{(j-1)}\, p(x_m|i, \Theta_{j-1})}{\sum_{k=1}^{K} \omega_k^{(j-1)}\, p(x_m|k, \Theta_{j-1})}.$$
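A sketch of the M-step updates and the overall EM loop, assembling the formulas above with the e_step sketch; the initialization and iteration count are our assumptions, not the paper's:

```python
import numpy as np

def m_step(x, resp):
    """Closed-form updates for (w_i, mu_i, sigma_i^2) from responsibilities."""
    nk = resp.sum(axis=0)                          # effective count per component
    w = nk / len(x)
    mu = resp.T @ x / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

def fit_gmm_rect(x, K, iters=200, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    mu = rng.uniform(x.min(), x.max(), K)          # random initialization
    var = np.full(K, np.var(x) / K)
    w = np.full(K, 1.0 / K)
    for _ in range(iters):
        resp = e_step(x, w, mu, var)               # E-step (previous sketch)
        w, mu, var = m_step(x, resp)
    return w, mu, var
```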

Fig. 1 shows the results of fitting R(x, 4) using a sample step of 0.001 and different values of K. The results shown are the best of 100 runs with different random initializations, i.e., those attaining the minimum mean absolute error

$$MAE = \frac{1}{M}\sum_{m=1}^{M}\left|\,1 - T\sum_{i=1}^{K}\omega_i\, p(x_m|\lambda_i)\right|$$

(the fitted mixture is compared with the value 1, since all samples lie in [0, T]).
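The best-of-100-runs selection can be sketched as follows, reusing the earlier fit and evaluation sketches (the number of runs mirrors the experiment; the other details are our assumptions):

```python
import numpy as np

def best_fit(x, T, K, runs=100):
    """Keep the restart with the smallest MAE; the true value is 1 at all samples."""
    best = None
    for seed in range(runs):
        w, mu, var = fit_gmm_rect(x, K, rng=np.random.default_rng(seed))
        mae = np.mean(np.abs(1.0 - gmm_rect(x, T, w, mu, var)))
        if best is None or mae < best[0]:
            best = (mae, w, mu, var)
    return best
```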

A simple form can also be obtained by constraining the Gaussian functions to have the same variance σ² and equally spaced means, i.e., µ_i = µ_0 + (i − 1)δ. Thereby, the traditional EM algorithm is modified to obtain µ_0 and δ from the following linear system:

$$\begin{pmatrix} M & \sum\limits_{i=1}^{K}\sum\limits_{m=1}^{M}(i-1)\,P(i|x_m,\Theta_{j-1}) \\ \sum\limits_{i=1}^{K}\sum\limits_{m=1}^{M}(i-1)\,P(i|x_m,\Theta_{j-1}) & \sum\limits_{i=1}^{K}\sum\limits_{m=1}^{M}(i-1)^2\,P(i|x_m,\Theta_{j-1}) \end{pmatrix} \begin{pmatrix} \mu_0 \\ \delta \end{pmatrix} = \begin{pmatrix} \sum\limits_{m=1}^{M} x_m \\ \sum\limits_{i=1}^{K}\sum\limits_{m=1}^{M}(i-1)\,P(i|x_m,\Theta_{j-1})\,x_m \end{pmatrix},$$

and the variance formula reduces to

$$\sigma^{2\,(j)} = \frac{1}{Md}\sum_{i=1}^{K}\sum_{m=1}^{M} P(i|x_m,\Theta_{j-1})\left(x_m - \mu_i^{(j)}\right)^2.$$
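A sketch of this modified M-step, transcribing the 2×2 linear system and the shared-variance formula for the one-dimensional case d = 1 (function name ours):

```python
import numpy as np

def m_step_constrained(x, resp):
    """Simple-form updates: w_i, shared sigma^2, and mu_i = mu0 + (i-1)*delta."""
    M, K = resp.shape
    i1 = np.arange(K)                                  # (i - 1) for i = 1..K
    S1 = (resp * i1).sum()                             # sum_{i,m} (i-1) r_mi
    S2 = (resp * i1 ** 2).sum()                        # sum_{i,m} (i-1)^2 r_mi
    S1x = (resp * i1).sum(axis=1) @ x                  # sum_{i,m} (i-1) r_mi x_m
    mu0, delta = np.linalg.solve([[M, S1], [S1, S2]], [x.sum(), S1x])
    mu = mu0 + i1 * delta
    w = resp.sum(axis=0) / M
    var = (resp * (x[:, None] - mu) ** 2).sum() / M    # shared variance, d = 1
    return w, mu, var
```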

Fig. 2 shows the results of fitting R(x, 4) using a sample step of 0.001 and different values of K under these constraints.

3. The normal integrals

Suppose that it is required to approximate the normal integral

$$I(h) = \frac{1}{\sqrt{2\pi}}\int_{h}^{\infty}\exp\left(-\frac{x^2}{2}\right)dx.$$

It can be approximated using the rectangular function R(x − h, T) as

$$I(h) \cong \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} R(x-h, T)\,\exp\left(-\frac{x^2}{2}\right)dx,$$

or

$$I(h) \cong \frac{T}{\sqrt{2\pi}}\sum_{i=1}^{K}\omega_i\int_{-\infty}^{\infty} N\!\left(x;\ \mu_i + h,\ \sigma_i^2\right)\exp\left(-\frac{x^2}{2}\right)dx,$$

which can simply be obtained as

$$I(h) \cong \frac{T}{\sqrt{2\pi}}\sum_{i=1}^{K}\frac{\omega_i}{\sqrt{1+\sigma_i^2}}\exp\left\{-\frac{(\mu_i+h)^2}{2\left(1+\sigma_i^2\right)}\right\},$$

where T is chosen such that the Gaussian function (1/√(2π)) exp(−T²/2) ≅ 0 (for h ≥ 0, it is reasonable to choose T = 4).

For the multivariate case, suppose that it is required to approximate

$$L(\mathbf{h}, \Sigma) = \frac{1}{\sqrt{(2\pi)^d|\Sigma|}}\int_{h_1}^{\infty}\cdots\int_{h_d}^{\infty}\exp\left\{-\frac{1}{2}\mathbf{x}^T\Sigma^{-1}\mathbf{x}\right\}d\mathbf{x}.$$

One way to approximate this multiple integral is to sample data in the d-dimensional space and apply EM to the sampled data as before. However, this approach led to poor results, as the efficiency of EM degrades when both the number of samples and the number of Gaussian functions increase. Alternatively, the integral was approximated along each dimension separately as follows:


Figure 1: Results of fitting R(x, 4) (solid line) using the general form of GMM (dashed line). Panels: (a) K = 5, MAE = 0.113; (b) K = 6, MAE = 0.094; (c) K = 10, MAE = 0.049; (d) K = 20, MAE = 0.026; (e) K = 30, MAE = 0.024; (f) K = 50, MAE = 0.019.

Figure 2: Results of fitting R(x, 4) (solid line) using the simple form of GMM (dashed line). Panels: (g) K = 5, MAE = 0.128; (h) K = 6, MAE = 0.115; (i) K = 10, MAE = 0.085; (j) K = 20, MAE = 0.057; (k) K = 30, MAE = 0.045; (l) K = 50, MAE = 0.033.


$$L(\mathbf{h}, \Sigma) \cong \frac{T^d}{\sqrt{(2\pi)^d|\Sigma|}}\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}\sum_{i_1=1}^{K}\cdots\sum_{i_d=1}^{K}\omega_{i_1}\cdots\omega_{i_d}\, N(\mathbf{x};\ \boldsymbol{\mu}_i + \mathbf{h},\ \Sigma_i)\,\exp\left\{-\frac{1}{2}\mathbf{x}^T\Sigma^{-1}\mathbf{x}\right\}d\mathbf{x},$$

which can be evaluated as [21]:

$$L(\mathbf{h}, \Sigma) \cong \frac{T^d}{\sqrt{(2\pi)^d}}\sum_{i_1=1}^{K}\cdots\sum_{i_d=1}^{K}\omega_{i_1}\cdots\omega_{i_d}\,\frac{\exp(\alpha_i)}{\sqrt{|\Sigma+\Sigma_i|}},$$

where

$$\alpha_i = \frac{1}{2}(\boldsymbol{\mu}_i+\mathbf{h})^T\Sigma_i^{-1}\left(\Sigma_i^{-1}+\Sigma^{-1}\right)^{-1}\Sigma_i^{-1}(\boldsymbol{\mu}_i+\mathbf{h}) - \frac{1}{2}(\boldsymbol{\mu}_i+\mathbf{h})^T\Sigma_i^{-1}(\boldsymbol{\mu}_i+\mathbf{h}),$$

$$\mathbf{h} = \begin{pmatrix} h_1 \\ \vdots \\ h_d \end{pmatrix},\qquad \boldsymbol{\mu}_i = \begin{pmatrix} \mu_{i_1} \\ \vdots \\ \mu_{i_d} \end{pmatrix},\qquad \Sigma_i = \begin{pmatrix} \sigma_{i_1}^2 & 0 & \cdots & 0 \\ 0 & \sigma_{i_2}^2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_{i_d}^2 \end{pmatrix}.$$

This form, like all other existing algorithms used in approximating the normal integral, suffers from the curse of dimensionality. However, to attain a simple expression that speeds it up significantly, we use the simple form described above; that is, the Gaussian functions are constrained to have the same variance σ² and µ_{i_l} = µ_0 + (i_l − 1)δ, 1 ≤ l ≤ d. Hence the integral can be approximated by

$$L(\mathbf{h}, \Sigma) \cong \frac{T^d}{\sqrt{(2\pi)^d\,|\Sigma + \sigma^2 I|}}\sum_{i_1=1}^{K}\cdots\sum_{i_d=1}^{K}\omega_{i_1}\cdots\omega_{i_d}\exp\left\{\frac{-1}{2\sigma^2}\left\|C(\boldsymbol{\mu}_i+\mathbf{h})\right\|_2^2\right\},$$

where C is an upper triangular matrix obtained from the Cholesky decomposition of the matrix [I − Σ(Σ + σ²I)^{-1}]. In this way, the computational complexity is reduced to O(d²K^d) flops.
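A sketch of evaluating this final expression, assuming the constrained fit (ω_i, µ_0, δ, σ²) of R(x, T) is available; the K^d multi-indices are enumerated directly, each term costing O(d²) for the quadratic form, consistent with the stated O(d²K^d) complexity (names ours):

```python
import itertools
import numpy as np

def L_gmm(h, Sigma, w, mu0, delta, var, T=4.0):
    """Simple-form approximation of L(h, Sigma) from the constrained GMM fit."""
    d, K = len(h), len(w)
    h = np.asarray(h, dtype=float)
    A = Sigma + var * np.eye(d)
    # Upper-triangular C with C^T C = I - Sigma (Sigma + sigma^2 I)^{-1}
    C = np.linalg.cholesky(np.eye(d) - Sigma @ np.linalg.inv(A)).T
    front = T ** d / np.sqrt((2.0 * np.pi) ** d * np.linalg.det(A))
    mu = mu0 + delta * np.arange(K)        # equally spaced one-dimensional means
    total = 0.0
    for idx in itertools.product(range(K), repeat=d):   # all K^d multi-indices
        m = C @ (mu[list(idx)] + h)
        total += np.prod(w[list(idx)]) * np.exp(-(m @ m) / (2.0 * var))
    return front * total
```

For the equicorrelated case, the output can be cross-checked against the one-dimensional quadrature sketch given in the introduction.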

4. Experimental results

The orthant probability L(0, Σ) is evaluated using the proposed method (GMM) for 7 ≤ d ≤ 10 and compared with Miwa's algorithm (available at http://mvtnorm.r-forge.r-project.org). As a benchmark, we used the Gauss-Kronrod (7, 15)-pair quadrature integration method for the equicorrelated case [23]. The probabilities are evaluated for ρ ∈ {0.1, 0.2, ..., 0.9}. For GMM, T = 4 is used to sample the rectangular function R(x, T), and K = 5 and K = 6 are investigated. For Miwa's algorithm, the grid sizes examined are G = 8 and G = 16 (which have running times comparable to the proposed approach). We implemented the methods in C on Windows 7 running on a Pentium 2.4 GHz PC with 3 GB RAM. The absolute error and the elapsed time are reported in Tables 1 to 4. It can be noticed that GMM is comparable to Miwa's algorithm in accuracy, especially for ρ ≤ 0.7, while being considerably faster for d ≥ 8. Moreover, as the dimension increases, Miwa's algorithm becomes much slower than GMM. Thus, for 7 ≤ d ≤ 10, when low accuracy is required, GMM is a reasonable choice; if high accuracy is needed, Miwa's algorithm is recommended.


5. Conclusions

In this paper, an approximation of a rectangular function is obtained by adapting the well-known Gaussian mixture models to express it as a linear combination of Gaussian functions. The proposed approximation is used to derive an approximate expression for the multivariate normal integral. The obtained expression is found to be fast when an error of order 10^{-4} is acceptable, especially for d ≥ 7 and ρ ≤ 0.7. Moreover, for d ≥ 10, it is considerably faster, and thus more appropriate and feasible, than Miwa's algorithm. In future work, optimization strategies other than the EM algorithm may be explored, as they may yield further improvements [19].

Table 1. Results of the orthant probabilities for d = 7

              Elapsed Time                            Error
  ρ    Miwa     Miwa     GMM     GMM      Miwa      Miwa      GMM       GMM
       G=8      G=16     K=5     K=6      G=8       G=16      K=5       K=6
 0.1   0.070    0.094    0.010   0.022    5.13E-04  3.16E-05  1.76E-04  1.13E-04
 0.2   0.071    0.094    0.010   0.022    4.42E-04  2.77E-05  1.77E-04  1.34E-04
 0.3   0.071    0.094    0.010   0.022    4.07E-04  2.50E-05  7.85E-05  1.05E-04
 0.4   0.072    0.094    0.010   0.022    4.01E-04  2.32E-05  1.70E-04  1.52E-05
 0.5   0.072    0.094    0.011   0.023    3.98E-04  2.11E-05  6.44E-04  2.70E-04
 0.6   0.071    0.094    0.011   0.023    3.57E-04  1.84E-05  1.47E-03  7.16E-04
 0.7   0.072    0.094    0.011   0.023    3.35E-04  1.69E-05  2.91E-03  1.35E-03
 0.8   0.072    0.094    0.011   0.023    3.55E-04  1.54E-05  4.97E-03  1.69E-03
 0.9   0.072    0.094    0.011   0.023    2.48E-04  1.20E-05  8.35E-03  6.46E-04

Table 2. Results of the orthant probabilities for d = 8

              Elapsed Time                            Error
  ρ    Miwa     Miwa     GMM     GMM      Miwa      Miwa      GMM       GMM
       G=8      G=16     K=5     K=6      G=8       G=16      K=5       K=6
 0.1   0.515    0.764    0.078   0.156    9.22E-04  5.74E-05  1.23E-04  8.04E-05
 0.2   0.468    0.764    0.062   0.281    9.43E-04  6.10E-05  1.25E-04  1.06E-04
 0.3   0.562    0.874    0.062   0.218    1.00E-03  6.52E-05  2.79E-05  8.19E-05
 0.4   0.437    0.764    0.062   0.265    1.09E-03  6.97E-05  2.36E-04  3.87E-05
 0.5   0.577    0.889    0.078   0.250    1.19E-03  7.36E-05  7.54E-04  3.06E-04
 0.6   0.468    0.749    0.079   0.172    1.25E-03  7.68E-05  1.67E-03  7.60E-04
 0.7   0.468    0.920    0.077   0.156    1.33E-03  8.15E-05  3.13E-03  1.33E-03
 0.8   0.577    0.920    0.076   0.162    1.47E-03  8.69E-05  5.58E-03  1.64E-03
 0.9   0.468    0.827    0.000   0.131    1.48E-03  9.05E-05  9.85E-03  2.94E-04

Table 3. Results of the orthant probabilities for d = 9

              Elapsed Time                            Error
  ρ    Miwa     Miwa     GMM     GMM      Miwa      Miwa      GMM       GMM
       G=8      G=16     K=5     K=6      G=8       G=16      K=5       K=6
 0.1   4.243    7.691    0.234   0.920    1.40E-03  8.61E-05  8.65E-05  5.87E-05
 0.2   4.321    7.675    0.312   1.295    1.66E-03  1.08E-04  9.11E-05  8.67E-05
 0.3   4.540    7.613    0.374   1.638    1.96E-03  1.30E-04  2.84E-06  7.07E-05
 0.4   4.805    7.862    0.265   1.404    2.32E-03  1.53E-04  2.74E-04  4.44E-05
 0.5   4.477    7.956    0.343   1.482    2.70E-03  1.76E-04  8.18E-04  3.12E-04
 0.6   4.306    7.753    0.281   0.967    3.06E-03  2.00E-04  1.79E-03  7.34E-04
 0.7   4.508    8.174    0.140   0.655    3.48E-03  2.26E-04  3.30E-03  1.25E-03
 0.8   4.633    8.876    0.194   0.728    4.01E-03  2.57E-04  6.05E-03  1.41E-03
 0.9   4.368    8.174    0.147   0.640    4.49E-03  2.91E-04  1.10E-02  3.51E-05


Table 4. Results of the orthant probabilities for d = 10

              Elapsed Time                             Error
  ρ    Miwa     Miwa      GMM     GMM      Miwa      Miwa      GMM       GMM
       G=8      G=16      K=5     K=6      G=8       G=16      K=5       K=6
 0.1  43.306   76.316     2.668  15.366    1.91E-03  1.15E-04  6.24E-05  4.41E-05
 0.2  44.242   74.740     3.744  21.637    2.59E-03  1.67E-04  6.89E-05  7.51E-05
 0.3  43.087   74.678     4.150  24.633    3.38E-03  2.24E-04  1.83E-05  6.92E-05
 0.4  43.353   78.047     4.181  23.088    4.28E-03  2.86E-04  2.88E-04  3.28E-05
 0.5  46.597   76.331     3.463  19.594    5.27E-03  3.51E-04  8.37E-04  2.82E-04
 0.6  40.794   74.896     2.824  13.073    6.30E-03  4.20E-04  1.79E-03  6.48E-04
 0.7  41.871   76.269     1.716  12.363    7.47E-03  4.98E-04  3.30E-03  1.07E-03
 0.8  43.743   78.890     1.796  13.323    8.91E-03  5.90E-04  6.24E-03  1.01E-03
 0.9  42.666   78.812     1.690  11.498    1.05E-02  7.02E-04  1.17E-02  7.36E-05

References

[1] Calcaterra, C. Linear combinations of Gaussians with a single variance are dense in L2, Proceedings of the World Congress on Engineering WCE, V.2, 2008.
[2] Deák, I. Three digit accurate multiple normal probabilities, Numer. Math., V.35, 1980, pp.369-380.
[3] Deák, I. Random Number Generators and Simulation, Akadémiai Kiadó, 1990.
[4] Divgi, D.R. Calculation of univariate and bivariate normal probability functions, Ann. Stat., V.7, N.4, 1979, pp.903-910.
[5] Drezner, Z. Computation of the bivariate normal integral, Math. Comput., V.32, 1978, pp.277-279.
[6] Drezner, Z., Wesolowsky, G.O. The computation of the bivariate normal integral, J. Stat. Comput. Simul., V.35, 1990, pp.101-107.
[7] Duda, R.O., Hart, P.E., Stork, D.G. Pattern Classification, 2nd ed., Wiley, New York, 2001.
[8] Fayed, H.A., Atiya, A.F. An evaluation of the integral of the product of the error function and the normal probability density, with application to the bivariate normal integral, Math. Comput., V.83, N.285, 2014, pp.235-250.
[9] Fayed, H.A., Atiya, A.F. A novel series expansion for the multivariate normal probability integrals based on Fourier series, Math. Comput., V.83, N.289, 2014, pp.2385-2402.
[10] Gassmann, H. Multivariate normal probabilities: implementing an old idea of Plackett's, J. Comp. Graph. Stat., V.12, N.3, 2003, pp.731-752.
[11] Genz, A. Numerical computation of rectangular bivariate and trivariate normal and t probabilities, Stat. Comput., V.14, N.3, 2004, pp.251-260.
[12] Genz, A. Comparison of methods for the computation of multivariate normal probabilities, Comp. Sci. Stat., V.25, 1993, pp.400-405.
[13] Gupta, S.S. Probability integrals of multivariate normal and multivariate t, Ann. Math. Statist., V.34, 1963, pp.792-828.
[14] Harris, B., Soms, A.P. The use of the tetrachoric series for evaluating multivariate normal probabilities, J. Multivariate Anal., V.10, 1980, pp.252-267.
[15] Haykin, S. Neural Networks: A Comprehensive Foundation, 2nd ed., Prentice-Hall, 1999.
[16] Kendall, M.G. Proof of relations connected with the tetrachoric series and its generalizations, Biometrika, V.32, 1941, pp.196-198.
[17] Miwa, T., Hayter, A.J., Kuriki, S. The evaluation of general non-centred orthant probabilities, J. R. Statist. Soc. B, V.65, 2003, pp.223-234.
[18] Owen, D.B. Tables for computing bivariate normal probabilities, Ann. Math. Stat., V.27, N.4, 1956, pp.1075-1090.
[19] Pardalos, P.M., Chinchuluun, A. Some recent developments in deterministic global optimization (survey), Appl. Comput. Math., V.5, N.1, 2006, pp.16-34.
[20] Pearson, K. Mathematical contributions to the theory of evolution. VII. On the correlation of characters not quantitatively measurable, Philos. Trans. R. Soc. S-A, V.196, 1901, pp.1-47.
[21] Petersen, K.B., Pedersen, M.S. The Matrix Cookbook, Nov 2012, version 20121115. Available: http://www2.imm.dtu.dk/pubdb/views/publication_details.php?id=3274
[22] Theodoridis, S., Koutroumbas, K. Pattern Recognition, 2nd ed., Elsevier, New York, 2003.
[23] Tong, Y.L. The Multivariate Normal Distribution, Springer Series in Statistics, Springer-Verlag, New York, 1990.


Hatem A. Fayed - is an Associate Professor at the Engineering Mathematics and Physics Department, Cairo University, and currently a seconded Associate Professor at Zewail City of Science and Technology. He received his Ph.D. from the Engineering Mathematics and Physics Department, Cairo University, in 2005. His research interests are in the areas of machine learning, time series forecasting, neural networks, optimization techniques, and image processing.

Amir F. Atiya - received his B.S. and M.S. degrees from Cairo University, and his M.S. and Ph.D. degrees from Caltech, Pasadena, CA, all in electrical engineering. Dr. Atiya is currently a Professor at the Department of Computer Engineering, Cairo University. His research interests are in the areas of machine learning, theory of forecasting, computational finance, dynamic pricing, and Monte Carlo Methods.

Ashraf H. Badawi - is the Dean of Student Affairs and an Assistant Professor at the Center of Nanotechnology at Zewail City. He is also the Director of the Learning Center of Learning Technologies. Prior to joining SMART, Ashraf was the lead WiMAX Solutions Specialist for Intel in the Middle East and Africa. He was an assistant professor at the Engineering Mathematics and Physics Department, Cairo University, from 2002 to 2009. He graduated from the Systems and Biomedical Engineering Department in Cairo in 1990, where he began his M.Sc. in Engineering Physics. He then moved to Winnipeg, Canada, to pursue his Ph.D. in Electrical Engineering at the University of Manitoba.
