J Sci Comput https://doi.org/10.1007/s10915-018-0712-z

A Robust Numerical Method for the Random Interface Grating Problem via Shape Calculus, Weak Galerkin Method, and Low-Rank Approximation

Gang Bao1,2 · Yanzhao Cao3 · Yongle Hao1 · Kai Zhang1

Received: 29 June 2017 / Revised: 19 February 2018 / Accepted: 13 April 2018 © Springer Science+Business Media, LLC, part of Springer Nature 2018

Abstract  We present an efficient numerical algorithm for solving random interface grating problems based on a combination of shape derivatives, the weak Galerkin method, and a low-rank approximation technique. Using the asymptotic perturbation approach via shape derivatives, we estimate the expectation and the variance of the random solution in terms of the magnitude of the perturbation. To capture the severe oscillations of the random solution near the interface with high resolution, we use the weak Galerkin method to solve the Helmholtz equation related to the grating interface problem at each realization. To compute the variance efficiently, we use a low-rank approximation method based on a pivoted Cholesky decomposition to compute the two-point correlation function. Two numerical experiments are conducted to demonstrate the efficiency of our algorithm.

Keywords  Random interface grating problem · Weak Galerkin · Low-rank approximation

Mathematics Subject Classification  78A45 · 65N30 · 65C05

✉ Kai Zhang
[email protected]

1 Department of Mathematics, Jilin University, Changchun, Jilin, People’s Republic of China
2 Department of Mathematics, Zhejiang University, Hangzhou, Zhejiang, People’s Republic of China
3 Department of Mathematics and Statistics, Auburn University, Auburn, USA

1 Introduction

Diffraction gratings are optical elements with periodic structures that consist of two or more media connected through interfaces. Ever since Wood found “a remarkable case of uneven distribution of light in a diffraction grating spectrum” [45], many mathematical methods, such as the variational method [33], the improved point matching method [28], the integral method [46], and the differential method [36,37], have been proposed to describe this phenomenon. Among these remarkable methods, electromagnetic wave diffraction governed by Maxwell’s equations in the diffraction grating region has become an efficient and popular tool [38]. The rigorous computation of the time-harmonic Maxwell equations diffracted by a one-dimensional periodic material can be reduced to the two-dimensional Helmholtz equation with associated boundary conditions.

From the mathematical point of view, there are two general categories of diffraction grating problems: the direct problem and the inverse problem. Given a specified incident wave, the direct problem is to determine the diffracted field from given smooth grating interfaces, while the inverse problem is to recover the grating interfaces from the far/near field of the diffracted field. The well-posedness of, and various numerical algorithms for, the deterministic direct problem have been studied by many researchers [5,6,8,10,38], and we refer to [1,2,9,11,19,20,25,30,39] and the references therein for theoretical and computational results on deterministic grating profile reconstruction.

As mentioned above, most of this research assumes that the interface between different media is a smooth perturbation of a reference interface. However, in practical applications, such as instruments at the nanoscale, the perturbation is not perfectly smooth, which may increase the discrepancies between the product design and the actual product. In this paper we assume that this perturbation is a random field. Though the variations of the interface may not be truly random, the non-smooth perturbation can often be modeled more accurately by a random field. Such an approach has been shown to be effective in handling elliptic interface problems [14]. To solve grating problems with random interfaces, one needs to solve random Helmholtz equations.
In the past decades, there have been many studies of numerical methods for random partial differential equations (RPDEs): random Galerkin finite element methods, generalized polynomial chaos (gPC) methods, and sparse grid methods have been developed for RPDEs with random coefficients, random source terms, random boundaries, or random interfaces [4,13–15,22,24,27,47]. One of the challenges of solving grating problems with random interfaces is that the solutions of the Helmholtz equations tend to oscillate near the interface. Our numerical experiments show that standard Galerkin finite element methods are ineffective in capturing such oscillations. In this paper we adopt the weak Galerkin method (WGM) to resolve this issue. The use of a discontinuous basis with boundary information makes it possible for the WGM to capture the oscillations on the interface boundary more accurately [34,43,44,48]. The main task of this paper is to accurately and efficiently approximate the first and second moments of the optical wave under a small perturbation of the reference interface. Assuming that the random interface is a small random perturbation of a reference interface, we can approximate the first moment, i.e., the expectation, by the optical wave at the reference interface with second order accuracy. On the other hand, the second moment can be approximated by finding the shape derivative in the tensor product space, which is equivalent to solving the Helmholtz equation in the tensor product space. Solving the Helmholtz equation in the tensor product space can be very computationally intensive because the dimension of the product space is doubled. To overcome this obstacle, we use a low-rank approximation via the pivoted Cholesky decomposition [22,23]. With such a low-rank approximation, the overall complexity is comparable to that of solving a non-tensor product problem.
We mention that this method is also easier to implement than the sparse tensor product method [24,40]. The rest of this paper is organized as follows. In Sect. 2, we present the preliminaries, including relevant definitions, notation, and the deterministic interface grating problem. In Sect. 3, we describe the random interface grating problem, the shape derivatives, and the expectation and two-point correlation function of the interface; we also derive the governing equations of the shape derivative based on shape calculus. In Sect. 4, we present the approximation results for the expectation and variance of the random interface grating


Fig. 1 The geometry of the grating problem

problem via the shape Taylor expansion, where the tensorized counterpart for the two-point correlation of the random solution is presented. In Sect. 5, we present the weak Galerkin method for the deterministic interface grating problem and for each realization of the random interface grating problem, and an efficient low-rank approximation is proposed to compute the two-point correlation function of the random interface. Section 6 provides numerical experiments which demonstrate the efficiency of our algorithms. The last section is devoted to some concluding remarks.

2 Deterministic Interface Grating Problem

In this section, we first specify the geometry of the deterministic grating problem. Then we formulate the Helmholtz equation for the optical wave propagation with the associated boundary conditions, as well as the corresponding variational problem.

The geometry of the grating problem is sketched in Fig. 1, where the grating interface Γ_0 is embedded in the truncated periodic grating domain

D_0 := {(x1, x2) | 0 ≤ x1 ≤ Λ, −b < x2 < b, b > 0},

with boundaries Γ_i given by

Γ_1 = {0 ≤ x1 ≤ Λ, x2 = b},  Γ_2 = {0 ≤ x1 ≤ Λ, x2 = −b},
Γ_3 = {x1 = 0, −b ≤ x2 ≤ b},  Γ_4 = {x1 = Λ, −b ≤ x2 ≤ b},

where Λ is the period. Assume that the scattering region and the substrate region are given by

D_1 = {(x1, x2) | 0 ≤ x1 ≤ Λ, x2 > b}  and  D_2 = {(x1, x2) | 0 ≤ x1 ≤ Λ, x2 < −b},

respectively.



Set the wavenumber k as

k(x1, x2) = { k_1,          for (x1, x2) ∈ D_1,
             k_0(x1, x2),  for (x1, x2) ∈ D_0,
             k_2,          for (x1, x2) ∈ D_2,

where k_j ∈ C with Re(k_j) > 0 and Im(k_j) ≥ 0 for j = 0, 1, 2. Let u^I = e^{iαx1 − iβ_1 x2} be an incoming plane wave from the top boundary Γ_1 with α = k_1 sin θ and β_1 = k_1 cos θ, where θ ∈ (0, π/2) is the incident angle. It has been shown in [7,18] that the diffraction of a time-harmonic electromagnetic wave in the transverse electric (TE) polarization can be reduced to the two-dimensional Helmholtz equation; denote by u the solution of the Helmholtz equation. Then the corresponding quasi-periodic function u_α := u e^{−iαx1} satisfies



Δ_α u_α + k_0² u_α = 0,  in D_0,

with Δ_α = Δ + 2iα ∂_{x1} − α². It is reasonable to impose periodic boundary conditions on Γ_3 and Γ_4 for u_α. In order to set suitable boundary conditions on Γ_1 and Γ_2, we introduce the following Dirichlet-to-Neumann (DtN) operators.
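The identity behind the operator Δ_α can be checked symbolically. The following SymPy sketch (our illustration, not part of the paper) verifies that Δ(u_α e^{iαx1}) = e^{iαx1} Δ_α u_α for an arbitrary smooth u_α, which is exactly why the quasi-periodic substitution u_α = u e^{−iαx1} turns the Helmholtz equation for u into the Δ_α-equation above.

```python
# Symbolic check of Δ(u_α e^{iα x1}) = e^{iα x1} Δ_α u_α, with
# Δ_α = Δ + 2iα ∂_{x1} − α².  The function name `ua` stands for an
# arbitrary smooth quasi-periodic field (an assumption for illustration).
import sympy as sp

x1, x2, alpha = sp.symbols("x1 x2 alpha", real=True)
ua = sp.Function("ua")(x1, x2)          # u_α, arbitrary smooth function

u = ua * sp.exp(sp.I * alpha * x1)      # full field u = u_α e^{iα x1}
lap_u = sp.diff(u, x1, 2) + sp.diff(u, x2, 2)

# Δ_α u_α = Δ u_α + 2iα ∂_{x1} u_α − α² u_α
lap_alpha_ua = (sp.diff(ua, x1, 2) + sp.diff(ua, x2, 2)
                + 2 * sp.I * alpha * sp.diff(ua, x1) - alpha**2 * ua)

diff_expr = sp.simplify(lap_u - sp.exp(sp.I * alpha * x1) * lap_alpha_ua)
```

In particular, u solves Δu + k_0²u = 0 if and only if u_α solves Δ_α u_α + k_0²u_α = 0.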

Definition 2.1 Let f ∈ H^{1/2}(Γ_j) for j = 1, 2. The DtN operator T_j : H^{1/2}(Γ_j) → H^{−1/2}(Γ_j) is given by

(T_j f)(x1) = Σ_{n∈Z} iβ_j^{(n)} f^{(n)} e^{iα_n x1},   f^{(n)} = (1/Λ) ∫_0^Λ f(x1) e^{−iα_n x1} dx1,

where α_n = (2πn)/Λ and, for real k_j,

β_j^{(n)} = β_j^{(n)}(α) = { (k_j² − (α + α_n)²)^{1/2},   k_j² > (α_n + α)²,
                            i((α + α_n)² − k_j²)^{1/2},  k_j² < (α_n + α)².

It is obvious that β_j^{(0)} = β_j, j = 1, 2. We assume that k_j² ≠ (α + α_n)² for all n ∈ Z and j = 1, 2, to exclude “resonance”. Based on the DtN operators T_j, the quasi-periodic solution u_α satisfies transparent boundary conditions on Γ_1 and Γ_2 (cf. [7,18]). Then the deterministic interface grating problem is given by
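For a concrete sense of how the DtN map acts, the sketch below applies a truncated version of T_j to grid samples of f via the FFT; the period Λ, the wavenumber k, and α are illustrative values, not taken from the paper.

```python
# Truncated DtN map T f = Σ_n iβ^{(n)} f^{(n)} e^{iα_n x1} applied via FFT.
# All numerical values (Λ, α, k) are illustrative assumptions.
import numpy as np

Lam = 2 * np.pi          # grating period Λ (assumed)
alpha = 0.3              # α = k1 sin(θ) (assumed)
k = 1.5                  # wavenumber on the artificial boundary (assumed)

def beta(n, k, alpha, Lam):
    """β^{(n)}: sqrt(k² − (α + α_n)²) above cutoff, i·sqrt(...) below."""
    an = 2 * np.pi * n / Lam
    s = k**2 - (alpha + an)**2
    return np.sqrt(s + 0j) if s > 0 else 1j * np.sqrt(-s)

def dtn_apply(f_vals):
    """Apply the truncated DtN operator to samples of f on a uniform grid."""
    N = f_vals.size
    fhat = np.fft.fft(f_vals) / N                  # Fourier coefficients f^{(n)}
    n = np.fft.fftfreq(N, d=1.0 / N).astype(int)   # integer mode indices
    bet = np.array([beta(m, k, alpha, Lam) for m in n])
    return np.fft.ifft(1j * bet * fhat * N)

x = np.linspace(0, Lam, 64, endpoint=False)
f = np.exp(1j * 2 * np.pi * 3 * x / Lam)   # a single Fourier mode, n = 3
g = dtn_apply(f)
# For a single mode, T f = iβ^{(3)} f exactly
assert np.allclose(g, 1j * beta(3, k, alpha, Lam) * f)
```

Each Fourier mode is simply multiplied by iβ^{(n)}, so propagating modes pick up a purely imaginary factor while evanescent modes pick up a real, damping one.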


L u_α = 0,  in D_0,   (1)
[u_α] = 0,  on Γ_0,   (2)
[∂u_α/∂n] = 0,  on Γ_0,   (3)
∂u_α/∂n = T_1(u_α) − 2iβ_1 e^{−iβ_1 b},  on Γ_1,   (4)
∂u_α/∂n = T_2(u_α),  on Γ_2,   (5)
u_α(Λ, x2) = u_α(0, x2),  on Γ_3 and Γ_4,   (6)
(∂u_α/∂n)(Λ, x2) = −(∂u_α/∂n)(0, x2),  on Γ_3 and Γ_4,   (7)

where the operator L = Δ_α + k_0², [v] is the jump of v across Γ_0, and n is the unit normal vector of Γ_0 pointing into D_2, or the unit outward normal vector on Γ_1 and Γ_2. Define the Hilbert space

H_p^1(D_0) = { v ∈ H^1(D_0) : v(0, x2) = v(Λ, x2), −b ≤ x2 ≤ b }.

Then the variational formulation of the deterministic interface grating problem (1)–(7) is: Find u_α ∈ H_p^1(D_0) such that

a_1(u_α, v) = f_1(v),  ∀ v ∈ H_p^1(D_0),   (8)

where

a_1(u, v) = −∫_{D_0} ∇u · ∇v̄ dx + ∫_{D_0} (k_0² − α²) u v̄ dx + 2iα ∫_{D_0} (∂u/∂x_1) v̄ dx
            + ∫_{Γ_1} (T_1 u) v̄ ds + ∫_{Γ_2} (T_2 u) v̄ ds,   (9)

f_1(v) = 2iβ_1 ∫_{Γ_1} e^{−iβ_1 b} v̄ ds.   (10)

Remark 2.2 Since the jump conditions in (2)–(3) vanish, the deterministic interface grating problem (1)–(7) is equivalent to the deterministic grating problem in [7]. Assume that k_0 ∈ L^∞(D_0); then the weak formulation (8)–(10) of the deterministic interface grating problem admits a unique solution u_α ∈ H_p^1(D_0) for all frequencies except possibly a discrete set of values of k_0 (cf. [18]). For simplicity of notation, from now on we drop the subscript α in the notation u_α for the quasi-periodic solution.

3 Stochastic Interface Grating Problem

3.1 Stochastic Interfaces and the Model Problem

Let (Ω, F, P) be a probability space. Let

A := { v ∈ C^{2,1}(Γ_0, R²) : ‖v‖_{C^{2,1}(Γ_0, R²)} ≤ 1 },

and let the random field κ ∈ L²(Ω, C^{2,1}(Γ_0, R²)) satisfy κ(·, ω) ∈ A almost surely for ω ∈ Ω, where C^{2,1}(Γ_0, R²) is, as usual, the set of twice differentiable mappings with Lipschitz continuous derivatives (cf. [17]). Given any perturbation amplitude δ with 0 ≤ δ < δ_0, the random interface is defined as

Γ_δ(ω) := { x + δ κ(x, ω) n(x) : x ∈ Γ_0 }.   (11)

It is easy to see that the random interface Γ_δ(ω) degenerates to the nominal interface Γ_0 as δ → 0.



Since κ ∈ L²(Ω, C^{2,1}(Γ_0, R²)), we can define the mean, two-point correlation function, and covariance of κ(x, ω) as

E_κ(x) = E(κ(x, ω)) = ∫_Ω κ(x, ω) dP(ω),  x ∈ Γ_0,

Cor_κ(x, y) = E(κ(x, ω) κ(y, ω)) = ∫_Ω κ(x, ω) κ(y, ω) dP(ω),  x, y ∈ Γ_0,

Covar_κ(x, y) = Cor_κ(x, y) − E_κ(x) E_κ(y),  x, y ∈ Γ_0,

where E denotes the expectation with respect to the probability measure P. Without loss of generality, we may assume that

E_κ(x) = 0.   (12)

Otherwise, we may readjust the nominal interface so that (12) holds. Combining (11) and (12) yields

E(Γ_δ(ω)) := { x + δ E_κ(x) n(x) : x ∈ Γ_0 } = Γ_0  and  Covar_κ(x, y) = Cor_κ(x, y).   (13)
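A realization of such a zero-mean random perturbation can be sketched as follows. The truncated Fourier series, the coefficient decay rate, the flat nominal interface x2 = 0 (so that n = (0, 1)), and all numerical values are illustrative assumptions, not the paper's choices.

```python
# Sampling a zero-mean, Λ-periodic random field κ(·, ω) and forming points of
# the perturbed interface Γ_δ(ω) = {x + δ κ(x, ω) n(x)} over a flat nominal
# interface x2 = 0 (assumption: n = (0, 1)).
import numpy as np

rng = np.random.default_rng(0)
Lam = 1.0                       # period Λ (assumed)
delta = 0.05                    # perturbation amplitude δ (assumed)
N_modes, N_pts = 8, 200

x1 = np.linspace(0.0, Lam, N_pts, endpoint=False)

def sample_kappa(rng):
    """One realization of a smooth, periodic, zero-mean field κ(x1, ω)."""
    kappa = np.zeros_like(x1)
    for n in range(1, N_modes + 1):
        sigma_n = n ** -3        # fast decay -> smooth realizations (assumption)
        a, b = rng.normal(0.0, sigma_n, 2)
        kappa += (a * np.cos(2 * np.pi * n * x1 / Lam)
                  + b * np.sin(2 * np.pi * n * x1 / Lam))
    return kappa

kappa = sample_kappa(rng)
interface = np.column_stack([x1, delta * kappa])   # points of Γ_δ(ω)

# Monte Carlo check of the zero-mean assumption (12)
mean_kappa = np.mean([sample_kappa(rng) for _ in range(2000)], axis=0)
```

Averaging many realizations drives the sampled mean toward zero, consistent with E(Γ_δ(ω)) = Γ_0 in (13).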

The random interface grating problem is then defined as

L u(x, ω) = 0,  in (D_δ^−(ω) ∪ D_δ^+(ω)) × Ω,
[u(x, ω)] = 0,  on Γ_δ(ω) × Ω,
[∂u/∂n (x, ω)] = 0,  on Γ_δ(ω) × Ω,
∂u/∂n (x, ω) = T_1(u(x, ω)) − 2iβ_1 e^{−iβ_1 b},  on Γ_1 × Ω,
∂u/∂n (x, ω) = T_2(u(x, ω)),  on Γ_2 × Ω,
u(Λ, x2, ω) = u(0, x2, ω),  on Γ_3 × Ω and Γ_4 × Ω,
(∂u/∂n)(Λ, x2, ω) = −(∂u/∂n)(0, x2, ω),  on Γ_3 × Ω and Γ_4 × Ω,   (14)

where D_0 = D_δ^−(ω) ∪ D_δ^+(ω) with the random interface Γ_δ(ω), as shown in Fig. 2.

Fig. 2 The geometry of the random interface grating problem

3.2 Deterministic PDEs of Shape Derivatives

To derive the deterministic equations of the shape derivatives, we first introduce some concepts and lemmas of shape calculus for the deterministic case. Let the real number T > 0 and the map V : [0, T] × R² → R² be given. Then V can be viewed as a time-dependent velocity field {V(t) : 0 ≤ t ≤ T} defined on R² by x ↦ V(t)(x) := V(t, x). We have the following formula for boundary integrals.

Lemma 3.1 [17] Let Γ be the boundary of a bounded open subset D of R² of class C², let ϕ be an element of C¹([0, τ]; H²_loc(R²)), and let the velocity field V ∈ C⁰([0, τ]; C¹_loc(R², R²)). Consider the function

J(t) := ∫_{Γ_t} ϕ(t) dΓ_t,   (15)

where Γ_t = Γ_t(V) denotes the boundary Γ transported by the velocity field V. Then the derivative of J(t) with respect to t at t = 0 is given by

dJ(0) = ∫_Γ ( ϕ′(0) + ( ∂ϕ/∂n + H ϕ ) V(0) · n ) dΓ,   (16)

where ϕ′(0) := ϕ′(0)(x) = (∂ϕ/∂t)(0, x) and H is the additive curvature of Γ.
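Formula (16) can be sanity-checked on a simple geometry. In the following sketch (our illustration, not from the paper) Γ_t is the circle of radius 1 + t, i.e., V = n so that V(0)·n = 1, the integrand is the time-independent ϕ(x1, x2) = x1² + x2², and H = 1 is the curvature of the unit circle.

```python
# Numerical check of (16) on the unit circle: J(t) = ∫_{Γ_t} ϕ dΓ_t with
# Γ_t the circle of radius 1 + t and ϕ = x1² + x2² (illustrative choices).
import numpy as np

def J(t, m=20000):
    """J(t) over the circle of radius 1 + t, by the trapezoidal/midpoint rule."""
    r = 1.0 + t
    phi = r**2                                  # ϕ = x1² + x2² = r² on the circle
    return phi * r * 2 * np.pi                  # dΓ = r dθ, integrand constant

# Finite-difference derivative of J at t = 0
eps = 1e-6
dJ_fd = (J(eps) - J(-eps)) / (2 * eps)

# Right-hand side of (16): ϕ'(0) = 0 (ϕ time-independent), ∂ϕ/∂n = 2r = 2,
# H = 1, V(0)·n = 1, integrated over the unit circle of length 2π
dJ_formula = (2.0 + 1.0 * 1.0) * 2 * np.pi      # = 6π

assert abs(dJ_fd - dJ_formula) < 1e-4
```

Indeed J(t) = 2π(1 + t)³, so dJ(0) = 6π, matching the curvature formula exactly.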

Next, we derive the deterministic equations of the shape derivatives with respect to the nominal interface Γ_0 via shape calculus. Here, we require Γ_0 ∈ C^{3,1} to ensure that the random interface satisfies Γ_δ(ω) ∈ C^{2,1}(Γ_0, R²) almost surely for ω ∈ Ω. For this purpose, we define the shape derivatives by the pointwise limit (cf. [42]). Recall the deterministic interface grating problem (1)–(7) with respect to the nominal interface Γ_0, and consider a deterministic interface grating problem with respect to the deterministically perturbed interface Γ_δ:

L u_δ = 0,  in D_0 = D_δ^− ∪ D_δ^+,   (17)
[u_δ] = 0,  on Γ_δ,   (18)
[∂u_δ/∂n] = 0,  on Γ_δ,   (19)
∂u_δ/∂n = T_1(u_δ) − 2iβ_1 e^{−iβ_1 b},  on Γ_1,   (20)
∂u_δ/∂n = T_2(u_δ),  on Γ_2,   (21)
u_δ(Λ, x2) = u_δ(0, x2),  on Γ_3 and Γ_4,   (22)
(∂u_δ/∂n)(Λ, x2) = −(∂u_δ/∂n)(0, x2),  on Γ_3 and Γ_4.   (23)

Here, the deterministically perturbed interface is given by

Γ_δ = { x + δ κ(x) n(x) : x ∈ Γ_0 }   (24)



with κ(x) ∈ A and 0 ≤ δ ≤ δ_0. The domains D_δ^− and D_δ^+ are the subdomains of D_0 separated by the perturbed interface Γ_δ. Similarly, we define the restrictions of u_δ to D_δ^− and D_δ^+ by u_δ^− and u_δ^+, respectively.

Definition 3.2 Let u and u_δ be the solutions of (1)–(7) and (17)–(23), respectively. The first order shape derivative of the deterministic interface grating problem is defined by

du := du[κ](x) = lim_{δ→0} (u_δ(x) − u(x))/δ,   x ∈ (D_0^− ∩ D_δ^−) ∪ (D_0^+ ∩ D_δ^+).   (25)

For the definitions of higher order shape derivatives, d^k u := d^k u[κ_1, κ_2, …, κ_k](x), we refer to [17,26] for more details.

Let u^+ := u|_{D_0^+} and u^− := u|_{D_0^−}. Denote by u^+|_{Γ_0} and u^−|_{Γ_0} the traces of u on Γ_0 from D_0^+ and D_0^−, respectively; similar quantities are understood in the same manner. The following lemma presents the interface problem with respect to the nominal interface Γ_0 for the first order shape derivative du.

Lemma 3.3 Assume that the deterministically perturbed interface in (24) satisfies Γ_δ ∈ C^{2,1}(Γ_0, R²), u_δ^± ∈ H²(D_0^±), and δ_0 is sufficiently small to guarantee that the interface Γ_δ is not degenerate and lies inside the domain D_0. Then the first order shape derivative du ∈ H¹(D_0^−) ∪ H¹(D_0^+) exists and is the solution of the following interface problem:

L du = 0,  in D_0 = D_0^− ∪ D_0^+,   (26)
[du] = 0,  on Γ_0,   (27)
[∂du/∂n] = κ [k_0²] u,  on Γ_0,   (28)
∂du/∂n = T_1(du),  on Γ_1,   (29)
∂du/∂n = T_2(du),  on Γ_2,   (30)
du(Λ, x2) = du(0, x2),  on Γ_3 and Γ_4,   (31)
(∂du/∂n)(Λ, x2) = −(∂du/∂n)(0, x2),  on Γ_3 and Γ_4.   (32)

Proof The proof follows a process similar to [22,31]; we split it into four steps.

Step 1. Taking the difference of (20) and (4), dividing both sides by δ, letting δ → 0, and using the definition (25) and Definition 2.1, we obtain the boundary condition (29) as follows:

∂du/∂n = lim_{δ→0} (1/δ) ( ∂u_δ/∂n − ∂u/∂n )
       = lim_{δ→0} (1/δ) ( T_1(u_δ) − T_1(u) )
       = lim_{δ→0} Σ_{n∈Z} iβ_1^{(n)} ( (1/Λ) ∫_0^Λ (1/δ)(u_δ(x1) − u(x1)) e^{−iα_n x1} dx1 ) e^{iα_n x1}
       = T_1(du).

The boundary conditions (30)–(32) follow in a similar manner.



The remainder of the proof is devoted to deriving Eq. (26) and the jump conditions (27)–(28) on the nominal interface Γ_0.

Step 2. Consider the variational formulations of (1)–(7) and (17)–(23). Taking the difference of the two formulations, dividing both sides by δ, letting δ → 0, and using the definition (25), Definition 2.1, and integration by parts, we obtain

0 = −∫_{D_0^−} ∇du^− · ∇v̄ dx − ∫_{D_0^+} ∇du^+ · ∇v̄ dx
    + ∫_{D_0^−} ((k_0^−)² − |α|²) du^− v̄ dx + ∫_{D_0^+} ((k_0^+)² − |α|²) du^+ v̄ dx
    + 2iα ∫_{D_0^−} (∂du^−/∂x_1) v̄ dx + 2iα ∫_{D_0^+} (∂du^+/∂x_1) v̄ dx
    − ∫_{Γ_0} [∂du/∂n] v̄ ds − ∫_{Γ_0} κ ∇(u^− − u^+) · ∇v̄ ds
    + 2iα ∫_{Γ_0} κ (∂/∂x_1)(u^− − u^+) v̄ ds
    + ∫_{Γ_0} κ ( ((k_0^−)² − |α|²) u^− − ((k_0^+)² − |α|²) u^+ ) v̄ ds.   (33)

Equation (26) follows from testing (33) with φ ∈ C_0^∞(D_0^+) and φ ∈ C_0^∞(D_0^−), respectively [31].

Step 3. Testing Eq. (33) with a smooth function v ∈ C^∞(D_0) ⊂ H_p^1(D_0) on Γ_0 (i.e., v|_{Γ_0} = v^+|_{Γ_0} = v^−|_{Γ_0}) and using Eq. (26), we derive

∫_{Γ_0} [∂du/∂n] v̄ ds = −∫_{Γ_0} κ ∇(u^− − u^+) · ∇v̄ ds + 2iα ∫_{Γ_0} κ (∂/∂x_1)(u^− − u^+) v̄ ds
    − ∫_{Γ_0} κ |α|² (u^− − u^+) v̄ ds + ∫_{Γ_0} κ ( (k_0^−)² u^− − (k_0^+)² u^+ ) v̄ ds.   (34)

The jump condition (2) on the interface Γ_0 implies [∇_{Γ_0} u] = 0, where the surface gradient ∇_{Γ_0} is defined by ∇_{Γ_0} v = ∇v − (∇v · n) n. It follows from the jump condition (3) that

[∇u] = 0.   (35)

Using the jump conditions (35) and (2), we reduce Eq. (34) to

∫_{Γ_0} [∂du/∂n] v̄ ds = ∫_{Γ_0} κ ( (k_0^−)² − (k_0^+)² ) u v̄ ds,

which implies the jump condition (28) via the fundamental principle of variational calculus.

Step 4. For any v ∈ C^∞(D_0), the jump condition (18) and Lemma 3.1 imply that

J(δ) := ∫_{Γ_δ} (u_δ^− − u_δ^+) v̄ ds ≡ 0.

Taking the derivative of both sides of the above equation yields dJ(δ) ≡ 0, which in particular implies that

dJ(0) = 0.   (36)

Combining the formula (16), the definition of the first order shape derivative (25), and the identity (36), we obtain

∫_{Γ_0} (du^− − du^+) v̄ ds = −∫_{Γ_0} κ ( ∂/∂n ((u^− − u^+) v̄) + H (u^− − u^+) v̄ ) ds
    = −∫_{Γ_0} ( κ (∂[u]/∂n) v̄ + κ [u] (∂v̄/∂n) + κ H [u] v̄ ) ds,

where the facts V(0) = κ(x) n(x) and V(0) · n(x) = κ(x) are used. It follows from the jump conditions (2) and (3) that

∫_{Γ_0} (du^− − du^+) v̄ ds = −∫_{Γ_0} κ [∂u/∂n] v̄ ds = 0,

which implies the jump condition (27) via the fundamental principle of variational calculus. This completes the proof. □

If κ(x) in Definition 3.2 is replaced by κ(x, ω), we obtain the first order shape derivative du(x, ω) := du[κ(ω)](x) of the random interface grating problem in the same manner. Applying the proof of Lemma 3.3 for any fixed realization κ(·, ω), ω ∈ Ω, in (28), we obtain the random version of Lemma 3.3 as follows.

Theorem 3.4 Assume that the random perturbed interface satisfies Γ_δ(·) ∈ L²(Ω, C^{2,1}(Γ_0, R²)), and δ_0 is sufficiently small to guarantee that the random interface Γ_δ(ω) is not degenerate and lies inside the domain D_0. Then the first order shape derivative du ∈ L²(Ω, H¹(D_0^−) ∪ H¹(D_0^+)) exists and is the solution of the following interface problem:

L du(x, ω) = 0,  in D_0 × Ω,
[du(x, ω)] = 0,  on Γ_0 × Ω,
[∂du(x, ω)/∂n] = κ(x, ω) [k_0²] u(x),  on Γ_0 × Ω,
∂du(x, ω)/∂n = T_1(du(x, ω)),  on Γ_1 × Ω,
∂du(x, ω)/∂n = T_2(du(x, ω)),  on Γ_2 × Ω,
du(Λ, x2, ω) = du(0, x2, ω),  on Γ_3 × Ω and Γ_4 × Ω,
(∂du/∂n)(Λ, x2, ω) = −(∂du/∂n)(0, x2, ω),  on Γ_3 × Ω and Γ_4 × Ω,   (37)

where the term u on the right-hand side of the third equation is the deterministic solution of (1)–(7).

4 Approximation of the Expectation and the Variance

In this section, we consider approximations of the expectation and the variance of the solution of the random interface grating problem (14), denoted by E_u(x) and Var_u(x) respectively, in terms of the perturbation amplitude δ. First, we consider the expectation E_u(x). Let

D(Γ_0, δ_0) = { y = x + t δ_0 n(x), t ∈ [−1, 1], x ∈ Γ_0 }   (38)

and let K be a compact subset of D_0 \ D(Γ_0, δ_0).


Using the shape Taylor expansion technique (cf. [22,24,29]), we have the following approximation for the expectation E_u(x).

Theorem 4.1 Let u and ũ be the solutions of the random interface grating problem (14) and the deterministic interface grating problem (1)–(7), respectively. Under the assumptions of Theorem 3.4, the following shape Taylor expansion holds:

u(x, ω) = ũ(x) + (du[κ(ω)](x)) δ + (1/2!) (d²u[κ(ω), κ(ω)](x)) δ² + O(δ³)   (39)

for all x ∈ K and a.e. ω ∈ Ω. Moreover, the following approximation of the expectation E_u(x) holds:

E_u(x) = ũ(x) + O(δ²),  x ∈ K.   (40)

The approximation (40) follows from the linearity of the expectation operator E, the zero-mean assumption (12), and the uniqueness result in Remark 2.2; we refer to [24] for the details.

We introduce some preliminaries in order to approximate the variance Var_u(x). Using Eq. (13) and a simple calculation, we derive the following identity:

Var_du(x) = Var(du[κ(ω)](x)) = Cor(du[κ(ω)](x), du[κ(ω)](y))|_{y=x} = Cor_du(x, y)|_{y=x}.   (41)

Taking the expectation on both sides of (37), we obtain a tensor product interface problem for Cor_du(x, y) (cf. [22]). Although the tensor product problem can be solved by the sparse grid technique (cf. [24,40]), the tensor product boundary conditions are too complicated to be used to construct the linear system. To avoid solving the tensor product problem directly, we use an efficient low-rank approximation based on the pivoted Cholesky decomposition to approximate Cor_du(x, y). Once the quantity Cor_du(x, y) is at hand, using the identity (41) we can approximate the variance Var_u(x) with the help of the following theorem [22].

Theorem 4.2 Under the assumptions of Theorem 3.4, the following estimate for the variance Var_u(x) of the solution of the random interface grating problem (14) holds:

Var_u(x) = [Var_du(x)] δ² + O(δ³) = Cor_du(x, y)|_{y=x} δ² + O(δ³),  x ∈ K.   (42)

In the rest of this section, we introduce the low-rank approximation for Cor_κ, which is a useful tool for approximating Var_u(x). Assume that Cor_κ ∈ C(Γ_0 × Γ_0), and let {λ_i} and {f_i} be the eigenvalues and the corresponding eigenfunctions of the integral operator with Cor_κ(x, y) as its kernel:

(C f_i)(x) = λ_i f_i(x),  x ∈ Γ_0,  with  (C f_i)(x) = ∫_{Γ_0} Cor_κ(x, y) f_i(y) dy.   (43)

Since the integral operator C is compact, the eigenvalues {λ_i} are nonnegative and decay to zero, where the decay rate is controlled by the smoothness of Cor_κ [41]. Furthermore, set C = [Cor_κ(x_i, x_j)]_{i,j} ∈ R^{N_0 × N_0}, where {x_i} are the nodal points of T_h on the interface Γ_0 with a total number of points N_0. It is easy to check that the following algebraic identity holds:

C = Σ_{i=1}^{N_0} λ_i f_i f_i^T,   (44)

where f_i ∈ R^{N_0} is the eigenvector of C corresponding to the eigenvalue λ_i, the discrete version of f_i(x) in (43). The discrete low-rank approximation of the matrix C is given by

C ≈ C_m = Σ_{i=1}^m c_i c_i^T,   (45)

with m ≪ N_0 and c_i ∈ R^{N_0}, where c_i is not necessarily an eigenvector of C. Here, we obtain the low-rank approximation by the pivoted Cholesky decomposition (PCD) of C (cf. [23]). Based on (44), one could instead use the so-called principal component analysis (PCA) to approximate the matrix C, computing its first m eigenpairs {λ_i, f_i}_{i=1}^m. The advantage of PCD is that only a few algebraic operations are needed to accomplish the approximation (45), without computing any eigenvalues or eigenvectors; only the main diagonal and the entries of the selected columns of C need to be evaluated. In case C has exponentially decreasing eigenvalues, the pivoted Cholesky decomposition converges optimally, in the sense that the rank m is uniformly bounded by the number of terms required by the spectral decomposition of C to reach the error ε_0 [23]. By the PCD algorithm, we easily obtain the low-rank approximation

Cor_κ ≈ Cor_{κ,m} = Σ_{i=1}^m κ_i ⊗ κ_i   (46)

of the two-point correlation function of κ(ω) in the full tensor product space V_h ⊗ V_h.
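A minimal pivoted Cholesky decomposition in the spirit of [23] can be sketched as follows (our sketch, not the authors' implementation; the Gaussian correlation kernel and all numerical values are illustrative assumptions). Note that the routine touches only the diagonal and the pivot columns of C, and stops when the trace of the residual drops below a tolerance.

```python
# Pivoted Cholesky decomposition (PCD) sketch: builds C ≈ C_m = Σ_i c_i c_iᵀ
# using only the main diagonal and the pivot columns of the SPSD matrix C.
import numpy as np

def pivoted_cholesky(get_col, diag, tol=1e-6, max_rank=None):
    """get_col(j): j-th column of C; diag: its main diagonal; tol: trace tolerance."""
    n = diag.size
    max_rank = n if max_rank is None else max_rank
    d = diag.astype(float).copy()        # residual diagonal diag(C − Σ c_i c_iᵀ)
    L = []                               # collected factors c_i
    while len(L) < max_rank and d.sum() > tol:
        p = int(np.argmax(d))            # pivot = largest residual diagonal entry
        col = get_col(p).astype(float).copy()
        for c in L:                      # subtract contribution of previous factors
            col -= c[p] * c
        c_new = col / np.sqrt(d[p])
        L.append(c_new)
        d -= c_new**2                    # cheap update of the residual diagonal
    return np.array(L)

# Example: a smooth correlation kernel on a 1-D grid (assumed kernel/length scale)
x = np.linspace(0.0, 1.0, 80)
C = np.exp(-((x[:, None] - x[None, :]) / 0.3) ** 2)

Lfac = pivoted_cholesky(lambda j: C[:, j], np.diag(C).copy())
Cm = Lfac.T @ Lfac                       # C_m = Σ c_i c_iᵀ
assert Lfac.shape[0] < x.size            # genuine rank reduction, m << N_0
assert np.abs(C - Cm).max() < 1e-3
```

Because the residual stays symmetric positive semidefinite, its off-diagonal entries are bounded by the residual diagonal, so the trace criterion also controls the entrywise error.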

5 Numerical Methods for the Expectation and the Variance

In this section, we present the weak Galerkin algorithm for solving the deterministic equation for ũ(x) in (40), which gives a second order approximation of the expectation of the random interface grating problem. Note from (42) that Var_du δ² is an order O(δ³) approximation of Var_u(x). An efficient scheme, based on a low-rank approximation via the pivoted Cholesky decomposition, is proposed to compute Var_du.

5.1 Weak Galerkin Method for the Expectation

The solution ũ(x) of (1)–(7), or its gradient, changes rapidly near the interface due to the jump of the wavenumber across the nominal interface Γ_0 and the complexity of the nominal interface. The traditional finite element method cannot capture the jumps or oscillations efficiently, whereas the stability and high-order accuracy of the WGM enable it to capture highly complex solutions exhibiting discontinuities or oscillations with high resolution [34,43]. To check the convergence rates of our proposed algorithms numerically, we also need certain sampling algorithms (e.g., the Monte Carlo method or the multilevel Monte Carlo method) to compute reference values of E_u(x) in (40) and Var_u(x) in (42), which entails solving a large number of deterministic PDEs, one per realization of the random interface. When using the WGM to solve each realization of the random interface grating problem (14), we introduce a stabilizer to control the jump between the solution values inside each polygon and along its boundary. A stabilizer was also introduced in the discontinuous Galerkin method (DGM) as a penalty term, but its mechanism is different from that of the WGM.



In the DGM, there is a need to balance the original weak formulation and the penalty term; as a result, the coefficient of the stabilizer is restricted to a certain range [3]. In contrast, there is no restriction on the coefficient of the stabilizer term in the WGM, and it can simply be chosen as 1. Moreover, the interfaces between two or more materials in the realizations are always complicated, and the WGM allows arbitrary polygons as long as the partitions are shape regular [32,35]. The convergence analysis of the WGM with different polygons is guaranteed under the same framework, which makes this method more flexible and robust.

Let T ⊂ R² be any polygonal domain with interior T_i and boundary T_b. The function v = {v_i, v_b} is called a weak function on T, where v_i ∈ L²(T) and v_b ∈ H^{1/2}(T_b). The space of weak functions defined on T is given by

W(T) = { v = {v_i, v_b} | v_i ∈ L²(T), v_b ∈ H^{1/2}(T_b) },   (47)

and we also define

H(div; T) = { v : v ∈ [L²(T)]², ∇ · v ∈ L²(T) }.

Definition 5.1 For any v ∈ W(T), the generalized weak derivative of v with respect to x_i is defined as a linear functional ∂_d v/∂x_i on H¹(T) such that

∫_T (∂_d v/∂x_i) q dx = −∫_T v_i (∂q/∂x_i) dx + ∫_{T_b} v_b q n_{x_i} ds,  ∀ q ∈ H¹(T).   (48)

Furthermore, for any v ∈ W(T), the weak gradient of v is defined as a linear functional ∇_d v on H(div; T) such that

∫_T ∇_d v · q dx = −∫_T v_i ∇ · q dx + ∫_{T_b} v_b q · n ds,  ∀ q ∈ H(div; T),   (49)

where n is the unit outward normal vector to ∂T. The weak gradient ∇_d v is well defined because, for any v = {v_i, v_b} ∈ W(T), the right-hand side of (49) is a bounded linear functional on H(div; T). Similar arguments hold for the generalized weak derivative ∂_d/∂x_i.

Denote by P_r(T) the set of polynomials on T of degree no more than r. We can define the discrete generalized weak derivative and the discrete weak gradient operator as the projections of ∂_d/∂x_i and ∇_d onto the appropriate polynomial subspaces, respectively.

Definition 5.2 The discrete generalized weak derivative operator, denoted by ∂_{d,r}/∂x_i, is defined as the unique polynomial ∂_{d,r}v/∂x_i|_T ∈ P_r(T) such that

∫_T (∂_{d,r}v/∂x_i) q dx = −∫_T v_i (∂q/∂x_i) dx + ∫_{T_b} v_b q n_{x_i} ds,  ∀ q ∈ P_r(T).   (50)

Furthermore, the discrete weak gradient operator, denoted by ∇_{d,r}, is defined as the unique polynomial ∇_{d,r} v|_T ∈ [P_r(T)]² such that

∫_T ∇_{d,r} v · q dx = −∫_T v_i ∇ · q dx + ∫_{T_b} v_b q · n ds,  ∀ q ∈ [P_r(T)]².   (51)

For the simplicity of notation, from now on we shall drop the subscript r in the notation ∇d,r for the discrete weak gradient.
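The lowest-order case r = 0 of (51) admits a closed form: testing with constant vector fields q (so that ∇ · q = 0) gives ∇_{d,0} v = (1/|T|) ∮_{T_b} v_b n ds on each polygon. The sketch below (our illustration; the pentagon and the linear test function are assumed, not from the paper) evaluates this boundary integral and checks that the weak gradient of a linear function recovers its exact gradient.

```python
# Lowest-order (r = 0) discrete weak gradient on a single polygon:
#   ∇_{d,0} v = (1/|T|) ∮_{T_b} v_b n ds,
# obtained from (51) by testing with constant q (for which ∇·q = 0).
import numpy as np

def weak_gradient_r0(verts, vb):
    """verts: (k,2) polygon vertices (CCW); vb: v_b at the edge midpoints."""
    k = verts.shape[0]
    area = 0.0
    integral = np.zeros(2)
    for e in range(k):
        p, q = verts[e], verts[(e + 1) % k]
        edge = q - p
        length = np.hypot(*edge)
        n = np.array([edge[1], -edge[0]]) / length   # outward normal (CCW polygon)
        integral += vb[e] * n * length               # midpoint rule, exact for linear v_b
        area += 0.5 * (p[0] * q[1] - q[0] * p[1])    # shoelace formula
    return integral / area

# A pentagon and a linear function v(x) = a·x + c, whose weak gradient must be a
verts = np.array([[0, 0], [2, 0], [2.5, 1], [1, 2], [-0.5, 1]], float)
a, c = np.array([1.3, -0.7]), 0.4
mids = 0.5 * (verts + np.roll(verts, -1, axis=0))
vb = mids @ a + c

g = weak_gradient_r0(verts, vb)
assert np.allclose(g, a)
```

The exactness for linear v follows from the divergence theorem: ∮ v n ds = ∫_T ∇v dx = a |T|.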



Let T_h be a partition of D_0 consisting of polygons, which is assumed to be shape regular (cf. [43]). The set of edges in T_h is denoted by E_h, and E_h^i = E_h \ ∂D_0 is the set of all interior edges. For T ∈ T_h, let h_T denote the diameter of the polygon T and h = max_{T∈T_h} h_T. Define the following function space on T_h:

W_h = { v = {v_i, v_b} | {v_i, v_b}|_T ∈ W(T), T ∈ T_h }.   (52)

For any T ∈ T_h, denote by P_l(T_i) and P_{l′}(T_b) the sets of polynomials on T_i and T_b of degree no more than l and l′, respectively. The local discrete weak function space is defined by

W_h^{l,l′}(T) = { v = {v_i, v_b} | v_i ∈ P_l(T_i), v_b ∈ P_{l′}(T_b) }.

Then the global discrete weak function space is given by

W_h^{l,l′}(D_0) = { v = {v_i, v_b} | {v_i, v_b}|_T ∈ W_h^{l,l′}(T), ∀ T ∈ T_h }.

Next, the weak Galerkin space, denoted by V_h, is given by

V_h = W_h^{l,l′}(D_0) ∩ H_p^1(D_0).

For any u, v ∈ W_h, we introduce the following two sesquilinear forms:

â(u, v) = −Σ_{T∈T_h} ∫_T ∇_d u · ∇_d v̄ dx + Σ_{T∈T_h} ∫_T (k_0² − α²) u v̄ dx
          + 2iα Σ_{T∈T_h} ∫_T (∂_d u/∂x_1) v̄ dx + ∫_{Γ_1} (T_1 u_b) v̄_b ds + ∫_{Γ_2} (T_2 u_b) v̄_b ds,   (53)

s(u, v) = Σ_{T∈T_h} ρ h_T^{−1} ( R_b(u_i) − u_b, R_b(v_i) − v_b )_{∂T},   (54)

where R_b is the standard L² projection from H¹(T) to H^{1/2}(T_b), and ρ > 0 is a constant stabilizer parameter; in practical computations, we set ρ = 1. The sesquilinear form s(u, v) is called a stabilizer and is used to control the jump between u_i and u_b along the boundary of T. Then the weak Galerkin scheme for the deterministic interface grating problem (1)–(7) is given by: Find u_h = {u_{hi}, u_{hb}} ∈ V_h such that

a_2(u_h, v_h) = f_2(v_h),  ∀ v_h ∈ V_h,   (55)

where a_2(u, v) = â(u, v) − s(u, v) and f_2 = f_1.

For v ∈ V_h, the norm of v is defined by (cf. [43])

|||v||| = ( ‖∇_d v‖² + ‖v_i‖² + ‖v‖_s² )^{1/2}   (56)

with

‖∇_d v‖² = Σ_{T∈T_h} ∫_T |∇_d v|² dx,  ‖v_i‖² = Σ_{T∈T_h} ∫_T |v_i|² dx,  ‖v‖_s² = s(v, v).

For any fixed T ∈ T_h and e ∈ E_h, we define the local L² projection operators

Q_i : L²(T) → P_l(T),   Q_b : L²(e) → P_{l′}(e).

Recalling the weak function space W_h in (52), we define a global projection operator Q_h : W_h → W_h^{l,l′} as follows:

Q_h v = { Q_i(v_i|_T), Q_b(v_b|_{∂T}), ∀ T ∈ T_h },  ∀ v = {v_i, v_b} ∈ W_h.   (57)

For φ ∈ H¹(T), we also use Q_b φ and Q_h φ to denote Q_b(φ|_{∂T}) and {Q_i(φ|_T), Q_b(φ|_{∂T})}, respectively. Now, the discretization error of the expectation of the random interface grating problem (14), obtained by solving (1)–(7) via the WGM, can be quantified by the following theorem.

For φ ∈ H 1 (T ), we also use Q b φ and Q h φ to denote Q b (φ|∂ T ) and {Q i (φ|T ), Q b (φ|∂ T )} respectively. Now, the discretization error of the expectation of the random interface grating problem (14), by solving (1)–(7) via the WGM, can be quantified by the following theorem. Theorem 5.3 Let Eu and  u h be the expectation of the solution of the random interface grating problem (14) and the weak Galerkin solution of the interface face problem (1)–(7) respectively. Under the assumptions of Theorem 3.4, the following estimate holds   |||Eu −  u h ||| ≤ C δ 2 + h  u H 2 (D − )∪H 2 (D + ) , (58) 0



where

\|v\|_{H^2(D_0^-) \cup H^2(D_0^+)} := \Big( \|v\|_{H^2(D_0^-)}^2 + \|v\|_{H^2(D_0^+)}^2 \Big)^{1/2},

and \hat{u} \in H^2(D_0^-) \cup H^2(D_0^+) is the solution of the interface problem (1)–(7).

Proof It follows from Theorem 6.5 in [49] with m = 1 that

|||Q_h \hat{u} - \hat{u}_h||| \le C h \, \|\hat{u}\|_{H^2(D_0^-) \cup H^2(D_0^+)}   (59)

for at least C^2-smooth interfaces. The standard interpolation estimate (cf. [16]) gives

|||\hat{u} - Q_h \hat{u}||| \le C h \, \|\hat{u}\|_{H^2(D_0^-) \cup H^2(D_0^+)}.   (60)

The inequality (58) then follows from the triangle inequality

|||Eu - \hat{u}_h||| \le |||Eu - \hat{u}||| + |||\hat{u} - Q_h \hat{u}||| + |||Q_h \hat{u} - \hat{u}_h|||,

together with Theorem 4.1 and the estimates (59)–(60). □

5.2 The Low-Rank Approximation for the Variance

In this subsection we propose an efficient algorithm for computing the two-point correlation function Cor_{du}, which yields a third-order approximation of the variance of the solution to (14) while avoiding the complicated construction of tensor-product equations. Recalling the deterministic equations (26)–(32) for the shape derivative, the gradient of du oscillates near the nominal interface Γ_0 as a result of the inhomogeneous Neumann jump condition (28). This makes a standard finite element discretization of the shape derivative unsuitable, whereas the WGM is a natural choice: it is stable and high-order accurate, easily handles partial differential equations on complex geometries, and captures highly complex solutions exhibiting discontinuities or oscillations with high resolution. The weak Galerkin scheme of the deterministic interface grating problem (26)–(32) for the shape derivative du reads: find du_h = \{du_{hi}, du_{hb}\} ∈ V_h such that

a_3(du_h, v_h) = f_3(v_h), \quad \forall v_h \in V_h,   (61)


with a_3(u, v) = \hat{a}(u, v) - s(u, v) and f_3(v) = -(\kappa [k^2] \hat{u}, v)_{\Gamma_0}. Here the sesquilinear forms \hat{a}(u, v) and s(u, v) are given in (53) and (54), respectively, and \hat{u} is the deterministic solution of (1)–(7). Consider now the deterministic interface equations (26)–(32), or equivalently their weak Galerkin scheme (61); we emphasize that the shape derivative du = du[κ] is a linear map with respect to κ. The following error estimate holds for the WGM applied to the shape derivative equations (26)–(32).

Theorem 5.4 For given κ ∈ C^{2,1}(Γ_0, \mathbb{R}^2), let du = du[κ] be the exact solution of (26)–(32) and du_h ∈ V_h the weak Galerkin solution of (61), respectively. Then there exists a constant \hat{h} > 0 such that

|||Q_h(du) - du_h||| \le C h \, \|du\|_{H^2(D_0^-) \cup H^2(D_0^+)}

holds for any h ∈ (0, \hat{h}). Replacing the function f in the weak Galerkin scheme of [49] with f_3 in (61), the proof of Theorem 5.4 follows from Theorem 6.5 in [49] with m = 1, again for at least C^2-smooth interfaces.

Recalling the low-rank approximation (46) of the two-point correlation function of κ(ω), it follows from the linearity of the map κ ↦ du[κ] governed by Eqs. (26)–(32), and from the fact that this linear map commutes with the expectation operator (see Section 4.1 in [23]), that

\mathrm{Cor}_{du} \approx \mathrm{Cor}_{du,m} = \sum_{i=1}^{m} du[\kappa_i] \otimes du[\kappa_i].   (62)
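Once the factors κ_i and the corresponding shape derivatives du[κ_i] are available, the pointwise variance is simply the diagonal of (62). A minimal sketch of this assembly, in which a hypothetical complex matrix A stands in for the weak Galerkin solution operator κ ↦ du[κ] (in the paper this is the solve of (26)–(32)):

```python
import numpy as np

# Hypothetical stand-in for the linear solve kappa -> du[kappa]: a fixed
# complex matrix A acting on nodal values.
rng = np.random.default_rng(1)
n, m = 60, 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

kappas = [rng.standard_normal(n) for _ in range(m)]   # low-rank factors of Cor_kappa
dus = [A @ k for k in kappas]                          # one deterministic solve per factor

# Pointwise variance = diagonal of Cor_{du,m} = sum_i du[kappa_i] (x) conj(du[kappa_i])
var_du = sum(np.abs(du)**2 for du in dus)

# The full two-point correlation is never needed in practice; we form it
# here only to check the diagonal against var_du.
cor = sum(np.outer(du, np.conj(du)) for du in dus)
```

The point of the factored form is that m deterministic solves replace the tensor-product equation for the full correlation.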

Now the algorithm for approximating the variance of the random interface grating problem (14) is given in Algorithm 1.

Algorithm 1 Low-rank approximation for the variance.
Input: the matrix [Cor_κ(x_i, x_j)]_{i,j} ∈ \mathbb{R}^{N_0 × N_0} and a tolerance ε_0 > 0.
Output: a third-order approximation of Var_u(x).
Step 1. Compute Cor_κ ≈ \sum_{i=1}^m κ_i ⊗ κ_i = \sum_{i=1}^m κ_i κ_i^T by the PCD algorithm.
Step 2. For each κ_i, compute the quantity du[κ_i] by solving Eqs. (26)–(32) via the weak Galerkin scheme (61).
Step 3. Compute the quantity [Var_{du}(x)] by formula (62).
Step 4. Compute the quantity [Var_{du}(x)]δ^2, which approximates Var_u(x) to third order by formula (42).
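Step 1 of Algorithm 1 can be sketched as follows. This is a generic pivoted Cholesky decomposition in the spirit of [23], not the authors' implementation; the grid size and tolerance used to exercise it on the Gaussian kernel of Example 2 are illustrative:

```python
import numpy as np

def pivoted_cholesky(A, tol):
    """Low-rank factorization A ~ L @ L.T of a symmetric positive
    semidefinite matrix: greedily pivot on the largest remaining diagonal
    entry until the trace of the residual drops below tol."""
    d = np.diag(A).astype(float).copy()   # residual diagonal
    cols = []
    while d.sum() > tol:
        p = int(np.argmax(d))             # pivot: largest remaining diagonal
        col = A[:, p].astype(float).copy()
        for l in cols:                    # subtract already-computed ranks
            col -= l[p] * l
        l_new = col / np.sqrt(d[p])
        cols.append(l_new)
        d -= l_new**2                     # update residual diagonal
    return np.column_stack(cols)          # n x m factor, m = detected rank

# Gaussian kernel Cor_kappa(x, y) = exp(-10 |x - y|^2) sampled on a 1-D grid
x = np.linspace(0.0, 2.0, 200)
K = np.exp(-10.0 * (x[:, None] - x[None, :])**2)
L = pivoted_cholesky(K, tol=1e-4)         # rank m much smaller than 200
```

Because the kernel's eigenvalues decay rapidly, the detected rank stays far below the matrix dimension, which is what makes Step 2 affordable.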

6 Numerical Simulations

In this section we present two numerical experiments that verify the theoretical predictions of Sect. 5. Consider the random interface grating problem (14) in D_0 = [0, 2] × [−2, 2] with Λ = 2 and b = 2. The nominal interface Γ_0 subdivides D_0 into the upper domain D_0^- and the lower domain D_0^+. Set the wavenumbers k_0^- = 3.2π and k_0^+ = 1.6π in the domains D_0^- and D_0^+, respectively. Assume that an incoming plane wave u^I = e^{iαx_1 − iβ_1 x_2}, with α = k_1 sin θ and β_1 = k_1 cos θ, impinges on the straight line {0 ≤ x_1 ≤ Λ, x_2 = 2} at the angle of incidence θ = π/6.


Fig. 3 Three different kinds of nominal interfaces: (a) a straight line; (b) a sine-like curve; (c) the graph of a test function

The first nominal interface is the straight line

\Gamma_0 = \{(x_1, x_2) \in \mathbb{R}^2 : 0 \le x_1 \le 2, \ x_2 = 0\},

as shown in Fig. 3a. The second nominal interface is given by a sine-like function, as shown in Fig. 3b; here \Gamma_0 \in C^\infty \subset C^{3,1}, which ensures that the random interface satisfies \Gamma_\delta(\omega) \in C^{2,1}(\Gamma_0, \mathbb{R}^2) almost surely for \omega \in \Omega, as required in Theorem 3.4. To demonstrate the robustness of our algorithms, the third nominal interface is the graph of a test function that does not belong to C^{3,1}; we shall see that our algorithms still work well in this case. The three nominal interfaces are parametrized by

(a) \hat{\gamma}(s) = 0, s ∈ [0, 2];
(b) \hat{\gamma}(s) = 0.1 sin(s), s ∈ [0, 2];
(c) \hat{\gamma}(s) = 0 for s ∈ [0, 0.9] ∪ [1.1, 2], \hat{\gamma}(s) = s − 0.9 for s ∈ (0.9, 1], and \hat{\gamma}(s) = −s + 1.1 for s ∈ (1, 1.1).

We use the MLMCWGM to compute reference values of the expectation and the variance of the random solution of (14) (cf. [21]). For each random interface sample a triangulation with triangles is constructed, and the corresponding solution is interpolated onto a fixed rectangular mesh on which the expectation and the variance are computed. The same strategy is adopted when (14) is solved by our algorithm to obtain the approximated expectation and variance, so the quantities of interest of the two methods are compared on the same rectangular meshes. To check the estimates of Theorems 4.1 and 4.2 numerically, we set δ_0 = 0.02 in (38); that is, we compare the two methods only on K = D_0 \setminus D(\Gamma_0, \delta_0). All simulations were run on a high-performance computer, an Inspur TS10000 with 12 cores, using OpenMP-MPI.

Example 1 Let \hat{\gamma}(s) = (x_1(s), x_2(s)) be the parametrization of the nominal interface \Gamma_0. According to (11), the random interface \Gamma_\delta(\omega) can be written as

\gamma(s, \omega) = \hat{\gamma}(s) + \delta \, \kappa(s, \omega) \, n(s).   (63)


Let

\kappa(s, \omega) = a_0(\omega) + \sum_{i=1}^{5} \big( a_i(\omega) \cos(i\pi s) + b_i(\omega) \sin(i\pi s) \big)   (64)

be the random interface perturbation, where the a_i(ω) and b_i(ω) are i.i.d. random variables, uniformly distributed in [−1, 1]. The corresponding two-point correlation function is

\mathrm{Cor}_\kappa(s, t) = \frac{1}{3} \sum_{k=0}^{5} \big\{ \cos(k\pi s)\cos(k\pi t) + \sin(k\pi s)\sin(k\pi t) \big\}.
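This correlation formula follows because Var(a_i) = Var(b_i) = 1/3 for U[−1, 1] coefficients; a quick Monte Carlo check of (64) against it (sample size and evaluation points are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def kappa(s, a, b):
    """Truncated Fourier perturbation (64) for a batch of samples:
    a has columns a_0..a_5, b has columns b_1..b_5."""
    i = np.arange(1, 6)
    return a[:, 0] + a[:, 1:] @ np.cos(i * np.pi * s) + b @ np.sin(i * np.pi * s)

N = 200_000
a = rng.uniform(-1.0, 1.0, size=(N, 6))
b = rng.uniform(-1.0, 1.0, size=(N, 5))

s, t = 0.3, 1.1
mc = np.mean(kappa(s, a, b) * kappa(t, a, b))   # Monte Carlo estimate of E[kappa(s)kappa(t)]
k = np.arange(6)
exact = np.sum(np.cos(k*np.pi*s)*np.cos(k*np.pi*t)
               + np.sin(k*np.pi*s)*np.sin(k*np.pi*t)) / 3.0
```

Note that by the angle-difference identity the exact correlation depends only on s − t: Cor_κ(s, t) = (1/3) Σ_k cos(kπ(s − t)).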

First, we consider one realization of the random interface Γ_δ(ω), shown in Fig. 4a, obtained by substituting one sample of {a_i(ω)} and {b_i(ω)} into (63) and (64). Set δ = 0.002, the finest perturbation amplitude. Figure 4b shows the real part of the numerical solution of the deterministic interface grating problem on the realization of Γ_δ(ω) in Fig. 4a; the gradient of the solution clearly oscillates sharply near the interface. Table 1 reports the convergence of the FEM and the WGM as the mesh size h is refined. The fifth and ninth columns show that the L^2-error of the WGM converges at the expected second order, while the FEM converges at a comparable rate of about 1.85. The third and seventh columns show that the |||·|||-error of the WGM converges at the expected first order, while the H^1-error of the FEM converges only at about order 0.75, an indication of the numerical oscillations of the gradient of the exact solution near the interface. Hence the WGM is more efficient than the standard FEM for solving each realization of (14), as well as for computing the expectation and the variance of the random interface grating problem (cf. [32]).



Fig. 4 (a) One realization of the random interface Γ_δ(ω); (b) the real part of the numerical solution of the deterministic interface grating problem on the realization in (a)


Table 1 The H^1-, L^2-, and |||·|||-norm errors and convergence rates for Example 1

            FEM                                    WG
h           H^1-error   Rate    L^2-error   Rate   |||e_h|||   Rate    L^2-error   Rate
Λ/8         16.53       –       2.401       –      15.85       –       2.133       –
Λ/16        10.21       0.69    0.741       1.69   8.206       0.95    0.591       1.85
Λ/32        6.278       0.70    0.218       1.76   4.160       0.98    0.156       1.92
Λ/64        3.751       0.74    0.061       1.83   2.052       1.02    0.039       1.97
Λ/128       2.231       0.75    0.016       1.85   1.018       1.01    0.009       2.03
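The rate columns in Table 1 are the observed orders between successive meshes with h halved, i.e. rate = log₂(e(h)/e(h/2)). A quick check on the WG L²-error column (the printed errors are rounded to a few digits, so the recomputed rates differ slightly from the tabulated ones):

```python
import numpy as np

# WG L2-errors from Table 1 at h = Lambda/8, ..., Lambda/128
errs = np.array([2.133, 0.591, 0.156, 0.039, 0.009])
# Each refinement halves h, so the observed order is log2(e(h) / e(h/2)).
rates = np.log2(errs[:-1] / errs[1:])
```

All four observed rates cluster around 2, consistent with the expected second-order L² convergence of the WGM.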

Fig. 5 The L^2-errors of the expectation for the three nominal interfaces in the case of the uniform distribution: (a) a straight line; (b) a sine-like curve; (c) the graph of a test function. The least-squares quadratic fits shown in the three panels are f(δ) ≈ 127.15 δ^2, 127.03 δ^2, and 117.56 δ^2. The horizontal axis shows δ increasing from 0.002 to 0.02 with stepsize 0.002

Next, to check the convergence rates of the numerical method for the expectation (58) and the variance (42), which are of second and third order in the perturbation magnitude δ, respectively, we set h = Λ/128 as the finest spatial mesh and let K = [0, 2] × [−2, 2] \setminus [0, 2] × [−7δ_0, 7δ_0]. The random interface grating problems (14) are solved by the MLMCWGM, with δ increasing from 0.002 to 0.02 in steps of 0.002, to obtain reference values of Eu and Var_u. The numbers of samples on the different spatial levels are given by the strategy formulated in Theorem 4.5 of [12]; we refer to [12,32] for more details on the multi-level Monte Carlo method. We use the WGM to solve the deterministic interface grating problem (1)–(7) and obtain \hat{u}. Figure 5 shows the L^2-errors between the reference Eu and the WGM solution \hat{u} as δ varies for the three nominal interfaces; they are of order O(δ^2), as predicted by Theorem 5.3. Here we fit the errors with the monomial f(δ) = aδ^2, a ∈ \mathbb{R}, and determine the coefficient a by least squares. Since the two-point correlation function in (64) has finitely many terms, the pivoted Cholesky decomposition in Algorithm 1 reproduces Cor_κ exactly, so the deterministic shape derivative equations (26)–(32) need to be solved only eleven times to obtain Var_{du}; this reduces the computational cost sharply compared with the MLMCWGM. Figure 6 shows the L^2-errors between the reference Var_u and the quantity [Var_{du}]δ^2 as δ varies for the three nominal interfaces.

Fig. 6 The L^2-errors of the variance for the three nominal interfaces in the case of the uniform distribution: (a) a straight line; (b) a sine-like curve; (c) the graph of a test function. The least-squares cubic fits are f(δ) ≈ 648.11 δ^3, 809.84 δ^3, and 452.76 δ^3. The horizontal axis shows δ increasing from 0.002 to 0.02 with stepsize 0.002

These errors are of order O(δ^3), as predicted by Theorem 4.2. Here we fit the variance errors with the monomial f(δ) = aδ^3, a ∈ \mathbb{R}, and determine the coefficient a by least squares.

Example 2 In this example we study a random interface with a Gaussian kernel. The domain D_0, the three nominal interfaces Γ_0, and the wavenumbers k_0^± are chosen as in the previous example. Let the two-point correlation function of the random interface perturbation be

\mathrm{Cor}_\kappa(x, y) = \exp(-10 \|x - y\|^2);   (65)

the random interface Γ_δ(ω) can then again be written as (63). To check the convergence rates of the numerical method for the expectation (58) and the variance (42), we set h = Λ/128 as the finest spatial mesh, K = [0, 2] × [−2, 2] \setminus [0, 2] × [−10δ_0, 10δ_0], and ε_0 = 10^{-4} as the error tolerance in Algorithm 1. We again use the MLMCWGM to obtain reference values of Eu and Var_u. The deterministic interface grating problem (1)–(7) is solved by the WGM only once to obtain \hat{u}, which already yields a good approximation of Eu and avoids solving a large number of deterministic PDEs (about 10^4 samples even with the MLMCWGM). Figure 7 shows the L^2-errors between the reference Eu and the WGM solution \hat{u} as δ varies; they are of order O(δ^2), as predicted by Theorem 5.3. For the variance, Algorithm 1 yields a low-rank approximation of rank m = 20 at the tolerance ε_0 = 10^{-4}, so the deterministic shape derivative equations (26)–(32) need to be solved only twenty times to obtain Var_{du}. Figure 8 shows the L^2-errors between the reference Var_u and the quantity [Var_{du}]δ^2 for the three nominal interfaces; they are of order O(δ^3), as predicted by Theorem 4.2.
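Both examples fit the computed errors with a one-parameter monomial f(δ) = aδ^p (p = 2 for the expectation, p = 3 for the variance), for which the least-squares coefficient has a closed form. A minimal sketch on synthetic stand-in data (the 127 δ² trend and the wiggle are illustrative, not the paper's measurements):

```python
import numpy as np

def fit_monomial(deltas, errors, p):
    """Least-squares fit of f(delta) = a * delta**p: minimizing
    sum_j (e_j - a*delta_j**p)**2 gives a = sum(e_j*d_j**p)/sum(d_j**(2p))."""
    basis = deltas**p
    return float(np.dot(errors, basis) / np.dot(basis, basis))

# Hypothetical data: errors following a 127*delta^2 trend with a small
# multiplicative wiggle, standing in for measured L2-errors.
deltas = np.arange(0.002, 0.0201, 0.002)
errors = 127.0 * deltas**2 * (1.0 + 0.02 * np.sin(300.0 * deltas))
a = fit_monomial(deltas, errors, 2)
```

Fitting the monomial directly (rather than a log-log line) weights the larger-δ points more heavily, where the relative noise in the measured errors is smallest.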
As far as oscillations of the error curves are concerned, there is little oscillation in Example 1, where the K-L coefficients are uniformly distributed. In Example 2, with the Gaussian kernel, we do observe oscillation in the error curves, especially for the variance. The pointwise differences between Var_u and [Var_{du}]δ^2 at δ = 0.02

Fig. 7 The L^2-errors of the expectation for the three nominal interfaces in the case of the Gaussian kernel: (a) a straight line; (b) a sine-like curve; (c) the graph of a test function. The least-squares quadratic fits are f(δ) ≈ 59.03 δ^2, 62.68 δ^2, and 60.38 δ^2. The horizontal axis shows δ increasing from 0.002 to 0.02 with stepsize 0.002

Fig. 8 The L^2-errors of the variance for the three nominal interfaces in the case of the Gaussian kernel: (a) a straight line; (b) a sine-like curve; (c) the graph of a test function. The least-squares cubic fits are f(δ) ≈ 261.52 δ^3, 261.51 δ^3, and 358.74 δ^3. The horizontal axis shows δ increasing from 0.002 to 0.02 with stepsize 0.002

for the three nominal interfaces are shown in Fig. 9, demonstrating the efficiency of our algorithm.

Fig. 9 The pointwise difference of the variance for the three nominal interfaces in the case of the Gaussian kernel: (a) a straight line; (b) a sine-like curve; (c) the graph of a test function

7 Conclusion

In this paper we present an efficient algorithm for solving the random interface grating problem (14). Given the expectation and the two-point correlation function of the random interface perturbation, we construct approximations of the expectation and the variance of the random solution whose accuracy is quantified in powers of the perturbation magnitude δ, based on shape derivatives and low-rank approximation techniques. We also present a weak Galerkin algorithm for solving the deterministic equation for \hat{u}(x) in (40), which yields a second-order approximation of the expectation. Note from (42) that Cor_{du} δ^2 is an approximation of order O(δ^3). An efficient scheme based on a low-rank approximation via the pivoted Cholesky decomposition is proposed to compute Cor_{du}. The numerical simulations verify the efficiency of our algorithms.

Acknowledgements The work of G. Bao is supported in part by an NSFC Innovative Group Fund (No. 11621101), an Integrated Project of the Major Research Plan of NSFC (No. 91630309), an NSFC A3 Project (No. 11421110002), and the Fundamental Research Funds for the Central Universities. The work of Y.Z. Cao is supported in part by the National Science Foundation under Grant Numbers DMS1620027 and DMS1620150. The work of K. Zhang is supported in part by the National Natural Science Foundation of China (91630201, U1530116, 11471141, 11771179, 11726102), by the Program for Cheung Kong Scholars of the Ministry of Education of China, and by the Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education, Jilin University. The authors also wish to thank the high-performance computing center of Jilin University and the computing center of Jilin Province for essential computing support.

References

1. Ammari, H.: Uniqueness theorems for an inverse problem in a doubly periodic structure. Inverse Probl. 11, 823–833 (1995)
2. Arens, T., Kirsch, A.: The factorization method in inverse scattering from periodic structures. Inverse Probl. 19, 1195–1211 (2003)
3. Arnold, D.N., Brezzi, F., Cockburn, B., Marini, L.D.: Unified analysis of discontinuous Galerkin methods for elliptic problems. SIAM J. Numer. Anal. 39, 1749–1779 (2001)
4. Babuška, I., Tempone, R., Zouraris, G.E.: Galerkin finite element approximations of stochastic elliptic partial differential equations. SIAM J. Numer. Anal. 42, 800–825 (2004)
5. Bao, G., Chen, Z.M., Wu, H.J.: Adaptive finite-element method for diffraction gratings. J. Opt. Soc. Am. A 22, 1106–1114 (2005)
6. Bao, G., Cowsar, L., Masters, W.: Mathematical Modeling in Optical Science. Frontiers in Applied Mathematics, vol. 22. SIAM, Philadelphia (2001)
7. Bao, G., Dobson, D.C., Cox, J.A.: Mathematical studies in rigorous grating theory. J. Opt. Soc. Am. A 12, 1029–1042 (1995)
8. Bao, G., Dobson, D.C.: On the scattering by a biperiodic structure. Proc. Am. Math. Soc. 128, 2715–2723 (2000)
9. Bao, G., Li, P., Lv, J.: Numerical solution of an inverse diffraction grating problem from phaseless data. J. Opt. Soc. Am. A 30, 293–299 (2013)
10. Bao, G., Li, P., Wu, H.: An adaptive edge element method with perfectly matched absorbing layers for wave scattering by biperiodic structures. Math. Comput. 79, 1–34 (2009)
11. Bao, G., Zhang, H., Zou, J.: Unique determination of periodic polyhedral structures by scattered electromagnetic fields. Trans. Am. Math. Soc. 363, 4527–4551 (2011)
12. Barth, A., Schwab, C., Zollinger, N.: Multi-level Monte Carlo finite element method for elliptic PDEs with stochastic coefficients. Numer. Math. 119(1), 123–161 (2011)
13. Caflisch, R.E.: Monte Carlo and quasi-Monte Carlo methods. Acta Numer. 7, 1–49 (1998)
14. Canuto, C., Kozubek, T.: A fictitious domain approach to the numerical solution of PDEs in stochastic domains. Numer. Math. 107, 257–293 (2007)
15. Cao, Y.Z., Zhang, R., Zhang, K.: Finite element and discontinuous Galerkin method for stochastic Helmholtz equation in R^d. J. Comput. Math. 26, 702–715 (2008)
16. Ciarlet, P.G.: The Finite Element Method for Elliptic Problems. Classics in Applied Mathematics, vol. 40. SIAM, Philadelphia (2002)
17. Delfour, M.C., Zolésio, J.P.: Shapes and Geometries: Analysis, Differential Calculus, and Optimization. SIAM, Philadelphia (2001)
18. Dobson, D.C.: Optimal design of periodic antireflective structures for the Helmholtz equation. Eur. J. Appl. Math. 4, 321–340 (1993)
19. Elschner, J., Hsiao, G., Rathsfeld, A.: Grating profile reconstruction based on finite elements and optimization techniques. SIAM J. Appl. Math. 64, 525–545 (2004)
20. Elschner, J., Rehberg, J., Schmidt, G.: Optimal regularity for elliptic transmission problems including C^1 interfaces. Interfaces Free Bound. 9, 233–252 (2007)
21. Hao, Y.L., Wang, X.S., Zhang, K.: Multi-level Monte Carlo weak Galerkin method for stochastic Brinkman problem. J. Comput. Appl. Math. 330, 214–227 (2018)
22. Harbrecht, H., Li, J.Z.: First order second moment analysis for stochastic interface problems based on low-rank approximation. ESAIM Math. Model. Numer. Anal. 47, 1533–1552 (2013)
23. Harbrecht, H., Peters, M., Schneider, R.: On the low-rank approximation by the pivoted Cholesky decomposition. Appl. Numer. Math. 62, 428–440 (2012)
24. Harbrecht, H., Schneider, R., Schwab, C.: Sparse second moment analysis for elliptic problems in stochastic domains. Numer. Math. 109, 385–414 (2008)
25. Hettlich, F.: Iterative regularization schemes in inverse scattering by periodic structures. Inverse Probl. 18, 701–714 (2002)
26. Hiptmair, R., Li, J.Z.: Shape derivatives in differential forms I: an intrinsic perspective. Ann. Mat. Pura Appl. 192, 1077–1098 (2013)
27. Holtz, M.: Sparse Grid Quadrature in High Dimensions with Applications in Finance and Insurance. Lecture Notes in Computational Science and Engineering, vol. 77. Springer, Berlin (2011)
28. Ikuno, H., Yasuura, K.: Improved point-matching method with application to scattering from a periodic surface. IEEE Trans. Antennas Propag. 21, 657–662 (1973)
29. Ito, K., Reitich, F.: A high-order perturbation approach to profile reconstruction: I. Perfectly conducting gratings. Inverse Probl. 15, 1067–1085 (1999)
30. Kirsch, A.: Uniqueness theorems in inverse scattering theory for periodic structures. Inverse Probl. 10, 145–152 (1994)
31. Kleemann, N.: Shape derivatives in Kondratiev spaces for conical diffraction. Math. Methods Appl. Sci. 35, 1365–1391 (2012)
32. Li, J.S., Wang, X.S., Zhang, K.: Multi-level Monte Carlo weak Galerkin method for elliptic equations with stochastic jump coefficients. Appl. Math. Comput. 275, 181–194 (2016)
33. Meecham, W.C.: Variational method for the calculation of the distribution of energy reflected from a periodic surface. J. Appl. Phys. 27, 361–367 (1956)
34. Mu, L., Wang, J.P., Wei, G.W., Ye, X., Zhao, S.: Weak Galerkin methods for second order elliptic interface problems. J. Comput. Phys. 250, 106–125 (2013)
35. Mu, L., Wang, J.P., Ye, X.: Weak Galerkin finite element method on polytopal mesh. arXiv:1204.3655v2
36. Nédélec, J.C., Starling, F.: Integral equation methods in a quasi-periodic diffraction problem for the time-harmonic Maxwell's equations. SIAM J. Math. Anal. 22, 1679–1701 (1991)
37. Petit, R.: Diffraction d'une onde plane par un réseau métallique. Rev. Opt. 45, 353–370 (1966)
38. Petit, R. (ed.): Electromagnetic Theory of Gratings, vol. 22. Springer, Heidelberg (1980)
39. Rathsfeld, A., Schmidt, G., Kleemann, B.H.: On a fast integral equation method for diffraction gratings. Commun. Comput. Phys. 1, 984–1009 (2006)
40. Schwab, C., Hanckes, C.J.: Electromagnetic wave scattering by random surfaces: uncertainty quantification via sparse tensor BEM. IMA J. Numer. Anal. 37(3), 1175–1210 (2017)
41. Schwab, C., Todor, R.A.: Karhunen–Loève approximation of random fields by generalized fast multipole methods. J. Comput. Phys. 217, 100–122 (2006)
42. Sokolowski, J., Zolésio, J.P.: Introduction to Shape Optimization: Shape Sensitivity Analysis. Springer, Berlin (1992)
43. Wang, J.P., Ye, X.: A weak Galerkin finite element method for second-order elliptic problems. J. Comput. Appl. Math. 241, 103–115 (2013)
44. Wang, R., Wang, X., Zhai, Q., Zhang, R.: A weak Galerkin finite element scheme for solving the stationary Stokes equations. J. Comput. Appl. Math. 302, 171–185 (2016)
45. Wood, R.W.: On a remarkable case of uneven distribution of light in a diffraction grating spectrum. Philos. Mag. 4, 399–402 (1902)
46. Wood, R.W., Cadilhac, M.: Étude théorique de la diffraction par un réseau. C. R. Acad. Sci. Paris 259, 2077–2080 (1964)
47. Xiu, D.B., Karniadakis, G.E.: Modeling uncertainty in flow simulations via generalized polynomial chaos. J. Comput. Phys. 187, 137–167 (2003)
48. Zhang, J.C., Zhang, K., Li, J.Z., Wang, X.S.: A weak Galerkin finite element method for the Navier–Stokes equations. Commun. Comput. Phys. 23(3), 706–746 (2018)
49. Zhang, J.C., Zhang, K., Li, J.Z., He, Z.B.: Numerical analysis of a weak Galerkin method for grating problem. Appl. Anal. 96(2), 190–214 (2017)
