A Combination of Downward Continuation and Local Approximation for Harmonic Potentials

arXiv:1312.5856v1 [math.NA] 20 Dec 2013

C. Gerhards∗

Abstract. This paper presents a method for the approximation of harmonic potentials that combines downward continuation of globally available data on a sphere Ω_R of radius R (e.g., a satellite's orbit) with locally available data on a sphere Ω_r of radius r < R (e.g., the spherical Earth's surface). The approximation is based on a two-step algorithm motivated by spherical multiscale expansions: first, a convolution with a scaling kernel Φ_N deals with the downward continuation from Ω_R to Ω_r, while in a second step, the result is locally refined by a convolution on Ω_r with a wavelet kernel Ψ̃_N. Different from earlier multiscale approaches, the primary goal is not to obtain an adaptive spatial localization but to simultaneously optimize the related kernels Φ_N, Ψ̃_N in such a way that the former behaves well for the downward continuation while the latter shows a good localization on Ω_r in the region where data is available. The concept is indicated for scalar as well as vector potentials.

Key Words. Harmonic potentials, downward continuation, spatial localization, spherical basis functions.

AMS Subject Classification. 31B20, 41A35, 42C15, 65D15, 86-08, 86A22.

1 Introduction

Recent satellite missions monitoring the Earth's gravity and magnetic field supply a large amount of data with a fairly good global coverage. They are complemented by local/regional measurements at or near the Earth's surface. In order to obtain high-resolution gravitational models, such as EGM2008 (cf. [37]), or geomagnetic models, such as NGDC-720†, it becomes necessary to combine both types of data (i.e., paying tribute to the local/regional data availability at the Earth's surface as well as to downward continuation of the global satellite data). The upcoming Swarm satellite mission, e.g., aims at reducing the (spectral) gap between satellite data and local/regional data at or near the Earth's surface (cf. [19]), eventually leading to more accurate crustal magnetic field models. These efforts to obtain improved measurements also underpin the necessity to improve and adapt mathematical methods to the wide range of available data.

∗ Geomathematics Group, University of Kaiserslautern, PO Box 3049, 67663 Kaiserslautern; e-mail: [email protected]
† http://geomag.org/models/ngdc720.html


In order to deal with local/regional data sets, various types of localizing spherical basis functions have been developed over the past years and decades. Among them are spherical splines (e.g., [11], [42]), spherical cap harmonics (e.g., [24], [46]), and Slepian functions (e.g., [39], [40], [43], [44]). Spherical multiscale methods go a step further and allow a scale-dependent adaptation of scaling and wavelet kernels (see [7], [18], [26], and [41] for the early development). They are particularly well-suited to combine global and local/regional data sets of different resolution and have been applied to problems in geomagnetism and gravity field modeling in [3], [5], [13], [16], [18], [21], [22], [27], [28], [30], [34], and [35], to name a few. Matching pursuits as described, e.g., in [32] have been adapted more recently to meet the requirements of geoscientific problems (cf. [9], [10]). Their dictionary structure allows the inclusion of a variety of global and spatially localizing basis functions, of which adequate functions are selected automatically depending on the given data. When combining satellite data on a sphere Ω_R and local/regional data on a sphere Ω_r of radius r < R, one needs not only methods that can deal with the local/regional aspect but also methods that address the ill-posedness of the downward continuation of data on Ω_R. Typically, these two problems are treated separately. Downward continuation has been studied intensively, e.g., in [2], [12], [14], [15], [29], [38] (for the more mathematical aspects) and [6], [31], [47], [48] (for a more geophysical orientation). A particular approach to regularize downward continuation is given by multiscale methods (see, e.g., [12], [14], [15], [31], [38] for the special case of spherical geometries).
However, it seems that no approach intrinsically combines the two problems, especially regarding the fact that downward continuation is required for the data on Ω_R but not for the local/regional data on Ω_r. It is the goal of this paper, motivated by some of the previous multiscale methods, to introduce a two-step approximation reflecting such an intrinsic combination. More precisely, in the first step only data on Ω_R is used and downward continued by convolution with a scaling kernel Φ_N. In the second step, the approximation is refined by convolving the local/regional data on Ω_r with a spatially localizing wavelet kernel Ψ̃_N. The connection of the two steps is given by the construction of the kernels Φ_N, Ψ̃_N: both kernels are designed in such a way that they simultaneously minimize a functional that contains a penalty term for the downward continuation and a penalty term for spatial localization. Thus, the goal is not to first get a best possible approximation from satellite data only and then refine this approximation with local/regional data. It is rather to find a balance between the data on Ω_R and the data on Ω_r that in some sense leads to a best overall approximation.

1.1 Brief Description of the Approach

In the exterior of the Earth, the gravity and the crustal magnetic field can be described by a harmonic potential U. From satellite measurements we obtain data F_1 on a spherical orbit Ω_R = {x ∈ R³ : |x| = R} and from ground or near-ground measurements data F_2 in a subregion Γ_r of the spherical Earth's surface Ω_r of radius r < R (cf. Figure 1).

[Figure 1: The given data situation.]

The problem to solve is

ΔU = 0,  in Ω_r^ext,  (1.1)
U = F_1,  on Ω_R,  (1.2)
U = F_2,  on Γ_r,  (1.3)

with Ω_r^ext = {x ∈ R³ : |x| > r} denoting the space exterior to the sphere Ω_r. Of interest to us is the restriction U^+ = U|_{Ω_r}, i.e., the potential at the Earth's surface. It is well-known that Equations (1.1) and (1.2) determine U uniquely, and therefore also U^+, if F_1 is a continuous function and if U is assumed to decay sufficiently fast at infinity. However, in reality F_1 is only known on a set of finitely many discrete points and may contain noise. Therefore, considering F_1 as well as F_2 can improve the approximation of U^+, at least in the subregion Γ_r ⊂ Ω_r. Throughout this paper, we use an approximation U_N of U^+ of the form

U_N = T_N[F_1] + W̃_N[F_2],  (1.4)

for some sufficiently large integer N. It is motivated by spherical multiscale representations as introduced in [15] and [18]: T_N reflects a regularized version of the downward continuation operator, acting as a scaling transform on Ω_R with the convolution kernel

Φ_N(x,y) = Σ_{n=0}^{N} Σ_{k=1}^{2n+1} Φ_N^∧(n) (1/r) Y_{n,k}(ξ) (1/R) Y_{n,k}(η).  (1.5)

We frequently use ξ and η to abbreviate the unit vectors x/|x| and y/|y|, respectively, and write r = |x|, R = |y|. Furthermore, {Y_{n,k}}_{n=0,1,...; k=1,...,2n+1} denotes a set of orthonormal spherical harmonics of degree n and order k. In order to refine the approximation with local data we use the operator W̃_N, which acts as a wavelet transform on Γ_r with the convolution kernel

Ψ̃_N(x,y) = Σ_{n=0}^{⌊κN⌋} Σ_{k=1}^{2n+1} Ψ̃_N^∧(n) (1/r) Y_{n,k}(ξ) (1/r) Y_{n,k}(η),  (1.6)

where κ > 1 is a fixed constant (reflecting the higher resolution desired for the refinement). The coefficients Φ_N^∧(n) and Ψ̃_N^∧(n) are typically called 'symbols' of the corresponding kernels. They are coupled by the relation Ψ̃_N^∧(n) = Φ̃_N^∧(n) − Φ_N^∧(n)(r/R)^n (see Section 2.2 for details), where Φ̃_N^∧(n) has been introduced as an auxiliary symbol. This coupling guarantees a smooth transition from the use of global satellite data on Ω_R to local data in Γ_r. The optimization of the kernels is done by simultaneously choosing symbols Φ_N^∧(n), Φ̃_N^∧(n) that minimize a certain functional F. The functional essentially determines how much emphasis is put on a good behaviour for the downward continuation via T_N and how much on a good spatial localization of the kernel Ψ̃_N for W̃_N, thus also deciding on the contribution of the satellite data to the overall approximation and on the contribution of the local data in Γ_r. The general setting and notation as well as the choice of the functional F are described in Sections 2 and 3. Convergence results for the approximation are supplied in Section 4 and numerical tests in Section 5. In Section 6, we transfer the concept to a vectorial setting, where the gradient ∇U is approximated from vectorial data on Ω_R and Γ_r. This is of interest, e.g., for the crustal magnetic field, where the actual sought-after quantity is the vectorial magnetic field b = ∇U.
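Since all kernels involved are zonal, evaluating them reduces to summing a one-dimensional Legendre series. A minimal sketch of such an evaluation (unit-sphere normalization, i.e., the 1/r and 1/R factors of (1.5)–(1.6) are suppressed; the Shannon-type symbols below are only a simple test choice, not the optimized symbols constructed later):

```python
import numpy as np
from numpy.polynomial import legendre

def zonal_kernel(symbols, t):
    # K(t) = sum_n (2n+1)/(4*pi) * K^(n) * P_n(t), t = xi . eta,
    # via the addition theorem for spherical harmonics
    coeffs = [(2 * n + 1) / (4 * np.pi) * s for n, s in enumerate(symbols)]
    return legendre.legval(t, coeffs)

N = 20
shannon = np.ones(N + 1)          # test symbols: K^(n) = 1 for n <= N
t = np.linspace(-1.0, 1.0, 201)
vals = zonal_kernel(shannon, t)
# at t = 1 all P_n(1) = 1, so the sum collapses to (N+1)^2/(4*pi)
print(vals[-1], (N + 1) ** 2 / (4 * np.pi))
```

The kernel is sharply peaked at t = 1 (points close to each other on the sphere), which is the spatial localization that the construction below tries to control.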

2 General Setting

As mentioned in the introduction, {Y_{n,k}}_{n=0,1,...; k=1,...,2n+1} denotes a set of orthonormal spherical harmonics of degree n and order k. The Legendre polynomial of degree n is denoted by P_n. It is connected to the spherical harmonics by the addition theorem Σ_{k=1}^{2n+1} Y_{n,k}(ξ) Y_{n,k}(η) = ((2n+1)/(4π)) P_n(ξ·η), for ξ, η ∈ Ω (for brevity, we usually write Ω if the unit sphere Ω_1 is meant). Aside from the space L²(Ω_r) of square-integrable functions on Ω_r, we also need the Sobolev space H_s(Ω_r), s ≥ 0. It is defined by

H_s(Ω_r) = { F ∈ L²(Ω_r) : Σ_{n=0}^{∞} Σ_{k=1}^{2n+1} (n + 1/2)^{2s} |F_r^∧(n,k)|² < ∞ },  (2.1)

where F_r^∧(n,k) denotes the Fourier coefficient of degree n and order k, i.e.,

F_r^∧(n,k) = ∫_{Ω_r} F(y) (1/r) Y_{n,k}(y/|y|) dω(y).  (2.2)

The corresponding norm is canonically given by

‖F‖_{H_s(Ω_r)} = ( Σ_{n=0}^{∞} Σ_{k=1}^{2n+1} (n + 1/2)^{2s} |F_r^∧(n,k)|² )^{1/2}.  (2.3)

A further notion that we need is

Pol_N = { K(x,y) = Σ_{n=0}^{N} Σ_{k=1}^{2n+1} K^∧(n) Y_{n,k}(ξ) Y_{n,k}(η) : K^∧(n) ∈ R },  (2.4)

the space of all bandlimited zonal kernels with maximal degree N (as always, ξ, η denote the unit vectors x/|x| and y/|y|, respectively). The kernels Φ_N and Ψ̃_N from (1.5) and (1.6) are members of such spaces. Zonal means that K only depends on the scalar product (x/|x|)·(y/|y|); more precisely,

K(x,y) = Σ_{n=0}^{N} ((2n+1)/(4π)) K^∧(n) P_n(ξ·η),  (2.5)

with P_n being the Legendre polynomial of degree n. Thus, instead of K(·,·) acting on Ω_r × Ω_R or Ω_r × Ω_r, it can also be regarded as a function K(·) acting on the interval [−1,1]. In both cases we simply write K.

2.1 Downward Continuation

We return to Equations (1.1)–(1.3) in order to derive the approximation (1.4), recalling that we are interested in U^+ = U|_{Ω_r}. We start by considering only the equations (1.1), (1.2). This leads to the problem of downward continuation, i.e., the reconstruction of U^+ on Ω_r from knowledge of F_1 on Ω_R, r < R. Opposed to this, the determination of F_1 from U^+ is known as upward continuation. The operator T^up : L²(Ω_r) → L²(Ω_R), given by

F_1(x) = T^up[U^+](x) = ∫_{Ω_r} K^up(x,y) U^+(y) dω(y),  x ∈ Ω_R,  (2.6)

with

K^up(x,y) = Σ_{n=0}^{∞} Σ_{k=1}^{2n+1} σ_n (1/R) Y_{n,k}(ξ) (1/r) Y_{n,k}(η)  (2.7)

and

σ_n = (r/R)^n,  (2.8)

describes the process of upward continuation. The downward continuation operator T^down represents the generalized inverse of T^up and acts in the following way:

U^+(x) = T^down[F_1](x) = ∫_{Ω_R} K^down(x,y) F_1(y) dω(y),  x ∈ Ω_r,  (2.9)

with

K^down(x,y) = Σ_{n=0}^{∞} Σ_{k=1}^{2n+1} (1/σ_n) (1/r) Y_{n,k}(ξ) (1/R) Y_{n,k}(η).  (2.10)

While T^up is a bounded operator, this is not true for T^down, which makes downward continuation an ill-posed problem. More precisely, it is exponentially ill-posed since 1/σ_n grows exponentially with respect to n. This is an undesirable situation, especially regarding the fact that F_1 is typically contaminated by noise. One way to deal with it is a multiscale representation where T^down is approximated by a sequence of bounded operators (see, e.g., [12], [14], [15], [31], and [38]). We assume Φ_N to be a scaling kernel of the form (1.5) with symbols Φ_N^∧(n) that satisfy

(a) lim_{N→∞} Φ_N^∧(n) = 1/σ_n, uniformly with respect to n = 0, 1, ...,
(b) Σ_{n=0}^{∞} ((2n+1)/(4π)) |Φ_N^∧(n)| < ∞, for all N = 0, 1, ....
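To put numbers to this exponential ill-posedness, a small sketch (the radii are those of the numerical test in Section 5; the truncated symbols are the Shannon-type choice, i.e., a plain truncated singular value decomposition, which satisfies (a) and (b) trivially because only finitely many symbols are nonzero at each scale):

```python
import numpy as np

r, R = 6371.2, 7071.2              # Earth's surface / satellite orbit radii (km)

def sigma(n):
    # singular values sigma_n = (r/R)^n of the upward continuation operator
    return (r / R) ** n

# amplification factor 1/sigma_n applied to degree-n noise
# during (unregularized) downward continuation
for n in (10, 50, 100):
    print(n, 1.0 / sigma(n))

def phi_shannon(n, N):
    # truncated-SVD scaling symbols: Phi_N^(n) = 1/sigma_n for n <= N, else 0
    return 1.0 / sigma(n) if n <= N else 0.0
```

Already at degree 100 the noise is amplified by more than four orders of magnitude, which is why the truncation degree (and, later, the optimized symbols) must balance resolution against noise amplification.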

The bounded scaling transform T_N : L²(Ω_R) → L²(Ω_r) is then defined via

T_N[F_1](x) = ∫_{Ω_R} Φ_N(x,y) F_1(y) dω(y),  x ∈ Ω_r,  (2.11)

and represents an approximation of T^down[F_1]. This operator can be refined further by use of the wavelet transform

W_N[F_1](x) = ∫_{Ω_R} Ψ_N(x,y) F_1(y) dω(y),  x ∈ Ω_r,  (2.12)

where the kernel Ψ_N is of the form (1.5) with symbols Ψ_N^∧(n) = Φ_{⌊κN⌋}^∧(n) − Φ_N^∧(n). An approximation of T^down[F_1] at the higher scale ⌊κN⌋, for some fixed κ > 1, is then given by

T_{⌊κN⌋}[F_1](x) = T_N[F_1](x) + W_N[F_1](x),  x ∈ Ω_r.  (2.13)

It has to be noted that the kernel Ψ_N and the wavelet transform W_N lack a tilde (as opposed to the representations (1.4) and (1.6)). This indicates that we have not taken the data F_2 in Γ_r into account yet. Operators and kernels with a tilde map information from Γ_r to Γ_r, while a lack of the tilde typically indicates the mapping of information on Ω_R to Γ_r (or Ω_r, respectively).

2.2 Combination of Downward Continuation and Local Data

In order to incorporate the data F_2 in Γ_r by use of a wavelet transform, it is necessary to rewrite (2.12). Observing that F_1 and F_2 are only specific expressions of U on the spheres Ω_R and Ω_r, respectively, we find

∫_{Ω_R} Ψ_N(x,y) F_1(y) dω(y) = ∫_{Ω_r} Ψ̃_N(x,y) F_2(y) dω(y),  x ∈ Ω_r,  (2.14)

where Ψ̃_N is of the form (1.6) with Ψ̃_N^∧(n) = σ_n Ψ_N^∧(n) = Φ_{⌊κN⌋}^∧(n)σ_n − Φ_N^∧(n)σ_n. We slightly modify the symbol Ψ̃_N^∧(n) by use of the auxiliary symbol Φ̃_N^∧(n), so that it reads

Ψ̃_N^∧(n) = Φ̃_N^∧(n) − Φ_N^∧(n)σ_n.  (2.15)

This has the effect that now two parameters are available, namely Φ_N^∧(n), which reflects the behaviour of the operator T_N responsible for the downward continuation, and Φ̃_N^∧(n), which offers a chance to control the localization of Ψ̃_N and the behaviour of W̃_N to a certain amount. The auxiliary symbol needs to satisfy

(a') lim_{N→∞} Φ̃_N^∧(n) = 1, uniformly with respect to n = 0, 1, ...,
(b') Σ_{n=0}^{∞} ((2n+1)/(4π)) |Φ̃_N^∧(n)| < ∞, for all N = 0, 1, ....

Remembering that F_2 is only available locally in Γ_r and paying tribute to (2.14), we define the wavelet transform

W̃_N[F_2](x) = ∫_{C_r(x,ρ)} Ψ̃_N(x,y) F_2(y) dω(y),  x ∈ Γ̃_r.  (2.16)

C_r(x,ρ) denotes the spherical cap {y ∈ Ω_r : 1 − (x/|x|)·(y/|y|) < ρ} with radius ρ ∈ (0,2) and center x ∈ Ω_r. The subset Γ̃_r ⊂ Γ_r is chosen such that C_r(x,ρ) ⊂ Γ_r for every x ∈ Γ̃_r and some ρ ∈ (0,2) that is fixed in advance. The set Γ̃_r has only been introduced so that (2.16) is well-defined (in the sense that F_2 is known everywhere in the integration region). One could circumvent this by simply integrating over all of Γ_r in (2.16) instead of the spherical cap C_r(x,ρ). The spherical cap, however, has the advantage that this way we can make use of the zonality of Ψ̃_N and reduce some spherical integrations to integrals over the intervals [−1,1−ρ] or [1−ρ,1] later on. Summing up, the relations (2.11)–(2.16) motivate

U_N = T_N[F_1] + W̃_N[F_2]  (2.17)

as an approximation of U^+ in Γ̃_r (compare (1.4) in the introduction).

3 The Minimizing Functional

The actual measurements on Ω_R and Γ_r are contaminated by noise, due to instrumental inaccuracies or due to undesired geophysical sources (in crustal magnetic field modeling, e.g., iono-/magnetospheric current systems and the Earth's core produce signals that cannot be filtered out entirely). Thus, we assume contaminated input data F_1^{ε_1} = F_1 + ε_1 E_1 and F_2^{ε_2} = F_2 + ε_2 E_2 with deterministic noise E_1 ∈ L²(Ω_R), E_2 ∈ L²(Ω_r), and ε_1, ε_2 > 0. The approximation (2.17) of U^+ in Γ̃_r is then modified to

U_N^ε = T_N[F_1^{ε_1}] + W̃_N[F_2^{ε_2}],  (3.1)

where ε stands short for (ε_1, ε_2)^T. It is the aim of this paper to find kernels Φ_N and Ψ̃_N (determined by the symbols Φ_N^∧(n) and Ψ̃_N^∧(n) = Φ̃_N^∧(n) − Φ_N^∧(n)σ_n, respectively) that keep the error ‖U^+ − U_N^ε‖_{L²(Γ̃_r)} small and allow some adaptations to possible a-priori

knowledge on F_1^{ε_1} and F_2^{ε_2} (such as noise level of the measurements and data density). We start with the estimate

‖U^+ − U_N^ε‖_{L²(Γ̃_r)}  (3.2)
≤ ‖U^+ − T_N[F_1] − W̃_N[F_2]‖_{L²(Γ̃_r)} + ‖T_N[F_1 − F_1^{ε_1}] + W̃_N[F_2 − F_2^{ε_2}]‖_{L²(Γ̃_r)}
≤ ‖(1 − T_N T^up − W̃_N)[U^+]‖_{L²(Γ̃_r)} + ε_1 ‖T_N[E_1]‖_{L²(Γ̃_r)} + ε_2 ‖W̃_N[E_2]‖_{L²(Γ̃_r)}.

The first term on the right hand side can be split up further in the following way:

‖(1 − T_N T^up − W̃_N)[U^+]‖_{L²(Γ̃_r)}  (3.3)
≤ (1/2) ‖(1 − T_N T^up)[U^+]‖_{L²(Γ̃_r)} + (1/2) ‖∫_{Ω_r} Ψ̃_N(·,y) U^+(y) dω(y)‖_{L²(Γ̃_r)}
+ (1/2) ‖∫_{Ω_r} (1 − Φ̃_N)(·,y) U^+(y) dω(y)‖_{L²(Γ̃_r)} + ‖∫_{Ω_r\C_r(·,ρ)} Ψ̃_N(·,y) U^+(y) dω(y)‖_{L²(Γ̃_r)}.

The last term on the right hand side of (3.3) simply compensates the extension of the integration region in the two preceding terms from C_r(x,ρ) to all of Ω_r. We continue with

‖(1 − T_N T^up − W̃_N)[U^+]‖_{L²(Γ̃_r)}  (3.4)
≤ (1/2) ‖Σ_{n=0}^{∞} Σ_{k=1}^{2n+1} (1 − Φ_N^∧(n)σ_n) U_r^∧(n,k) (1/r) Y_{n,k}‖_{L²(Ω_r)}
+ (1/2) ‖Σ_{n=0}^{∞} Σ_{k=1}^{2n+1} Ψ̃_N^∧(n) U_r^∧(n,k) (1/r) Y_{n,k}‖_{L²(Ω_r)}
+ (1/2) ‖Σ_{n=0}^{∞} Σ_{k=1}^{2n+1} (1 − Φ̃_N^∧(n)) U_r^∧(n,k) (1/r) Y_{n,k}‖_{L²(Ω_r)}
+ ‖∫_{Ω_r\C_r(ρ,·)} Ψ̃_N(·,y) U^+(y) dω(y)‖_{L²(Ω_r)}
≤ sup_{n=0,1,...} [ |1 − Φ_N^∧(n)σ_n| / (2(n + 1/2)^s) ] ‖U^+‖_{H_s(Ω_r)} + sup_{n=0,1,...} [ |1 − Φ̃_N^∧(n)| / (2(n + 1/2)^s) ] ‖U^+‖_{H_s(Ω_r)}
+ (1/2) sup_{n=0,1,...} |Ψ̃_N^∧(n)| ‖U^+‖_{L²(Ω_r)} + 2√2 πr² ‖Ψ̃_N‖_{L²([−1,1−ρ])} ‖U^+‖_{L²(Ω_r)}.

For the last estimate on the right hand side, we observe that, due to the zonality of the kernels, sup_{x∈Ω_r} ‖Ψ̃_N(x,·)‖_{L²(Ω_r\C_r(x,ρ))} coincides with √(2π) r ‖Ψ̃_N‖_{L²([−1,1−ρ])}, where

‖Ψ̃_N‖_{L²([−1,1−ρ])} = ( ∫_{−1}^{1−ρ} |Ψ̃_N(t)|² dt )^{1/2}.  (3.5)

Similar estimates can be obtained for the terms ε_1 ‖T_N[E_1]‖_{L²(Γ̃_r)} and ε_2 ‖W̃_N[E_2]‖_{L²(Γ̃_r)} in (3.2), so that we end up with an overall estimate

‖U^+ − U_N^ε‖_{L²(Γ̃_r)}  (3.6)
≤ ‖U^+‖_{H_s(Ω_r)} sup_{n=0,1,...} [ |1 − Φ_N^∧(n)σ_n| / (2(n + 1/2)^s) ] + ‖U^+‖_{H_s(Ω_r)} sup_{n=0,1,...} [ |1 − Φ̃_N^∧(n)| / (2(n + 1/2)^s) ]
+ ε_1 ‖E_1‖_{L²(Ω_R)} sup_{n=0,1,...} |Φ_N^∧(n)| + (1/2) sup_{n=0,1,...} |Ψ̃_N^∧(n)| ( ‖U^+‖_{L²(Ω_r)} + ε_2 ‖E_2‖_{L²(Ω_r)} )
+ 2√2 πr² ( ε_2 ‖E_2‖_{L²(Ω_r)} + ‖U^+‖_{L²(Ω_r)} ) ‖Ψ̃_N‖_{L²([−1,1−ρ])}.

Eventually, finding 'good' kernels Φ_N and Ψ̃_N reduces to finding symbols Φ_N^∧(n), Φ̃_N^∧(n) that keep the right hand side of (3.6) small (note that Ψ̃_N^∧(n) is given by Φ̃_N^∧(n) − Φ_N^∧(n)σ_n). We choose these symbols to be the minimizers of the functional

F(Φ_N, Ψ̃_N) = Σ_{n=0}^{⌊κN⌋} α̃_{N,n} |1 − Φ̃_N^∧(n)|² + Σ_{n=0}^{N} α_{N,n} |1 − Φ_N^∧(n)σ_n|²  (3.7)
+ β_N Σ_{n=0}^{N} |Φ_N^∧(n)|² + 8π²r⁴ ‖Ψ̃_N‖²_{L²([−1,1−ρ])},

with Φ_N being a member of Pol_N and Ψ̃_N a member of Pol_{⌊κN⌋}. The suprema in (3.6) have been changed to square sums to simplify the determination of the minimizers. All pre-factors appearing in (3.6) have been absorbed into the parameters α̃_{N,n}, α_{N,n}, and β_N. They decide how much emphasis is set on the approximation property, how much on the behaviour of the downward continuation, and how much on the localization of the kernel Ψ̃_N. More precisely, the first term on the right hand side of (3.7) reflects the overall approximation error (under the assumption that undisturbed global data is available on Ω_R as well as on Ω_r), while the second term only measures the error due to the downward continuation of undisturbed data on Ω_R. The third and fourth terms can be regarded as penalty terms reflecting the norm of the regularized downward continuation operator T_N and the localization of the wavelet kernel (i.e., the error made by neglecting information outside the spherical cap C_r(x,ρ)), respectively. The term in (3.6) that involves sup_{n=0,1,...} |Ψ̃_N^∧(n)| has been dropped for the definition of the functional F; it does not add any additional information for the optimization process. In Section 5, we test the approximation for different α̃_{N,n}, α_{N,n}, and β_N. Theorem 4.3 supplies some theoretical asymptotic conditions on the parameters in order to guarantee the convergence of U_N^ε towards U^+ as N → ∞.
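For fixed symbols, F is a quadratic form and can be evaluated directly; in particular, the localization penalty can be computed through the Gram matrix of Legendre polynomials on [−1, 1−ρ] (this is the identity exploited in Section 4). A sketch with r = 1 and illustrative toy parameters:

```python
import numpy as np
from numpy.polynomial import legendre

def gram_rho(deg, rho, quad=200):
    # P^rho_{n,m} = (2n+1)(2m+1)/2 * int_{-1}^{1-rho} P_n(t) P_m(t) dt,
    # computed with Gauss-Legendre quadrature mapped to [-1, 1-rho]
    x, w = legendre.leggauss(quad)
    t = 0.5 * (2.0 - rho) * x - 0.5 * rho
    wt = 0.5 * (2.0 - rho) * w
    P = np.stack([legendre.legval(t, [0] * n + [1]) for n in range(deg + 1)])
    fac = 2 * np.arange(deg + 1) + 1
    return 0.5 * np.outer(fac, fac) * ((P * wt) @ P.T)

def functional_F(phi, phi_tilde, sigma, alpha, alpha_t, beta, rho):
    # F(Phi_N, Psi~_N) as in (3.7), with unit radius r = 1;
    # psi holds the wavelet symbols Psi~^(n) = Phi~^(n) - Phi^(n)*sigma_n
    psi = phi_tilde - np.pad(phi * sigma, (0, len(phi_tilde) - len(phi)))
    G = gram_rho(len(phi_tilde) - 1, rho)
    return (alpha_t @ (1.0 - phi_tilde) ** 2
            + alpha @ (1.0 - phi * sigma) ** 2
            + beta * np.sum(phi ** 2)
            + psi @ G @ psi)   # = 8*pi^2*||Psi~_N||^2_{L2([-1,1-rho])}
```

For ρ = 0 the Gram matrix is diagonal by orthogonality of the Legendre polynomials, which gives a quick sanity check of the quadrature.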

4 Theoretical Results

Sections 2 and 3 have motivated the choice of the functional F (see (3.7)) and the approximation U_N^ε of U^+ (see (3.1)). In this section, we want to study the approximation

more rigorously with respect to its convergence. The general idea for the proof of the convergence stems from [36], where the optimization of approximate identity kernels has been treated. We start with a lemma indicating the solution of the minimization of the functional F.

Lemma 4.1. Assume that all parameters α̃_{N,n}, α_{N,n}, and β_N are positive. Then there exist unique minimizers Φ_N ∈ Pol_N and Ψ̃_N ∈ Pol_{⌊κN⌋} of the functional F in (3.7) that are determined by the symbols φ = (Φ_N^∧(0)σ_0, ..., Φ_N^∧(N)σ_N, Φ̃_N^∧(0), ..., Φ̃_N^∧(⌊κN⌋))^T, which solve the linear equations

Mφ = α,  (4.1)

where

M = [ D_1 + P_1   −P_2 ]
    [ −P_3   D_2 + P_4 ],  (4.2)

and α = (α_{N,0}, ..., α_{N,N}, α̃_{N,0}, ..., α̃_{N,⌊κN⌋})^T. The diagonal matrices D_1, D_2 are given by

D_1 = diag( β_N/σ_n² + α_{N,n} )_{n=0,...,N},   D_2 = diag( α̃_{N,n} )_{n=0,...,⌊κN⌋},  (4.3)

whereas P_1, ..., P_4 are submatrices of the Gram matrix (P_{n,m}^ρ)_{n,m=0,...,⌊κN⌋}. More precisely,

P_1 = (P_{n,m}^ρ)_{n,m=0,...,N},   P_2 = (P_{n,m}^ρ)_{n=0,...,N; m=0,...,⌊κN⌋},  (4.4)
P_3 = (P_{n,m}^ρ)_{n=0,...,⌊κN⌋; m=0,...,N},   P_4 = (P_{n,m}^ρ)_{n,m=0,...,⌊κN⌋},  (4.5)

with

P_{n,m}^ρ = ((2n+1)(2m+1)/2) ∫_{−1}^{1−ρ} P_n(t) P_m(t) dt.  (4.6)
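The linear system of Lemma 4.1 is finite-dimensional and can be assembled and solved directly. A numerical sketch (toy dimensions N = 5, ⌊κN⌋ = 7 and illustrative parameter values; the Gram matrix (4.6) is computed by Gauss–Legendre quadrature):

```python
import numpy as np
from numpy.polynomial import legendre

def gram_rho(deg, rho, quad=200):
    # Gram matrix P^rho_{n,m} from (4.6) via Gauss-Legendre quadrature
    x, w = legendre.leggauss(quad)
    t = 0.5 * (2.0 - rho) * x - 0.5 * rho        # map [-1,1] -> [-1, 1-rho]
    wt = 0.5 * (2.0 - rho) * w
    P = np.stack([legendre.legval(t, [0] * n + [1]) for n in range(deg + 1)])
    fac = 2 * np.arange(deg + 1) + 1
    return 0.5 * np.outer(fac, fac) * ((P * wt) @ P.T)

def optimal_symbols(N, kN, sigma, alpha, alpha_t, beta, rho):
    # Solve M phi = alpha from (4.1)-(4.5);
    # returns (Phi^(n)*sigma_n for n=0..N, Phi~^(n) for n=0..kN)
    G = gram_rho(kN, rho)
    P1, P2 = G[: N + 1, : N + 1], G[: N + 1, :]
    P3, P4 = G[:, : N + 1], G
    D1 = np.diag(beta / sigma[: N + 1] ** 2 + alpha)
    D2 = np.diag(alpha_t)
    M = np.block([[D1 + P1, -P2], [-P3, D2 + P4]])
    a = np.concatenate([alpha, alpha_t])
    sol = np.linalg.solve(M, a)
    return sol[: N + 1], sol[N + 1 :]

N, kN = 5, 7
sigma = (6371.2 / 7071.2) ** np.arange(kN + 1)
phs, pht = optimal_symbols(N, kN, sigma,
                           alpha=1e6 * np.ones(N + 1),
                           alpha_t=1e6 * np.ones(kN + 1),
                           beta=1.0, rho=0.5)
```

For large α_{N,n}, α̃_{N,n} relative to β_N and the Gram entries, the solution is pushed towards Φ_N^∧(n)σ_n ≈ 1 and Φ̃_N^∧(n) ≈ 1, i.e., towards the Shannon-type symbols used as comparison kernels in the convergence proof below.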

Proof. First we observe that the representation (1.6) of the kernel Ψ̃_N and its zonality together with the addition theorem for spherical harmonics imply

8π²r⁴ ‖Ψ̃_N‖²_{L²([−1,1−ρ])}  (4.7)
= (8π²r⁴/r⁴) ∫_{−1}^{1−ρ} | Σ_{n=0}^{⌊κN⌋} ((2n+1)/(4π)) (Φ̃_N^∧(n) − Φ_N^∧(n)σ_n) P_n(t) |² dt
= Σ_{n=0}^{⌊κN⌋} Σ_{m=0}^{⌊κN⌋} ((2n+1)(2m+1)/2) (Φ̃_N^∧(n) − Φ_N^∧(n)σ_n)(Φ̃_N^∧(m) − Φ_N^∧(m)σ_m) ∫_{−1}^{1−ρ} P_n(t) P_m(t) dt
= Σ_{n=0}^{⌊κN⌋} Σ_{m=0}^{⌊κN⌋} (Φ̃_N^∧(m) − Φ_N^∧(m)σ_m)(Φ̃_N^∧(n) − Φ_N^∧(n)σ_n) P_{n,m}^ρ.

Inserting (4.7) into (3.7) and then differentiating the whole expression with respect to Φ_N^∧(n)σ_n and Φ̃_N^∧(n) leads to

−2α_{N,n} (1 − Φ_N^∧(n)σ_n) + 2β_N (Φ_N^∧(n)σ_n)/σ_n² − 2 Σ_{m=0}^{⌊κN⌋} (Φ̃_N^∧(m) − Φ_N^∧(m)σ_m) P_{n,m}^ρ,  (4.8)

for n = 0, ..., N, and

−2α̃_{N,n} (1 − Φ̃_N^∧(n)) + 2 Σ_{m=0}^{⌊κN⌋} (Φ̃_N^∧(m) − Φ_N^∧(m)σ_m) P_{n,m}^ρ,  (4.9)

for n = 0, ..., ⌊κN⌋, respectively. Setting the two expressions above equal to zero, a proper reordering leads to the linear equation (4.1). At last, we observe that the matrix (P_{n,m}^ρ)_{n,m=0,...,⌊κN⌋} is positive definite (since it represents a Gram matrix of linearly independent functions) and that all appearing diagonal matrices are positive definite due to positive matrix entries. Thus, the matrix M is positive definite and the linear system (4.1) is uniquely solvable and leads to a minimum of (3.7).

Now that the existence of optimized kernels Φ_N, Ψ̃_N is guaranteed, we can continue with the statement of convergence for the corresponding approximation (cf. Theorem 4.3). For the proof we need a localization result for Shannon-type kernels, more precisely, a variation of the (spherical) Riemann localization property. We borrow this result as a special case from [49].

Proposition 4.2. If F ∈ H_s(Ω_r), s ≥ 2, it holds that

lim_{N→∞} ‖ ∫_{Ω_r\C_r(ρ,·)} Ψ̃_N^Sh(·,y) F(y) dω(y) ‖_{L²(Ω_r)} = 0,  (4.10)

where Ψ̃_N^Sh is the Shannon-type kernel with symbols (Ψ̃_N^Sh)^∧(n) = 1, if N+1 ≤ n ≤ ⌊κN⌋, and (Ψ̃_N^Sh)^∧(n) = 0, else.

Theorem 4.3. Assume that the parameters α_{N,n}, α̃_{N,n}, and β_N are positive and suppose that, for some fixed δ > 0 and κ > 1,

[ β_N (1 − (R/r)^{2(N+1)}) / (1 − (R/r)²) + (⌊κN⌋+1)² ] / inf_{n=0,...,⌊κN⌋} α_{N,n} = O(N^{−2(1+δ)}),  for N → ∞.  (4.11)

The same relation shall hold true for α̃_{N,n}. Additionally, let every N be associated with an ε_1 = ε_1(N) > 0 and an ε_2 = ε_2(N) > 0 such that

lim_{N→∞} ε_1 (R/r)^N = lim_{N→∞} ε_2 N = 0.  (4.12)

The functions F_1 : Ω_R → R and F_2 : Γ_r → R, r < R, are supposed to be such that a unique solution U of (1.1)–(1.3) exists and that the restriction U^+ is of class H_s(Ω_r), for some fixed s ≥ 2. The erroneous input data is given by F_1^{ε_1} = F_1 + ε_1 E_1 and F_2^{ε_2} = F_2 + ε_2 E_2, with E_1 ∈ L²(Ω_R) and E_2 ∈ L²(Ω_r). If the kernels Φ_N ∈ Pol_N and Ψ̃_N ∈ Pol_{⌊κN⌋} are the minimizers of the functional F from (3.7) and if U_N^ε is given as in (3.1), then

lim_{N→∞} ‖U^+ − U_N^ε‖_{L²(Γ̃_r)} = 0.  (4.13)

Proof. As an auxiliary set of kernels, we define the Shannon-type kernels Φ_N^Sh and Ψ̃_N^Sh via the symbols

(Φ_N^Sh)^∧(n) = 1/σ_n, if n ≤ N, and 0, else;   (Φ̃_N^Sh)^∧(n) = 1, if n ≤ ⌊κN⌋, and 0, else,  (4.14)

and (Ψ̃_N^Sh)^∧(n) = (Φ̃_N^Sh)^∧(n) − (Φ_N^Sh)^∧(n)σ_n. Φ_N^Sh represents the kernel of the so-called truncated singular value decomposition for the downward continuation operator T^down. By using the addition theorem for spherical harmonics and properties of the Legendre polynomials, we obtain

F(Φ_N^Sh, Ψ̃_N^Sh) = β_N Σ_{n=0}^{N} 1/σ_n² + 8π²r⁴ ‖Ψ̃_N^Sh‖²_{L²([−1,1−ρ])}  (4.15)
≤ β_N (1 − (R/r)^{2(N+1)}) / (1 − (R/r)²) + 8π² Σ_{n=0}^{⌊κN⌋} ((2n+1)/(4π))² ‖P_n‖²_{L²([−1,1])}
= β_N (1 − (R/r)^{2(N+1)}) / (1 − (R/r)²) + (⌊κN⌋+1)².

The kernels that minimize F are denoted by Φ_N and Ψ̃_N. In consequence,

α̃_{N,n} |1 − Φ̃_N^∧(n)|² ≤ F(Φ_N, Ψ̃_N) ≤ F(Φ_N^Sh, Ψ̃_N^Sh) ≤ β_N (1 − (R/r)^{2(N+1)}) / (1 − (R/r)²) + (⌊κN⌋+1)²,  (4.16)

for all n ≤ ⌊κN⌋. In combination with (4.11), this leads to

|1 − Φ̃_N^∧(n)| = O(N^{−1−δ}),  (4.17)

uniformly for all n ≤ ⌊κN⌋ and N → ∞. The estimate |1 − Φ_N^∧(n)σ_n| = O(N^{−1−δ}) follows in the same manner and holds uniformly for all n ≤ N and N → ∞. Finally, since Φ̃_N^∧(n) = 0 for n > ⌊κN⌋, we end up with

lim_{N→∞} sup_{n=0,1,...} |1 − Φ̃_N^∧(n)| / (n + 1/2)^s = 0,  (4.18)

which shows that the first term on the right hand side of the error estimate (3.6) vanishes. The second term can be treated analogously. Due to the uniform boundedness of |Φ_N^∧(n)σ_n|, there exists some constant C > 0 such that (4.12) implies

lim_{N→∞} ε_1 sup_{n=0,1,...} |Φ_N^∧(n)| ≤ C lim_{N→∞} ε_1 (R/r)^N = 0,  (4.19)

so that the third term on the right hand side of (3.6) vanishes as well. The fourth term does not vanish. However, taking a closer look at the derivation of this term, we see that it suffices to show that ‖Σ_{n=0}^{∞} Σ_{k=1}^{2n+1} Ψ̃_N^∧(n) F_r^∧(n,k) (1/r) Y_{n,k}‖_{L²(Ω_r)} tends to zero (where F stands for U^+ or E_2). Using the previous results and |Ψ̃_N^∧(n)| ≤ |1 − Φ_N^∧(n)σ_n| + |1 − Φ̃_N^∧(n)|, it follows that

‖Σ_{n=0}^{∞} Σ_{k=1}^{2n+1} Ψ̃_N^∧(n) F_r^∧(n,k) (1/r) Y_{n,k}‖²_{L²(Ω_r)} ≤ O(N^{−2δ}) + C Σ_{n=N+1}^{⌊κN⌋} Σ_{k=1}^{2n+1} |F_r^∧(n,k)|²,  (4.20)

for some constant C > 0. The latter term vanishes for N → ∞ since F (i.e., U^+ or E_2) is of class L²(Ω_r). In order to handle the last term of the error estimate (3.6), it would have to be shown that ‖Ψ̃_N‖_{L²([−1,1−ρ])} tends to zero. This, again, is generally not true. But taking a closer look at the derivation of the estimate indicates that it suffices to show that

‖∫_{Ω_r\C_r(ρ,·)} Ψ̃_N(·,y) U^+(y) dω(y)‖_{L²(Ω_r)},   ε_2 ‖∫_{Ω_r\C_r(ρ,·)} Ψ̃_N(·,y) E_2(y) dω(y)‖_{L²(Ω_r)}  (4.21)

vanish as N → ∞. For the left expression we obtain

‖∫_{Ω_r\C_r(ρ,·)} Ψ̃_N(·,y) U^+(y) dω(y)‖_{L²(Ω_r)}  (4.22)
≤ ‖∫_{Ω_r\C_r(ρ,·)} Ψ̃_N^Sh(·,y) U^+(y) dω(y)‖_{L²(Ω_r)} + ‖∫_{Ω_r\C_r(ρ,·)} (Ψ̃_N^Sh(·,y) − Ψ̃_N(·,y)) U^+(y) dω(y)‖_{L²(Ω_r)}
≤ ‖∫_{Ω_r\C_r(ρ,·)} Ψ̃_N^Sh(·,y) U^+(y) dω(y)‖_{L²(Ω_r)} + 2πr² ‖Ψ̃_N^Sh − Ψ̃_N‖_{L²([−1,1−ρ])} ‖U^+‖_{L¹(Ω_r)},

where Young’s inequality has been used in the last row. Since U + ∈ Hs (Ωr ), s ≥ 2, Proposition 4.2 implies that the first term on the right hand side of (4.22) tends to zero


as N → ∞. The second term on the right hand side of (4.22) can be treated as follows:

‖Ψ̃_N^Sh − Ψ̃_N‖²_{L²([−1,1−ρ])}  (4.23)
≤ Σ_{n=0}^{⌊κN⌋} ((2n+1)/(4π))² |(Ψ̃_N^Sh)^∧(n) − Ψ̃_N^∧(n)|² ‖P_n‖²_{L²([−1,1])}
= Σ_{n=0}^{⌊κN⌋} ((2n+1)/(8π²)) |χ_{[0,N]}(n)(Φ_N^∧(n)σ_n − 1) + Φ̃_N^∧(n) − 1|²
= Σ_{n=0}^{⌊κN⌋} ((2n+1)/(8π²)) O(N^{−2(1+δ)}) = O(N^{−2δ}).

By χ_{[0,N]}(n) we mean the characteristic function, i.e., it is equal to 1 if n = 0, ..., N, and equal to 0 otherwise. In conclusion, the left expression in (4.21) vanishes as N → ∞. For the right expression in (4.21), we obtain the desired result in a similar manner if we know that it holds true for the Shannon-type kernel. Again, using Young's inequality, it follows that

ε_2 ‖∫_{Ω_r\C_r(ρ,·)} Ψ̃_N^Sh(·,y) E_2(y) dω(y)‖_{L²(Ω_r)} ≤ 2πr² ε_2 ‖Ψ̃_N^Sh‖_{L²([−1,1−ρ])} ‖E_2‖_{L¹(Ω_r)}.  (4.24)

In (4.15) we have already seen that ‖Ψ̃_N^Sh‖_{L²([−1,1−ρ])} ≤ CN, for some constant C > 0, so that (4.12) implies that (4.24) tends to zero as N → ∞. Finally, combining all steps of the proof, we have shown that the right hand side of the error estimate (3.6) converges to zero, which yields (4.13).

Remark 4.4. The condition U^+ ∈ H_s(Ω_r), s ≥ 2, in Theorem 4.3 can be relaxed to U^+ ∈ H_s(Ω_r), s > 0, if the minimizing functional F in (3.7) is substituted by

F^Filter(Φ_N, Ψ̃_N) = Σ_{n=0}^{⌊κN⌋} α̃_{N,n} |K_N^∧(n) − Φ̃_N^∧(n)|² + Σ_{n=0}^{N} α_{N,n} |K_N^∧(n) − Φ_N^∧(n)σ_n|²  (4.25)
+ β_N Σ_{n=0}^{N} |Φ_N^∧(n)|² + 8π²r⁴ ‖Ψ̃_N‖²_{L²([−1,1−ρ])},

where K_N^∧(n) are the symbols of a filtered kernel (compare, e.g., [50] and references therein for more information)

Φ̃_N^Filter(x,y) = Σ_{n=0}^{N} Σ_{k=1}^{2n+1} K_N^∧(n) Y_{n,k}(x/|x|) Y_{n,k}(y/|y|).  (4.26)

Then the proof of Theorem 4.3 follows in a very similar manner as before, just that the minimizing kernels are not compared to the Shannon-type kernel but to the filtered kernel. Opposed to the Shannon-type kernel, an appropriately filtered kernel has the localization property lim_{N→∞} ‖Φ̃_N^Filter‖_{L¹([−1,1−ρ])} = 0, which makes the condition s ≥ 2 on the smoothness of U^+ (required for Proposition 4.2) obsolete.

To finish this section, we want to comment on the localization of the kernel Ψ̃_N. While we have used ‖Ψ̃_N‖_{L²([−1,1−ρ])} as a measure for the localization inside a spherical cap C_r(·,ρ) (values close to zero meaning a good localization, i.e., small leakage of information into Ω_r \ C_r(·,ρ)), a more suitable quantity to consider would be

‖Ψ̃_N‖_{L²([−1,1−ρ])} / ‖Ψ̃_N‖_{L²([−1,1])}.  (4.27)

This is essentially the expression that is minimized for the construction of Slepian functions (see, e.g., [39], [40], [43], [44]). However, using (4.27) as a penalty term in the functional F from (3.7) would make it significantly harder to find its minimizers. Furthermore, it turns out that the kernel Ψ̃_N that minimizes the original functional actually keeps the quantity (4.27) small as well (at least asymptotically, in the sense that (4.27) vanishes for N → ∞). The latter essentially originates from the property that Ψ̃_N converges to a Shannon-type kernel (which has the desired property).
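For the Shannon-type wavelet kernel (symbols equal to one on the band N+1 ≤ n ≤ ⌊κN⌋, the comparison kernel of this section), the concentration ratio (4.27) can be checked numerically; it visibly decreases as N grows. A sketch (unit-sphere normalization; the parameter values κ = 1.25, ρ = 0.5 are illustrative assumptions):

```python
import numpy as np
from numpy.polynomial import legendre

def shannon_ratio(N, kappa=1.25, rho=0.5, quad=256):
    # ratio (4.27) for the Shannon-type wavelet kernel with symbols
    # equal to 1 for N+1 <= n <= floor(kappa*N) and 0 otherwise
    M = int(np.floor(kappa * N))
    coeffs = np.zeros(M + 1)
    coeffs[N + 1:] = (2 * np.arange(N + 1, M + 1) + 1) / (4 * np.pi)

    def norm2(a, b):
        # squared L2 norm of the kernel over [a, b] via Gauss-Legendre
        x, w = legendre.leggauss(quad)
        t = 0.5 * (b - a) * x + 0.5 * (b + a)
        return np.sum(0.5 * (b - a) * w * legendre.legval(t, coeffs) ** 2)

    return norm2(-1.0, 1.0 - rho) / norm2(-1.0, 1.0)

print(shannon_ratio(10), shannon_ratio(80))
```

The decrease of the ratio with N reflects the increasing concentration of the wavelet kernel inside the cap, in line with Lemma 4.5 below (which states the analogous asymptotic result for the optimized kernels).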

Lemma 4.5. Assume that the parameters α_{N,n}, α̃_{N,n}, and β_N are positive and suppose that, for some fixed δ > 0 and κ > 1,

[ β_N (1 − (R/r)^{2(N+1)}) / (1 − (R/r)²) + (⌊κN⌋+1)² ] / inf_{n=0,...,⌊κN⌋} α_{N,n} = O(N^{−2(1+δ)}),  for N → ∞.  (4.28)

The same relation shall hold true for α̃_{N,n}. If the scaling kernel Φ_N ∈ Pol_N and the wavelet kernel Ψ̃_N ∈ Pol_{⌊κN⌋} are the minimizers of the functional F from (3.7), then

lim_{N→∞} ‖Ψ̃_N‖²_{L²([−1,1−ρ])} / ‖Ψ̃_N‖²_{L²([−1,1])} = 0.  (4.29)

˜ N (t)|2 /kΨ ˜ N k2 2 Proof. The function FN (t) = |Ψ L ([−1,1]) can be regarded as the density function of a random variable t ∈ [−1, 1]. Thus, we can write

Z 1−ρ ˜ N 2 2

Ψ  L ([−1,1−ρ]) FN (t)dt = PN (t < 1 − ρ) = 1 − PN t − EN (t) ≥ 1 − ρ − EN (t) =

2

Ψ ˜N 2 −1 L ([−1,1])



VN (t) , VN (t) + 1 − ρ − EN (t)

(4.30)

where we have used Cantelli’s inequality for the last estimate. By PN (t < a) we mean the probability (with respect to the density function FN ) that t lies in the interval [−1, a) while PN (t ≥ a) means the probability of t being in the interval [a, 1]. Furthermore, EN (t) denotes the expected value of t and VN (t) = E(t2 ) − (E(t))2 the variance of t. In other words, we are done if we can show that limN →∞ EN (t) = 1 and limN →∞ VN (t) = 0.


First, we use the addition theorem for spherical harmonics and the recurrence relation $tP_n(t)=\frac{1}{2n+1}\big((n+1)P_{n+1}(t)+nP_{n-1}(t)\big)$ to obtain

$$\begin{aligned}\int_{-1}^{1}t\,|\tilde{\Psi}_N(t)|^2\,dt&=\frac{1}{r^2}\sum_{n=0}^{\lfloor\kappa N\rfloor}\sum_{m=0}^{\lfloor\kappa N\rfloor}\frac{2n+1}{4\pi}\,\frac{2m+1}{4\pi}\,\tilde{\Psi}_N^\wedge(n)\,\tilde{\Psi}_N^\wedge(m)\int_{-1}^{1}tP_n(t)P_m(t)\,dt\\&=\sum_{n=0}^{\lfloor\kappa N\rfloor}\frac{2n+1}{16\pi^2r^2}\,\tilde{\Psi}_N^\wedge(n)\big(n\tilde{\Psi}_N^\wedge(n-1)+(n+1)\tilde{\Psi}_N^\wedge(n+1)\big)\int_{-1}^{1}|P_n(t)|^2\,dt\\&=\frac{1}{8\pi^2r^2}\sum_{n=0}^{\lfloor\kappa N\rfloor}n\tilde{\Psi}_N^\wedge(n)\tilde{\Psi}_N^\wedge(n-1)+(n+1)\tilde{\Psi}_N^\wedge(n)\tilde{\Psi}_N^\wedge(n+1).\end{aligned}\tag{4.31}$$

From (4.17) and the corresponding estimate for $\Phi_N^\wedge(n)\sigma_n$ we find $|\tilde{\Psi}_N^\wedge(n)|^2=O(N^{-2(1+\delta)})$ for $n\le N$, and $|1-\tilde{\Psi}_N^\wedge(n)|^2=|1-\tilde{\Phi}_N^\wedge(n)|^2=O(N^{-2(1+\delta)})$ for $N+1\le n\le\lfloor\kappa N\rfloor$. As a consequence, (4.31) implies

$$\int_{-1}^{1}t\,|\tilde{\Psi}_N(t)|^2\,dt=O(N^{-2\delta})+\sum_{n=N+1}^{\lfloor\kappa N\rfloor}\frac{2n+1}{8\pi^2r^2}\,|\tilde{\Psi}_N^\wedge(n)|^2\left(\frac{n}{2n+1}\,\frac{\tilde{\Psi}_N^\wedge(n-1)}{\tilde{\Psi}_N^\wedge(n)}+\frac{n+1}{2n+1}\,\frac{\tilde{\Psi}_N^\wedge(n+1)}{\tilde{\Psi}_N^\wedge(n)}\right),\tag{4.32}$$

where $\sup_{N+1\le n\le\lfloor\kappa N\rfloor}\big|1-\frac{n}{2n+1}\tilde{\Psi}_N^\wedge(n-1)/\tilde{\Psi}_N^\wedge(n)-\frac{n+1}{2n+1}\tilde{\Psi}_N^\wedge(n+1)/\tilde{\Psi}_N^\wedge(n)\big|\le\varepsilon_N$, for some $\varepsilon_N>0$ that satisfies $\varepsilon_N\to0$ as $N\to\infty$. The term $\|\tilde{\Psi}_N\|^2_{L^2([-1,1])}$ can be expressed similarly by

$$\|\tilde{\Psi}_N\|^2_{L^2([-1,1])}=\sum_{n=0}^{\lfloor\kappa N\rfloor}\frac{2n+1}{8\pi^2r^2}\,|\tilde{\Psi}_N^\wedge(n)|^2=O(N^{-2\delta})+\sum_{n=N+1}^{\lfloor\kappa N\rfloor}\frac{2n+1}{8\pi^2r^2}\,|\tilde{\Psi}_N^\wedge(n)|^2.\tag{4.33}$$

Thus, combining (4.31)–(4.33), we obtain

$$\lim_{N\to\infty}E_N(t)=\lim_{N\to\infty}\int_{-1}^{1}t\,F_N(t)\,dt=\lim_{N\to\infty}\frac{\int_{-1}^{1}t\,|\tilde{\Psi}_N(t)|^2\,dt}{\|\tilde{\Psi}_N\|^2_{L^2([-1,1])}}=1.\tag{4.34}$$

In a similar fashion it can be shown that $\lim_{N\to\infty}E_N(t^2)=1$, implying $\lim_{N\to\infty}V_N(t)=0$, which concludes the proof.
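The three-term recurrence used at the start of the proof can be spot-checked numerically; it is also the reason why $\int_{-1}^{1}tP_n(t)P_m(t)\,dt$ vanishes unless $m=n\pm1$. An illustrative snippet (ours, not part of the paper):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

# three-term recurrence: t P_n(t) = ((n+1) P_{n+1}(t) + n P_{n-1}(t)) / (2n+1)
t = np.linspace(-1.0, 1.0, 201)
for n in range(1, 12):
    lhs = t * Legendre.basis(n)(t)
    rhs = ((n + 1) * Legendre.basis(n + 1)(t) + n * Legendre.basis(n - 1)(t)) / (2 * n + 1)
    assert np.allclose(lhs, rhs)
print("recurrence verified for n = 1..11")
```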

5 Numerical Test

We use the MF7 model of the Earth's crustal magnetic field as a test example (cf. [33] for details on the previous version MF6; the actual MF7 model is available online at http://www.geomag.us/models/MF7.html).

$U^+$, $F_1$, and $F_2$ are generated from the supplied Fourier coefficients up to spherical harmonic degree 100. The noise $E_1$, $E_2$ is produced by random Fourier coefficients up to spherical harmonic degree 110 and is then scaled such that $\|E_1\|_{L^2(\Omega_R)}=\|F_1\|_{L^2(\Omega_R)}$ and $\|E_2\|_{L^2(\Gamma_r)}=\|F_2\|_{L^2(\Gamma_r)}$. The mean Earth radius is given by $r=6371.2$ km; for the satellite orbit we choose $R=7071.2$ km (i.e., a satellite altitude of 700 km above the Earth's surface). We fix $N=80$ for the approximation $U_N^\varepsilon$ and choose $\kappa$ such that $\lfloor\kappa N\rfloor=100$. Then $U_N^\varepsilon$ is computed for the following different settings:

(1) data $F_1$ is available on all of $\Omega_R$; data $F_2$ is given in a spherical cap $\Gamma_r=C_r(x_0,\varrho)$ around the North pole $x_0$ such that we can assume to have data available in every spherical cap $C_r(x,\rho)$, $x\in\tilde{\Gamma}_r=C_r(x_0,\varrho-\rho)$, with varying radius $\rho=0.5,0.1,0.01$,

(2) the parameters of the functional $\mathcal{F}$ are chosen among the cases $\beta_N=10^{-3},\ldots,10^{2}$, $\tilde{\alpha}_{N,n}=10^{-3},\ldots,10^{4}$, as well as $\alpha_{N,n}=\tilde{\alpha}_{N,n}$ or $\alpha_{N,n}=\frac{1}{5}\tilde{\alpha}_{N,n}$,

(3) the noise level on $\Omega_R$ is varied among $\varepsilon_1=0.001,0.01,0.05,0.1$; for the noise in $\Gamma_r$ we choose $\varepsilon_2=\gamma\varepsilon_1$, with $\gamma=1,2,5$.

For the numerical integration required for the evaluation of $U_N^\varepsilon$ we use the scheme from [8] on $\Omega_R$ and the scheme from [25] in $C_r(x,\rho)$. Since both integration methods are polynomially exact and since the input data and the kernels $\Phi_N$ and $\tilde{\Psi}_N$ are bandlimited, we do not obtain any error from the numerical integration (the discrete points on $\Omega_R$ and in $C_r(x,\rho)$ at which $F_1$ and $F_2$ are available are chosen such that [8] and [25] can be applied). Thus, any error produced during the approximation procedure comes either from the data noise or from an insufficient localization of $\tilde{\Psi}_N$ in $C_r(x,\rho)$.

As a reference for the optimized kernels we also compute the approximation $U_M^{\varepsilon,Sh}$ for Shannon-type kernels: namely, we choose $\Phi_M^{Sh}$ with symbols $(\Phi_M^{Sh})^\wedge(n)=\frac{1}{\sigma_n}$, if $n\le M$, and $(\Phi_M^{Sh})^\wedge(n)=0$, else, as well as $\tilde{\Psi}_M^{Sh}$ with symbols $(\tilde{\Psi}_M^{Sh})^\wedge(n)=1$, if $M+1\le n\le100$, and $(\tilde{\Psi}_M^{Sh})^\wedge(n)=0$, else. The cut-off degree $M$ is varied among $M=0,30,50,80$. The Shannon-type kernels form a reasonable reference since the optimization of $\Phi_N$, $\tilde{\Psi}_N$ via $\mathcal{F}$ is done with respect to a Shannon-type situation.

The actual results of the numerical test are supplied in Tables 1–3. Each table refers to a different spherical cap radius $\rho$ in which we assume the data $F_2$ to be given. The second and third column always show the Shannon-type kernels that performed best, the fourth and fifth column the optimized kernel for two particular choices of parameters $\beta_N$, $\tilde{\alpha}_{N,n}$, $\alpha_{N,n}$. Table 4 shows the results if we use satellite data only and Shannon-type kernels $\Phi_M^{Sh}$ with $M=50,60,70,80,100$. This essentially represents a truncated singular value decomposition (TSVD) of the downward continuation problem (the corresponding approximation is denoted by $U_M^{\varepsilon,TSVD}$). We can make the following observations:

(1) For a good set of parameters $\beta_N$, $\tilde{\alpha}_{N,n}$, $\alpha_{N,n}$, the optimized kernels yield better results than the Shannon-type kernels (the smallest relative errors are indicated in italic in Tables 1–3).

(2) A single parameter choice (e.g., $\beta_N=10^1$, $\tilde{\alpha}_{N,n}=5\alpha_{N,n}=10^4$ in Tables 1 and 2) can perform well over a wide range of different settings, i.e., different $\varepsilon_1$, $\varepsilon_2$, $\rho$.

(3) For small noise levels $\varepsilon_1=0.001,0.01$, the main error source is the localization of the kernel $\tilde{\Psi}_N$, which can be seen from the fact that the relative error of our approximations does not change significantly between these two noise levels. Furthermore, Figure 2(a) indicates that the parameters $\beta_N$, $\tilde{\alpha}_{N,n}$, $\alpha_{N,n}$ that perform well for noise levels $\varepsilon_1=\varepsilon_2$ favour ground data in $C_r(x,\rho)$ over satellite data on $\Omega_R$: the symbols $\Phi_N^\wedge(n)$ that are responsible for the downward continuation are clearly damped for degrees $n\ge20$, while $\tilde{\Psi}_N^\wedge(n)$ has significant influence for degrees $n\ge20$ (dotted red lines in Figure 2(a)). Opposed to that, if $\varepsilon_2>\varepsilon_1$ (i.e., the noise level in $C_r(x,\rho)$ is higher than the noise level on $\Omega_R$), more influence is given to the downward continuation via $\Phi_N^\wedge(n)$, and $\tilde{\Psi}_N^\wedge(n)$ gains influence not before degrees $n\ge30$ (solid red lines in Figure 2(a)).

(4) For a small spherical cap radius (e.g., $\rho=0.01$ in Table 3) the results are more sensitive to the parameter choice $\beta_N$, $\tilde{\alpha}_{N,n}$, $\alpha_{N,n}$. When the noise level is small and when $\varepsilon_1=\varepsilon_2$, then those parameters are preferred that yield a stronger focus on the ground data in $C_r(x,\rho)$ and a stronger damping of the satellite data on $\Omega_R$ (similar to the situation described in (3); compare dotted red lines in Figure 2(b)). When the noise level $\varepsilon_2>\varepsilon_1$, then significantly more influence is given to the satellite data: $\Phi_N^\wedge(n)$ closely follows the behaviour of the Shannon-type kernel, i.e., the truncated singular value decomposition (compare the solid red line in the left plot of Figure 2(b)). For $\tilde{\Psi}_N^\wedge(n)$ we see that its influence is shifted towards higher and higher degrees $n$ and that the spectral behaviour becomes inconsistent ($\tilde{\Psi}_N^\wedge(n)$ is smaller than 0 for $n\approx5$ and larger than 1 for $n\approx70,80$; solid red line in the right plot of Figure 2(b)).
This might be an indicator that the radius $\rho$ should not be significantly smaller than 0.01 in order to obtain reasonable results for cut-off degree $\lfloor\kappa N\rfloor=100$ ($\rho=0.01$ represents a radius of around 900 km at the Earth's surface, while spherical harmonic degrees $n\le100$ correspond to wavelengths of more than 400 km). In other words, $\lfloor\kappa N\rfloor$ should be enlarged if data is only available in a spherical cap $C_r(x,\rho)$ with $\rho$ significantly smaller than 0.01.

(5) Table 4 shows that the usage of global satellite data only leads to worse results than a combination of satellite data on $\Omega_R$ and local/regional ground or near-ground data in $\Gamma_r$ (be it via Shannon-type or via optimized kernels). An exception is given for large noise levels $\varepsilon_2>\varepsilon_1$ in $\Gamma_r$ (e.g., $\varepsilon_2=5\varepsilon_1$, $\varepsilon_1=0.1$; cf. last rows in Tables 3(c) and 4). However, the optimized kernels still behave slightly better.

Concluding, we see that for a good parameter choice $\beta_N$, $\tilde{\alpha}_{N,n}$, $\alpha_{N,n}$ the optimized kernels yield better results than Shannon-type kernels. However, the localization in $C_r(x,\rho)$ does not necessarily improve dramatically (as seen from the fact that in most cases the improvement for low noise levels $\varepsilon_1$, $\varepsilon_2$ is rather small), simply because the optimized kernels are connected to Shannon-type kernels by the functional $\mathcal{F}$. A simple way out is given in Remark 4.4: instead of optimizing with respect to a Shannon-type kernel, one can optimize with respect to some filtered kernel with better localization properties. This way, the kernels $\Phi_N$, $\tilde{\Psi}_N$ inherit the localization properties of the

(a) ε2 = ε1

  ε1     | Shannon: M=0 | Shannon: M=30 | Optimized: βN=10^1, α̃N,n=5αN,n=10^4 | Optimized: βN=10^2, α̃N,n=αN,n=10^4
  0.001  | 0.022        | 0.058         | 0.022                                | 0.022
  0.01   | 0.024        | 0.057         | 0.024                                | 0.023
  0.05   | 0.051        | 0.067         | 0.045                                | 0.045
  0.1    | 0.095        | 0.094         | 0.081                                | 0.082

(b) ε2 = 2ε1

  ε1     | Shannon: M=0 | Shannon: M=30 | Optimized: βN=10^1, α̃N,n=5αN,n=10^4 | Optimized: βN=10^1, α̃N,n=αN,n=10^4
  0.001  | 0.022        | 0.058         | 0.023                                | 0.022
  0.01   | 0.029        | 0.058         | 0.026                                | 0.027
  0.05   | 0.095        | 0.090         | 0.079                                | 0.078
  0.1    | 0.186        | 0.155         | 0.153                                | 0.152

(c) ε2 = 5ε1

  ε1     | Shannon: M=0 | Shannon: M=30 | Optimized: βN=10^1, α̃N,n=5αN,n=10^4 | Optimized: βN=10^0, α̃N,n=5αN,n=10^4
  0.001  | 0.023        | 0.057         | 0.022                                | 0.023
  0.01   | 0.051        | 0.066         | 0.043                                | 0.041
  0.05   | 0.231        | 0.186         | 0.189                                | 0.179
  0.1    | 0.461        | 0.363         | 0.376                                | 0.357

Table 1: Relative errors $\|U_N^\varepsilon-U^+\|_{L^2(\tilde{\Gamma}_r)}/\|U^+\|_{L^2(\tilde{\Gamma}_r)}$ and $\|U_M^{\varepsilon,Sh}-U^+\|_{L^2(\tilde{\Gamma}_r)}/\|U^+\|_{L^2(\tilde{\Gamma}_r)}$ for spherical cap radius ρ = 0.5. The noise level ε2 in Γr is varied among (a)–(c); italic numbers indicate the smallest relative error for each noise level ε1, ε2.

filtered kernels but still can be expected to improve the overall approximation result (in the same way as they did for the Shannon-type approach).
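The geometric figures quoted in observation (4) can be checked with elementary spherical geometry: a cap $C_r(x,\rho)$ has angular radius $\arccos(1-\rho)$, and degree $n$ is commonly associated with a wavelength of roughly $2\pi r/n$. A back-of-the-envelope sketch (the wavelength convention is our assumption; the paper does not state one explicitly):

```python
import math

r = 6371.2  # mean Earth radius in km

# spatial radius of the cap C_r(x, rho) = {y : xi . eta >= 1 - rho}
for rho in (0.5, 0.1, 0.01):
    print(rho, r * math.acos(1.0 - rho))   # arc length in km

# wavelength associated with spherical harmonic degree n
for n in (20, 100):
    print(n, 2 * math.pi * r / n)          # km
```

For ρ = 0.01 the cap radius comes out near 900 km, and degree 100 corresponds to roughly 400 km, matching the numbers in the text.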


(a) ε2 = ε1

  ε1     | Shannon: M=0 | Shannon: M=30 | Optimized: βN=10^1, α̃N,n=5αN,n=10^4 | Optimized: βN=10^1, α̃N,n=αN,n=10^3
  0.001  | 0.029        | 0.066         | 0.030                                | 0.029
  0.01   | 0.030        | 0.067         | 0.030                                | 0.030
  0.05   | 0.049        | 0.076         | 0.046                                | 0.045
  0.1    | 0.084        | 0.099         | 0.076                                | 0.077

(b) ε2 = 2ε1

  ε1     | Shannon: M=0 | Shannon: M=30 | Optimized: βN=10^1, α̃N,n=5αN,n=10^4 | Optimized: βN=10^0, α̃N,n=5αN,n=10^3
  0.001  | 0.029        | 0.062         | 0.030                                | 0.029
  0.01   | 0.033        | 0.068         | 0.032                                | 0.032
  0.05   | 0.084        | 0.098         | 0.076                                | 0.073
  0.1    | 0.159        | 0.157         | 0.142                                | 0.137

(c) ε2 = 5ε1

  ε1     | Shannon: M=0 | Shannon: M=30 | Optimized: βN=10^1, α̃N,n=5αN,n=10^4 | Optimized: βN=10^0, α̃N,n=αN,n=10^4
  0.001  | 0.030        | 0.066         | 0.030                                | 0.029
  0.01   | 0.049        | 0.076         | 0.046                                | 0.042
  0.05   | 0.198        | 0.188         | 0.177                                | 0.159
  0.1    | 0.392        | 0.355         | 0.349                                | 0.315

Table 2: Relative error $\|U_N^\varepsilon-U^+\|_{L^2(\tilde{\Gamma}_r)}/\|U^+\|_{L^2(\tilde{\Gamma}_r)}$ and $\|U_M^{\varepsilon,Sh}-U^+\|_{L^2(\tilde{\Gamma}_r)}/\|U^+\|_{L^2(\tilde{\Gamma}_r)}$ for spherical cap radius ρ = 0.1. The noise level ε2 in Γr is varied among (a)–(c); italic numbers indicate the smallest relative error for each noise level ε1, ε2.


(a) ε2 = ε1

  ε1     | Shannon: M=0 | Shannon: M=50 | Optimized: βN=10^2, α̃N,n=αN,n=10^4 | Optimized: βN=10^-1, α̃N,n=αN,n=10^4
  0.001  | 0.046        | 0.119         | 0.051                               | 0.038
  0.01   | 0.046        | 0.118         | 0.050                               | 0.044
  0.05   | 0.058        | 0.130         | 0.057                               | 0.122
  0.1    | 0.090        | 0.168         | 0.085                               | 0.236

(b) ε2 = 2ε1

  ε1     | Shannon: M=0 | Shannon: M=50 | Optimized: βN=10^2, α̃N,n=αN,n=10^4 | Optimized: βN=10^-1, α̃N,n=αN,n=10^4
  0.001  | 0.046        | 0.119         | 0.051                               | 0.039
  0.01   | 0.047        | 0.118         | 0.050                               | 0.044
  0.05   | 0.090        | 0.141         | 0.085                               | 0.129
  0.1    | 0.168        | 0.203         | 0.158                               | 0.252

(c) ε2 = 5ε1

  ε1     | Shannon: M=0 | Shannon: M=50 | Optimized: βN=10^2, α̃N,n=αN,n=10^4 | Optimized: βN=10^-3, α̃N,n=5αN,n=10^2
  0.001  | 0.046        | 0.119         | 0.051                               | 0.053
  0.01   | 0.058        | 0.121         | 0.058                               | 0.057
  0.05   | 0.209        | 0.206         | 0.197                               | 0.159
  0.1    | 0.418        | 0.370         | 0.395                               | 0.315

Table 3: Relative error $\|U_N^\varepsilon-U^+\|_{L^2(\tilde{\Gamma}_r)}/\|U^+\|_{L^2(\tilde{\Gamma}_r)}$ and $\|U_M^{\varepsilon,Sh}-U^+\|_{L^2(\tilde{\Gamma}_r)}/\|U^+\|_{L^2(\tilde{\Gamma}_r)}$ for spherical cap radius ρ = 0.01. The noise level ε2 in Γr is varied among (a)–(c); italic numbers indicate the smallest relative error for each noise level ε1, ε2.

  ε1     | TSVD: M=50 | TSVD: M=60 | TSVD: M=70 | TSVD: M=80 | TSVD: M=100
  0.001  | 0.309      | 0.206      | 0.188      | 0.106      | 0.285
  0.01   | 0.309      | 0.205      | 0.202      | 0.292      | 2.849
  0.05   | 0.311      | 0.254      | 0.594      | 1.353      | 14.243
  0.1    | 0.326      | 0.384      | 1.181      | 2.696      | 28.486

Table 4: Relative error $\|U_M^{\varepsilon,TSVD}-U^+\|_{L^2(\tilde{\Gamma}_r)}/\|U^+\|_{L^2(\tilde{\Gamma}_r)}$ for spherical cap radius ρ = 0.01. Italic numbers indicate the smallest relative error for each noise level ε1.
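The blow-up of the TSVD errors in Table 4 for growing cut-off degree M and noise level mirrors the exponential ill-posedness of downward continuation: in the scalar setting, inverting the symbols $\sigma_n=(\frac{r}{R})^n$ amplifies degree-$n$ noise by $(\frac{R}{r})^n$. A small illustration with the radii from Section 5 (our own sketch, not the paper's code):

```python
r, R = 6371.2, 7071.2   # Earth radius and satellite orbit radius in km

# noise amplification of downward continuation at spherical harmonic degree n
# (scalar singular values sigma_n = (r/R)^n)
def amplification(n):
    return (R / r) ** n

for n in (50, 60, 70, 80, 100):
    print(n, amplification(n))
```

At degree 100 the factor exceeds 10^4, which is consistent with the rapid error growth in the last column of Table 4.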


[Figure 2 consists of two rows of plots, (a) and (b), each showing the symbols Φ∧N(n) (left, n = 0,…,40) and Ψ̃∧N(n) (right, n = 0,…,100). Legends: (a) βN=10^1, α̃N,n=5αN,n=10^4; βN=10^0, α̃N,n=αN,n=10^4; Shannon, M=80. (b) βN=10^2, α̃N,n=αN,n=10^4; βN=10^-3, α̃N,n=5αN,n=10^2; Shannon, M=80. Only the axis labels and legends are recoverable from the extracted source.]

Figure 2: Exemplary plots of the spectral behaviour of some of the optimized kernels used (a) in Table 1 and (b) in Table 3.

6 The Vectorial Case

In some geophysical problems, especially in geomagnetism, it is not the scalar potential $U$ we are interested in but the vectorial gradient $\nabla U$. Just as well, it is often the gradient $\nabla U$ that is measured on $\Omega_R$ and $\Gamma_r$. Thus, we are not confronted with the scalar equations (1.1)–(1.3) but with the vectorial problem

$$b=\nabla U,\quad\text{in }\Omega_r^{ext},\tag{6.1}$$
$$\Delta U=0,\quad\text{in }\Omega_r^{ext},\tag{6.2}$$
$$b=f_1,\quad\text{on }\Omega_R,\tag{6.3}$$
$$b=f_2,\quad\text{on }\Gamma_r.\tag{6.4}$$

Notation-wise, we typically use lower-case letters to indicate vector fields, upper-case letters for scalar fields, and boldface upper-case letters for tensor fields (the abbreviation $b$ for $\nabla U$ is simply chosen from common notation in geomagnetism). Starting from equations (6.1)–(6.4), the general procedure for approximating $b^+=b|_{\Omega_r}$ is essentially the same as for the scalar case treated in the previous sections. Therefore, we will be rather brief in the description and omit the proofs. An exception is given by Proposition 6.3, where we supply a vectorial counterpart to the localization property of Proposition 4.2, and by Lemma 6.5, in order to indicate how the vectorial setting can be reduced to the previous scalar results.

Before dealing with the actual problem, it is necessary to introduce some basic vectorial framework. Here, we mainly follow the course of [17] and use the following set of vector spherical harmonics:

$$y_{n,k}^{(1)}(\xi)=\xi\,Y_{n,k}(\xi),\tag{6.5}$$
$$y_{n,k}^{(2)}(\xi)=\frac{1}{\sqrt{n(n+1)}}\,\nabla^*_\xi Y_{n,k}(\xi),\tag{6.6}$$
$$y_{n,k}^{(3)}(\xi)=\frac{1}{\sqrt{n(n+1)}}\,L^*_\xi Y_{n,k}(\xi),\tag{6.7}$$

for $n=1,2,\ldots$, and $k=1,\ldots,2n+1$, with $y_{n,k}^{(1)}$ additionally being defined for $n=0$ and $k=1$. The operator $\nabla^*_\xi$ denotes the surface gradient, i.e., the tangential contribution of the gradient $\nabla_x$ (more precisely, $\nabla_x=\xi\frac{\partial}{\partial r}+\frac{1}{r}\nabla^*_\xi$, with $\xi=\frac{x}{|x|}$ and $r=|x|$). The surface curl gradient $L^*_\xi$ stands short for $\xi\wedge\nabla^*_\xi$ (with $\wedge$ being the vector product of two vectors $x,y\in\mathbb{R}^3$). Together, the functions (6.5)–(6.7) form an orthonormal basis of the space $l^2(\Omega)$ of vectorial functions that are square-integrable on the unit sphere. They are complemented by a set of tensorial Legendre polynomials $\mathbf{P}_n^{(i,i)}$ of degree $n$ and type $(i,i)$ that are defined via

$$\mathbf{P}_n^{(1,1)}(\xi,\eta)=\xi\otimes\eta\,P_n(\xi\cdot\eta),\tag{6.8}$$
$$\mathbf{P}_n^{(2,2)}(\xi,\eta)=\frac{1}{n(n+1)}\,\nabla^*_\xi\otimes\nabla^*_\eta P_n(\xi\cdot\eta),\tag{6.9}$$
$$\mathbf{P}_n^{(3,3)}(\xi,\eta)=\frac{1}{n(n+1)}\,L^*_\xi\otimes L^*_\eta P_n(\xi\cdot\eta).\tag{6.10}$$

The operator $\otimes$ denotes the tensor product $x\otimes y=xy^T$ of two vectors $x,y\in\mathbb{R}^3$. In analogy to the scalar case, vector spherical harmonics and tensorial Legendre polynomials are connected by an addition theorem. Since we are only dealing with vector fields of the form $\nabla U$ in this section, the vector spherical harmonics of type $i=3$ and the tensorial Legendre polynomials of type $(i,i)=(3,3)$ are not required and will be neglected for the remainder of this paper. The vectorial counterpart to the Sobolev space $\mathcal{H}_s(\Omega_r)$ is defined as

$$h_s(\Omega_r)=\bigg\{f\in l^2(\Omega_r):\sum_{i=1}^{2}\sum_{n=0_i}^{\infty}\sum_{k=1}^{2n+1}\Big(n+\frac{1}{2}\Big)^{2s}\big|(f_r^{(i)})^\wedge(n,k)\big|^2<\infty\bigg\},\tag{6.11}$$

where $(f_r^{(i)})^\wedge(n,k)$ denotes the Fourier coefficient of degree $n$, order $k$, and type $i$, i.e.,

$$(f_r^{(i)})^\wedge(n,k)=\int_{\Omega_r}f(y)\cdot\frac{1}{r}\,y_{n,k}^{(i)}\Big(\frac{y}{|y|}\Big)\,d\omega(y).\tag{6.12}$$

The corresponding norm is given by

$$\|f\|_{h_s(\Omega_r)}=\bigg(\sum_{i=1}^{2}\sum_{n=0_i}^{\infty}\sum_{k=1}^{2n+1}\Big(n+\frac{1}{2}\Big)^{2s}\big|(f_r^{(i)})^\wedge(n,k)\big|^2\bigg)^{\frac{1}{2}},\tag{6.13}$$

where $0_i=0$ if $i=1$ and $0_i=1$ if $i=2$. The space of bandlimited tensorial kernels with maximal degree $N$ is defined as

$$\mathbf{Pol}_N=\bigg\{\mathbf{K}(x,y)=\sum_{i=1}^{2}\sum_{n=0_i}^{N}\sum_{k=1}^{2n+1}\mathbf{K}^\wedge(n)\,y_{n,k}^{(i)}(\xi)\otimes y_{n,k}^{(i)}(\eta):\mathbf{K}^\wedge(n)\in\mathbb{R}\bigg\}.\tag{6.14}$$

The kernels in $\mathbf{Pol}_N$ are tensor-zonal. In particular, this means that the absolute value $|\mathbf{K}(x,y)|$ depends only on the scalar product $\xi\cdot\eta$ (this does not have to hold true for the kernel $\mathbf{K}(x,y)$ itself).

Remark 6.1. In geomagnetic modeling, another set of vector spherical harmonics is used more commonly than the one applied in this paper. We have used the basis (6.5)–(6.7) since it is generated by simpler differential operators, which reduces the effort to obtain a vectorial version of the localization principle later on. However, both basis systems eventually yield the same results. More information on the other basis system and its application in geomagnetism can be found, e.g., in [1], [20], [21], [23], and [34].

Similar to the scalar case in Subsection 2.1, there is a vectorial upward continuation operator $\mathbf{t}^{up}$ and a vectorial downward continuation operator $\mathbf{t}^{down}$, defined via tensorial kernels with singular values

$$\sigma_n=\Big(\frac{r}{R}\Big)^{n+1}\tag{6.15}$$

and $\frac{1}{\sigma_n}$, respectively (note that in the scalar case we had $\sigma_n=(\frac{r}{R})^n$). The downward continuation operator can be approximated by a bounded operator $\mathbf{t}_N:l^2(\Omega_R)\to l^2(\Omega_r)$:

$$\mathbf{t}_N[f_1](x)=\int_{\Omega_R}\mathbf{\Phi}_N(x,y)\,f_1(y)\,d\omega(y),\quad x\in\Omega_r,\tag{6.16}$$

with

$$\mathbf{\Phi}_N(x,y)=\sum_{i=1}^{2}\sum_{n=0_i}^{N}\sum_{k=1}^{2n+1}\Phi_N^\wedge(n)\,\frac{1}{r}\,y_{n,k}^{(i)}(\xi)\otimes\frac{1}{R}\,y_{n,k}^{(i)}(\eta).\tag{6.17}$$

The symbols $\Phi_N^\wedge(n)$ need to satisfy the same conditions (a), (b) as for the scalar analogue in Subsection 2.1 (again, note the slight change of $\sigma_n$). A refinement by local data in $\Gamma_r$ is achieved by the vectorial wavelet operator $\tilde{\mathbf{w}}_N:l^2(\tilde{\Gamma}_r)\to l^2(\tilde{\Gamma}_r)$:

$$\tilde{\mathbf{w}}_N[f_2](x)=\int_{C_r(x,\rho)}\tilde{\mathbf{\Psi}}_N(x,y)\,f_2(y)\,d\omega(y),\quad x\in\tilde{\Gamma}_r,\tag{6.18}$$

with

$$\tilde{\mathbf{\Psi}}_N(x,y)=\sum_{i=1}^{2}\sum_{n=0_i}^{\lfloor\kappa N\rfloor}\sum_{k=1}^{2n+1}\tilde{\Psi}_N^\wedge(n)\,\frac{1}{r}\,y_{n,k}^{(i)}(\xi)\otimes\frac{1}{r}\,y_{n,k}^{(i)}(\eta),\tag{6.19}$$

for some fixed $\kappa>1$, and

$$\tilde{\Psi}_N^\wedge(n)=\tilde{\Phi}_N^\wedge(n)-\Phi_N^\wedge(n)\sigma_n.\tag{6.20}$$

The symbols $\tilde{\Phi}_N^\wedge(n)$ are assumed to satisfy conditions (a') and (b') from Subsection 2.2. As in the scalar case, the input data $f_1$, $f_2$ is assumed to be perturbed by deterministic noise $e_1\in l^2(\Omega_R)$ and $e_2\in l^2(\Omega_r)$, so that we are dealing with $f_1^{\varepsilon_1}=f_1+\varepsilon_1e_1$ and $f_2^{\varepsilon_2}=f_2+\varepsilon_2e_2$. An approximation of $b^+$ (i.e., the restriction of the solution $b$ of (6.1)–(6.4) to $\Omega_r$) in $\tilde{\Gamma}_r$ is then defined by

$$b_N^\varepsilon=\mathbf{t}_N[f_1^{\varepsilon_1}]+\tilde{\mathbf{w}}_N[f_2^{\varepsilon_2}].\tag{6.21}$$
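The construction (6.20) makes the wavelet symbols complementary to the downward-continued scaling symbols. For Shannon-type symbols (used as reference in Section 5) this complementarity is exact, as the following sketch illustrates (degree values and variable names are our own choices):

```python
import numpy as np

r, R = 6371.2, 7071.2
M, deg_max = 80, 100                       # illustrative cut-off degrees

n = np.arange(deg_max + 1)
sigma = (r / R) ** (n + 1)                 # vectorial singular values (6.15)

phi = np.where(n <= M, 1.0 / sigma, 0.0)   # scaling symbols
phi_tilde = np.ones(deg_max + 1)           # tilde-Phi symbols
psi_tilde = phi_tilde - phi * sigma        # wavelet symbols via (6.20)

# phi*sigma covers degrees <= M, psi_tilde the band M+1..deg_max:
assert np.allclose(phi * sigma + psi_tilde, 1.0)
assert np.allclose(psi_tilde[: M + 1], 0.0)
assert np.allclose(psi_tilde[M + 1:], 1.0)
print("wavelet symbols complement the scaling symbols degree by degree")
```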

A similar error estimate as in (3.6) leads to

$$\begin{aligned}\|b^+-b^\varepsilon_N\|_{l^2(\tilde{\Gamma}_r)}\le\;&\|b^+\|_{h_s(\Omega_r)}\sup_{n=0,1,\ldots}\frac{\big|1-\Phi_N^\wedge(n)\sigma_n\big|}{\big(n+\frac{1}{2}\big)^s}+\|b^+\|_{h_s(\Omega_r)}\sup_{n=0,1,\ldots}\frac{\big|1-\tilde{\Phi}_N^\wedge(n)\big|}{\big(n+\frac{1}{2}\big)^s}\\&+\varepsilon_1\|e_1\|_{l^2(\Omega_R)}\sup_{n=0,1,\ldots}\big|\Phi_N^\wedge(n)\big|+\big(\varepsilon_2\|e_2\|_{l^2(\Omega_r)}+\|b^+\|_{l^2(\Omega_r)}\big)\sup_{n=0,1,\ldots}\big|\tilde{\Psi}_N^\wedge(n)\big|\\&+2\sqrt{2}\,\pi r\,\big(\varepsilon_2\|e_2\|_{l^2(\Omega_r)}+\|b^+\|_{l^2(\Omega_r)}\big)\,\big\|\tilde{\Psi}_N\big\|_{L^2([-1,1-\rho])}.\end{aligned}\tag{6.22}$$

For the last term on the right-hand side of (6.22), it should be observed that the tensor-zonality of the kernels $\tilde{\mathbf{\Psi}}_N$ implies that $\sup_{x\in\tilde{\Gamma}_r}\|\tilde{\mathbf{\Psi}}_N(x,\cdot)\|_{L^2(\Omega_r\setminus C_r(x,\rho))}$ coincides with a one-dimensional norm $\sqrt{2\pi}\,r\,\|\tilde{\Psi}_N\|_{L^2([-1,1-\rho])}$ (the representation, however, is not as basic as in the scalar case (3.5)). More details on the required tools for a vectorial and tensorial setup can be found, e.g., in [17]. In order to keep the approximation error (6.22) small, we choose $\mathbf{\Phi}_N$ and $\tilde{\mathbf{\Psi}}_N$ to minimize the functional

$$\mathcal{F}(\mathbf{\Phi}_N,\tilde{\mathbf{\Psi}}_N)=\sum_{n=0}^{N}\alpha_{N,n}\big|1-\Phi_N^\wedge(n)\sigma_n\big|^2+\sum_{n=0}^{\lfloor\kappa N\rfloor}\tilde{\alpha}_{N,n}\big|1-\tilde{\Phi}_N^\wedge(n)\big|^2+\beta_N\sum_{n=0}^{N}\big|\Phi_N^\wedge(n)\big|^2+8\pi^2r^4\,\big\|\tilde{\Psi}_N\big\|^2_{L^2([-1,1-\rho])}.\tag{6.23}$$

Now, we are all set to state the vectorial counterparts to the theoretical results from Section 4. As mentioned earlier, the proofs are mostly omitted due to their similarity.

Lemma 6.2. Assume that all parameters $\tilde{\alpha}_{N,n}$, $\alpha_{N,n}$, and $\beta_N$ are positive. Then there exist unique minimizers $\mathbf{\Phi}_N\in\mathbf{Pol}_N$ and $\tilde{\mathbf{\Psi}}_N\in\mathbf{Pol}_{\lfloor\kappa N\rfloor}$ of the functional $\mathcal{F}$ in (6.23) that are determined by $\phi=(\Phi_N^\wedge(1)\sigma_1,\ldots,\Phi_N^\wedge(N)\sigma_N,\tilde{\Phi}_N^\wedge(1),\ldots,\tilde{\Phi}_N^\wedge(\lfloor\kappa N\rfloor))^T$, which solves the linear equations

$$M\phi=\alpha,\tag{6.24}$$

where

$$M=\begin{pmatrix}D_1+P_1&-P_2\\-P_3&D_2+P_4\end{pmatrix},\tag{6.25}$$

and $\alpha=(\alpha_{N,1},\ldots,\alpha_{N,N},\tilde{\alpha}_{N,1},\ldots,\tilde{\alpha}_{N,\lfloor\kappa N\rfloor})^T$. The diagonal matrices $D_1$, $D_2$ are given by

$$D_1=\mathrm{diag}\Big(\frac{\beta_N}{\sigma_n^2}+\alpha_{N,n}\Big)_{n=0,\ldots,N},\qquad D_2=\mathrm{diag}\big(\tilde{\alpha}_{N,n}\big)_{n=0,\ldots,\lfloor\kappa N\rfloor},\tag{6.26}$$

whereas $P_1,\ldots,P_4$ are submatrices of the Gram matrix $\big(P^\rho_{n,m}\big)_{n,m=0,\ldots,\lfloor\kappa N\rfloor}$. More precisely,

$$P_1=\big(P^\rho_{n,m}\big)_{n,m=0,\ldots,N},\qquad P_2=\big(P^\rho_{n,m}\big)_{n=0,\ldots,N;\;m=0,\ldots,\lfloor\kappa N\rfloor},\tag{6.27}$$
$$P_3=\big(P^\rho_{n,m}\big)_{n=0,\ldots,\lfloor\kappa N\rfloor;\;m=0,\ldots,N},\qquad P_4=\big(P^\rho_{n,m}\big)_{n,m=0,\ldots,\lfloor\kappa N\rfloor},\tag{6.28}$$

with

$$\begin{aligned}P^\rho_{n,m}=\;&\frac{(2n+1)(2m+1)}{16\pi^2}\int_{-1}^{1-\rho}P_n(t)\,P_m(t)\,dt\\&+\frac{(2n+1)(2m+1)}{16\pi^2\,n(n+1)\,m(m+1)}\bigg(\int_{-1}^{1-\rho}(1+t^2)\,P_n'(t)P_m'(t)\,dt+\int_{-1}^{1-\rho}(1-t^2)^2\,P_n''(t)P_m''(t)\,dt\\&\qquad-\int_{-1}^{1-\rho}t(1-t^2)\,P_n'(t)P_m''(t)\,dt-\int_{-1}^{1-\rho}t(1-t^2)\,P_n''(t)P_m'(t)\,dt\bigg),\end{aligned}\tag{6.29}$$

for $n\ne0$ and $m\ne0$. If $n=0$ or $m=0$, then $P^\rho_{n,m}=\frac{(2n+1)(2m+1)}{16\pi^2}\int_{-1}^{1-\rho}P_n(t)\,P_m(t)\,dt$. By $P_n'$, $P_n''$ we mean the first and second order derivatives of the Legendre polynomials.

To show that the left expression of (4.21) vanishes in the scalar case as $N\to\infty$, we used the localization property from Proposition 4.2. A similar result is needed to prove Theorem 6.4. The corresponding vectorial localization property for the Shannon-type kernel is stated in the next proposition.

Proposition 6.3. If $f\in h_s(\Omega_r)$, $s\ge2$, it holds that

$$\lim_{N\to\infty}\bigg\|\int_{\Omega_r\setminus C_r(\cdot,\rho)}\tilde{\mathbf{\Psi}}^{Sh}_N(\cdot,y)\,f(y)\,d\omega(y)\bigg\|_{l^2(\Omega_r)}=0,\tag{6.30}$$

where $\tilde{\mathbf{\Psi}}^{Sh}_N$ is the tensorial Shannon-type kernel with symbols $(\tilde{\Psi}^{Sh}_N)^\wedge(n)=1$, if $N+1\le n\le\lfloor\kappa N\rfloor$, and $(\tilde{\Psi}^{Sh}_N)^\wedge(n)=0$, else.
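The entries $P^\rho_{n,m}$ of the Gram matrix in Lemma 6.2 reduce to one-dimensional integrals of Legendre polynomials and their derivatives, which Gauss–Legendre quadrature evaluates exactly (all integrands are polynomials). The sketch below (helper names and degrees are our own choices, and only the first integral block of (6.29) is shown) also verifies the Legendre differential equation that is exploited later in (6.36):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def legendre_overlap(n, m, rho, quad_deg=60):
    """Integral of P_n * P_m over [-1, 1-rho], evaluated exactly by
    Gauss-Legendre quadrature (the integrand is a polynomial)."""
    nodes, weights = leggauss(quad_deg)
    a, b = -1.0, 1.0 - rho
    t = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    vals = Legendre.basis(n)(t) * Legendre.basis(m)(t)
    return 0.5 * (b - a) * np.sum(weights * vals)

# for rho = 0 the overlap reduces to the orthogonality relation 2/(2n+1) delta_nm
assert abs(legendre_overlap(3, 3, 0.0) - 2.0 / 7.0) < 1e-12
assert abs(legendre_overlap(3, 5, 0.0)) < 1e-12

# Legendre ODE (1-t^2) P_n'' - 2 t P_n' + n(n+1) P_n = 0
t = np.linspace(-1.0, 1.0, 101)
for n in range(1, 8):
    P = Legendre.basis(n)
    residual = (1 - t**2) * P.deriv(2)(t) - 2 * t * P.deriv()(t) + n * (n + 1) * P(t)
    assert np.allclose(residual, 0.0)
print("Gram-matrix building block and Legendre ODE verified")
```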

Proof. We first observe that

$$\begin{aligned}\int_{\Omega_r\setminus C_r(x,\rho)}\tilde{\mathbf{\Psi}}^{Sh}_N(x,y)\,f(y)\,d\omega(y)&=\int_{\Omega_r\setminus C_r(x,\rho)}\sum_{i=1}^{2}\sum_{n=N+1}^{\lfloor\kappa N\rfloor}\sum_{k=1}^{2n+1}\frac{1}{r}\,y^{(i)}_{n,k}(\xi)\otimes\frac{1}{r}\,y^{(i)}_{n,k}(\eta)\,f(r\eta)\,d\omega(r\eta)\\&=\sum_{i=1}^{2}\sum_{n=N+1}^{\lfloor\kappa N\rfloor}\sum_{k=1}^{2n+1}\frac{1}{r}\,y^{(i)}_{n,k}(\xi)\int_{\Omega_r\setminus C_r(x,\rho)}\frac{1}{r}\,y^{(i)}_{n,k}(\eta)\cdot f(r\eta)\,d\omega(r\eta),\end{aligned}\tag{6.31}$$

where, as always, $\xi=\frac{x}{|x|}$ and $\eta=\frac{y}{|y|}$. Since $f$ is of class $h_s(\Omega_r)$, $s\ge2$, it follows that $f(x)=\xi F_1(x)+\nabla^*_\xi F_2(x)$ for some scalar functions $F_1$ of class $\mathcal{H}_s(\Omega_r)$ and $F_2$ of class $\mathcal{H}_{s+1}(\Omega_r)$. Taking a closer look at the terms of type $i=1$ in (6.31), using the orthogonality of $\xi$ and $\nabla^*_\xi$, we obtain

$$\begin{aligned}\sum_{k=1}^{2n+1}\frac{1}{r}\,y^{(1)}_{n,k}(\xi)\int_{\Omega_r\setminus C_r(x,\rho)}\frac{1}{r}\,y^{(1)}_{n,k}(\eta)\cdot f(r\eta)\,d\omega(r\eta)&=\sum_{k=1}^{2n+1}\frac{1}{r}\,y^{(1)}_{n,k}(\xi)\int_{\Omega_r\setminus C_r(x,\rho)}\frac{1}{r}\,y^{(1)}_{n,k}(\eta)\cdot\big(\eta F_1(r\eta)+\nabla^*_\eta F_2(r\eta)\big)\,d\omega(r\eta)\\&=\sum_{k=1}^{2n+1}\frac{1}{r^2}\,\xi\,Y_{n,k}(\xi)\int_{\Omega_r\setminus C_r(x,\rho)}Y_{n,k}(\eta)\,F_1(r\eta)\,d\omega(r\eta)\\&=\frac{1}{r^2}\,\xi\int_{\Omega_r\setminus C_r(x,\rho)}\frac{2n+1}{4\pi}\,P_n(\xi\cdot\eta)\,F_1(r\eta)\,d\omega(r\eta).\end{aligned}\tag{6.32}$$

For the terms of type $i=2$, the use of Green's formulas and the addition theorem for spherical harmonics implies

$$\begin{aligned}\xi\wedge&\sum_{k=1}^{2n+1}\frac{1}{r}\,y^{(2)}_{n,k}(\xi)\int_{\Omega_r\setminus C_r(x,\rho)}\frac{1}{r}\,y^{(2)}_{n,k}(\eta)\cdot f(r\eta)\,d\omega(r\eta)\\&=\xi\wedge\sum_{k=1}^{2n+1}\frac{1}{r}\,y^{(2)}_{n,k}(\xi)\int_{\Omega_r\setminus C_r(x,\rho)}\frac{1}{r}\,\frac{1}{\sqrt{n(n+1)}}\,\nabla^*_\eta Y_{n,k}(\eta)\cdot\nabla^*_\eta F_2(r\eta)\,d\omega(r\eta)\\&=-\xi\wedge\sum_{k=1}^{2n+1}\frac{1}{r^2}\,\frac{1}{n(n+1)}\,\nabla^*_\xi Y_{n,k}(\xi)\int_{\Omega_r\setminus C_r(x,\rho)}\big(\Delta^*_\eta Y_{n,k}(\eta)\big)\,F_2(r\eta)\,d\omega(r\eta)\\&\quad+\xi\wedge\sum_{k=1}^{2n+1}\frac{1}{r^2}\,\frac{1}{n(n+1)}\,\nabla^*_\xi Y_{n,k}(\xi)\int_{\partial C_r(x,\rho)}F_2(r\eta)\,\frac{\partial}{\partial\nu_\eta}Y_{n,k}(\eta)\,d\sigma(r\eta)\\&=\sum_{k=1}^{2n+1}\frac{1}{r^2}\,L^*_\xi Y_{n,k}(\xi)\int_{\Omega_r\setminus C_r(x,\rho)}Y_{n,k}(\eta)\,F_2(r\eta)\,d\omega(r\eta)\\&\quad+\frac{1}{r^2}\,\frac{1}{n(n+1)}\int_{\partial C_r(x,\rho)}F_2(r\eta)\,\frac{2n+1}{4\pi}\,L^*_\xi\frac{\partial}{\partial\nu_\eta}P_n(\xi\cdot\eta)\,d\sigma(r\eta)\\&=-\frac{1}{r^2}\int_{\Omega_r\setminus C_r(x,\rho)}\frac{2n+1}{4\pi}\,F_2(r\eta)\,L^*_\eta P_n(\xi\cdot\eta)\,d\omega(r\eta)\\&\quad+\frac{1}{r^2}\,\frac{1}{n(n+1)}\int_{\partial C_r(x,\rho)}F_2(r\eta)\,\frac{2n+1}{4\pi}\,L^*_\xi\frac{\partial}{\partial\nu_\eta}P_n(\xi\cdot\eta)\,d\sigma(r\eta)\\&=\frac{1}{r^2}\int_{\Omega_r\setminus C_r(x,\rho)}\frac{2n+1}{4\pi}\,P_n(\xi\cdot\eta)\,L^*_\eta F_2(r\eta)\,d\omega(r\eta)+\frac{1}{r^2}\int_{\partial C_r(x,\rho)}\frac{2n+1}{4\pi}\,\tau_\eta F_2(r\eta)\,P_n(\xi\cdot\eta)\,d\sigma(r\eta)\\&\quad+\frac{1}{r^2}\,\frac{1}{n(n+1)}\int_{\partial C_r(x,\rho)}F_2(r\eta)\,\frac{2n+1}{4\pi}\,L^*_\xi\frac{\partial}{\partial\nu_\eta}P_n(\xi\cdot\eta)\,d\sigma(r\eta).\end{aligned}\tag{6.33}$$

By $\frac{\partial}{\partial\nu_\eta}$ we mean the normal derivative at $r\eta\in\partial C_r(x,\rho)$, and $\tau_\eta$ denotes the tangential unit vector at $r\eta\in\partial C_r(x,\rho)$. The reason for the application of $\xi\wedge$ in (6.33) is that we can then work with the operator $L^*_\xi$ instead of $\nabla^*_\xi$. The surface curl gradient has the nice property $L^*_\xi P_n(\xi\cdot\eta)=-L^*_\eta P_n(\xi\cdot\eta)$, which we have used in the seventh line of Equation (6.33). Furthermore, since $\xi$ and $\nabla^*_\xi$ are orthogonal, the convergence of (6.33) for $N\to\infty$ also implies convergence for the same expression without the application of $\xi\wedge$. Now, we can use the scalar localization result from Proposition 4.2 to obtain

$$\lim_{N\to\infty}\bigg\|\frac{\cdot}{|\cdot|}\int_{\Omega_r\setminus C_r(\cdot,\rho)}\sum_{n=N+1}^{\lfloor\kappa N\rfloor}\frac{2n+1}{4\pi r^2}\,P_n\Big(\frac{\cdot}{|\cdot|}\cdot\eta\Big)F_1(r\eta)\,d\omega(r\eta)\bigg\|_{l^2(\Omega_r)}=0\tag{6.34}$$

(note that the sum under the integral is exactly the scalar Shannon-type kernel $\tilde{\Psi}^{Sh}_N$), and

$$\lim_{N\to\infty}\bigg\|\int_{\Omega_r\setminus C_r(\cdot,\rho)}\sum_{n=N+1}^{\lfloor\kappa N\rfloor}\frac{2n+1}{4\pi r^2}\,P_n\Big(\frac{\cdot}{|\cdot|}\cdot\eta\Big)L^*_\eta F_2(r\eta)\,d\omega(r\eta)\bigg\|_{l^2(\Omega_r)}=0,\tag{6.35}$$

which deals with the relevant contributions to the asymptotic behaviour of (6.32) and (6.33). It remains to investigate the boundary integrals appearing on the right-hand side of (6.33). Observing the differential equation $(1-t^2)P_n''(t)-2tP_n'(t)+n(n+1)P_n(t)=0$, $t\in[-1,1]$, for Legendre polynomials leads to

$$\begin{aligned}&\frac{1}{r^2}\int_{\partial C_r(x,\rho)}\frac{2n+1}{4\pi}\,\tau_\eta F_2(r\eta)\,P_n(\xi\cdot\eta)\,d\sigma(r\eta)+\frac{1}{r^2}\,\frac{1}{n(n+1)}\int_{\partial C_r(x,\rho)}F_2(r\eta)\,\frac{2n+1}{4\pi}\,L^*_\xi\frac{\partial}{\partial\nu_\eta}P_n(\xi\cdot\eta)\,d\sigma(r\eta)\\&\qquad=\frac{1}{r^2}\int_{\partial C_r(x,\rho)}\frac{2n+1}{4\pi}\,\tau_\eta F_2(r\eta)\Big(P_n(\xi\cdot\eta)+\frac{1}{n(n+1)}\big(1-(\xi\cdot\eta)^2\big)P_n''(\xi\cdot\eta)-\frac{2}{n(n+1)}\,(\xi\cdot\eta)P_n'(\xi\cdot\eta)\Big)\,d\sigma(r\eta)\\&\qquad=0.\end{aligned}\tag{6.36}$$

Combining (6.31)–(6.36) implies the desired property (6.30).

Theorem 6.4. Assume that the parameters $\alpha_{N,n}$, $\tilde{\alpha}_{N,n}$, and $\beta_N$ are positive and suppose that, for some $\delta>0$ and $\kappa>1$,

$$\frac{\beta_N\,\frac{1-(\frac{R}{r})^{2(N+1)}}{1-(\frac{R}{r})^{2}}+(\lfloor\kappa N\rfloor+1)^{2}}{\inf_{n=0,\ldots,\lfloor\kappa N\rfloor}\alpha_{N,n}}=O\big(N^{-2(1+\delta)}\big),\quad\text{for }N\to\infty.\tag{6.37}$$

An analogous relation shall hold true for $\tilde{\alpha}_{N,n}$. Additionally, let every $N$ be associated with an $\varepsilon_1=\varepsilon_1(N)>0$ and an $\varepsilon_2=\varepsilon_2(N)>0$ such that

$$\lim_{N\to\infty}\varepsilon_1\Big(\frac{R}{r}\Big)^{N}=\lim_{N\to\infty}\varepsilon_2N=0.\tag{6.38}$$

The functions $f_1:\Omega_R\to\mathbb{R}^3$ and $f_2:\Gamma_r\to\mathbb{R}^3$, $r<R$, are supposed to be such that a unique solution $b$ of (6.1)–(6.4) exists and that the restriction $b^+$ is of class $h_s(\Omega_r)$, for some fixed $s\ge2$. The erroneous input data is given by $f_1^{\varepsilon_1}=f_1+\varepsilon_1e_1$ and $f_2^{\varepsilon_2}=f_2+\varepsilon_2e_2$, with $e_1\in l^2(\Omega_R)$ and $e_2\in l^2(\Omega_r)$. If the kernels $\mathbf{\Phi}_N\in\mathbf{Pol}_N$ and the wavelet kernel $\tilde{\mathbf{\Psi}}_N\in\mathbf{Pol}_{\lfloor\kappa N\rfloor}$ are the minimizers of the functional $\mathcal{F}$ from (6.23) and if $b_N^\varepsilon$ is given as in (6.21), then

$$\lim_{N\to\infty}\big\|b^+-b_N^\varepsilon\big\|_{l^2(\tilde{\Gamma}_r)}=0.\tag{6.39}$$

Lemma 6.5. Assume that the parameters $\alpha_{N,n}$, $\tilde{\alpha}_{N,n}$, and $\beta_N$ are positive and suppose that, for some $\delta>0$ and $\kappa>1$,

$$\frac{\beta_N\,\frac{1-(\frac{R}{r})^{2(N+1)}}{1-(\frac{R}{r})^{2}}+(\lfloor\kappa N\rfloor+1)^{2}}{\inf_{n=0,\ldots,\lfloor\kappa N\rfloor}\alpha_{N,n}}=O\big(N^{-2(1+\delta)}\big),\quad\text{for }N\to\infty.\tag{6.40}$$

An analogous relation shall hold true for $\tilde{\alpha}_{N,n}$. If the scaling kernel $\mathbf{\Phi}_N\in\mathbf{Pol}_N$ and the wavelet kernel $\tilde{\mathbf{\Psi}}_N\in\mathbf{Pol}_{\lfloor\kappa N\rfloor}$ are the minimizers of the functional $\mathcal{F}$ from (6.23), then

$$\lim_{N\to\infty}\frac{\|\tilde{\Psi}_N\|^2_{L^2([-1,1-\rho])}}{\|\tilde{\Psi}_N\|^2_{L^2([-1,1])}}=0.\tag{6.41}$$

Proof. Set $F_N(t)=|\tilde{\mathbf{\Psi}}_N(x,y)|^2/\|\tilde{\Psi}_N\|^2_{L^2([-1,1])}$, where $t=\xi\cdot\eta$. Since $\tilde{\mathbf{\Psi}}_N$ is tensor-zonal, $F_N$ is well-defined and can be regarded as a density function of a random variable $t\in[-1,1]$. From here on, the proof is essentially the same as for Lemma 4.5 and we have to show $\lim_{N\to\infty}E_N(t)=1$ and $\lim_{N\to\infty}V_N(t)=0$. We just indicate the proof for $E_N(t)$; the case of $V_N(t)$ follows analogously. Setting $\tilde{\Psi}_N(t)=|\tilde{\mathbf{\Psi}}_N(x,y)|$, again with $t=\xi\cdot\eta$, we get

$$\begin{aligned}2\pi r^2\int_{-1}^{1}t\,|\tilde{\Psi}_N(t)|^2\,dt&=\int_{\Omega_r}\xi\cdot\eta\,\big|\tilde{\mathbf{\Psi}}_N(r\xi,r\eta)\big|^2\,d\omega(r\eta)\\&=\frac{1}{r^2}\sum_{i=1}^{2}\sum_{n=0_i}^{\lfloor\kappa N\rfloor}\sum_{k=1}^{2n+1}\sum_{m=0_i}^{\lfloor\kappa N\rfloor}\sum_{l=1}^{2m+1}\tilde{\Psi}_N^\wedge(n)\tilde{\Psi}_N^\wedge(m)\,y^{(i)}_{n,k}(\xi)\cdot y^{(i)}_{m,l}(\xi)\int_{\Omega_r}(\xi\cdot\eta)\,y^{(i)}_{n,k}(\eta)\cdot y^{(i)}_{m,l}(\eta)\,d\omega(r\eta)\\&=\frac{1}{r^2}\sum_{n=0}^{\lfloor\kappa N\rfloor}\sum_{k=1}^{2n+1}\sum_{m=0}^{\lfloor\kappa N\rfloor}\sum_{l=1}^{2m+1}\tilde{\Psi}_N^\wedge(n)\tilde{\Psi}_N^\wedge(m)\,Y_{n,k}(\xi)Y_{m,l}(\xi)\int_{\Omega_r}(\xi\cdot\eta)\,Y_{n,k}(\eta)Y_{m,l}(\eta)\,d\omega(r\eta)\\&\quad+\frac{1}{r^2}\sum_{n=1}^{\lfloor\kappa N\rfloor}\sum_{k=1}^{2n+1}\sum_{m=1}^{\lfloor\kappa N\rfloor}\sum_{l=1}^{2m+1}\frac{\tilde{\Psi}_N^\wedge(n)\tilde{\Psi}_N^\wedge(m)}{n(n+1)\,m(m+1)}\,\nabla^*_\xi Y_{n,k}(\xi)\cdot\nabla^*_\xi Y_{m,l}(\xi)\;\xi\cdot\int_{\Omega_r}\eta\,\big(\nabla^*_\eta Y_{n,k}(\eta)\cdot\nabla^*_\eta Y_{m,l}(\eta)\big)\,d\omega(r\eta).\end{aligned}\tag{6.42}$$

The first term on the right-hand side of (6.42) can be written as a one-dimensional integral:

$$\frac{1}{r^2}\sum_{n=0}^{\lfloor\kappa N\rfloor}\sum_{k=1}^{2n+1}\sum_{m=0}^{\lfloor\kappa N\rfloor}\sum_{l=1}^{2m+1}\tilde{\Psi}_N^\wedge(n)\tilde{\Psi}_N^\wedge(m)\,Y_{n,k}(\xi)Y_{m,l}(\xi)\int_{\Omega_r}(\xi\cdot\eta)\,Y_{n,k}(\eta)Y_{m,l}(\eta)\,d\omega(r\eta)=\sum_{n=0}^{\lfloor\kappa N\rfloor}\sum_{m=0}^{\lfloor\kappa N\rfloor}\frac{(2n+1)(2m+1)}{8\pi}\,\tilde{\Psi}_N^\wedge(n)\tilde{\Psi}_N^\wedge(m)\int_{-1}^{1}tP_n(t)\,P_m(t)\,dt.\tag{6.43}$$

The second term requires significantly more effort. We start by observing that

$$\begin{aligned}\int_{\Omega_r}\eta\,\big(\nabla^*_\eta Y_{n,k}(\eta)\cdot\nabla^*_\eta Y_{m,l}(\eta)\big)\,d\omega(r\eta)&=-\frac{1}{2}\int_{\Omega_r}\Big(Y_{m,l}(\eta)\,\nabla^*_\eta\cdot\big(\eta\otimes\nabla^*_\eta Y_{n,k}(\eta)\big)+Y_{n,k}(\eta)\,\nabla^*_\eta\cdot\big(\eta\otimes\nabla^*_\eta Y_{m,l}(\eta)\big)\Big)\,d\omega(r\eta)\\&=-\frac{1}{2}\int_{\Omega_r}\nabla^*_\eta\big(Y_{n,k}(\eta)Y_{m,l}(\eta)\big)\,d\omega(r\eta)+\frac{n(n+1)}{2}\int_{\Omega_r}\eta\,Y_{n,k}(\eta)Y_{m,l}(\eta)\,d\omega(r\eta)\\&\qquad+\frac{m(m+1)}{2}\int_{\Omega_r}\eta\,Y_{n,k}(\eta)Y_{m,l}(\eta)\,d\omega(r\eta)\\&=\frac{n(n+1)}{2}\int_{\Omega_r}\eta\,Y_{n,k}(\eta)Y_{m,l}(\eta)\,d\omega(r\eta)+\frac{m(m+1)}{2}\int_{\Omega_r}\eta\,Y_{n,k}(\eta)Y_{m,l}(\eta)\,d\omega(r\eta),\end{aligned}\tag{6.44}$$

where we have used $\nabla^*_\eta\cdot\big(\eta\otimes\nabla^*_\eta Y_{n,k}(\eta)\big)=\eta\,\Delta^*_\eta Y_{n,k}(\eta)+\nabla^*_\eta Y_{n,k}(\eta)$. Plugging (6.44) into the second term on the right-hand side of Equation (6.42), and using the expression $2\nabla^*_\xi Y_{n,k}(\xi)\cdot\nabla^*_\xi Y_{m,l}(\xi)=\Delta^*_\xi\big(Y_{n,k}(\xi)Y_{m,l}(\xi)\big)+\big(n(n+1)+m(m+1)\big)Y_{n,k}(\xi)Y_{m,l}(\xi)$, leads to

$$\begin{aligned}&\frac{1}{r^2}\sum_{n=1}^{\lfloor\kappa N\rfloor}\sum_{k=1}^{2n+1}\sum_{m=1}^{\lfloor\kappa N\rfloor}\sum_{l=1}^{2m+1}\frac{\tilde{\Psi}_N^\wedge(n)\tilde{\Psi}_N^\wedge(m)}{n(n+1)\,m(m+1)}\,\nabla^*_\xi Y_{n,k}(\xi)\cdot\nabla^*_\xi Y_{m,l}(\xi)\;\xi\cdot\int_{\Omega_r}\eta\,\big(\nabla^*_\eta Y_{n,k}(\eta)\cdot\nabla^*_\eta Y_{m,l}(\eta)\big)\,d\omega(r\eta)\\&\quad=\frac{1}{2r^2}\sum_{n=1}^{\lfloor\kappa N\rfloor}\sum_{k=1}^{2n+1}\sum_{m=1}^{\lfloor\kappa N\rfloor}\sum_{l=1}^{2m+1}\tilde{\Psi}_N^\wedge(n)\tilde{\Psi}_N^\wedge(m)\,\frac{n(n+1)}{m(m+1)}\int_{\Omega_r}(\xi\cdot\eta)\,Y_{n,k}(\xi)Y_{m,l}(\xi)\,Y_{n,k}(\eta)Y_{m,l}(\eta)\,d\omega(r\eta)\\&\qquad+\frac{1}{2r^2}\sum_{n=1}^{\lfloor\kappa N\rfloor}\sum_{k=1}^{2n+1}\sum_{m=1}^{\lfloor\kappa N\rfloor}\sum_{l=1}^{2m+1}\tilde{\Psi}_N^\wedge(n)\tilde{\Psi}_N^\wedge(m)\int_{\Omega_r}(\xi\cdot\eta)\,Y_{n,k}(\xi)Y_{m,l}(\xi)\,Y_{n,k}(\eta)Y_{m,l}(\eta)\,d\omega(r\eta)\\&\qquad+\frac{1}{2r^2}\sum_{n=1}^{\lfloor\kappa N\rfloor}\sum_{k=1}^{2n+1}\sum_{m=1}^{\lfloor\kappa N\rfloor}\sum_{l=1}^{2m+1}\frac{\tilde{\Psi}_N^\wedge(n)\tilde{\Psi}_N^\wedge(m)}{n(n+1)}\int_{\Omega_r}(\xi\cdot\eta)\,\Delta^*_\xi\big(Y_{n,k}(\xi)Y_{m,l}(\xi)\big)\,Y_{n,k}(\eta)Y_{m,l}(\eta)\,d\omega(r\eta)\\&\quad=\sum_{n=1}^{\lfloor\kappa N\rfloor}\sum_{m=1}^{\lfloor\kappa N\rfloor}\tilde{\Psi}_N^\wedge(n)\tilde{\Psi}_N^\wedge(m)\Big(\frac{1}{2}+\frac{n(n+1)-2}{2m(m+1)}\Big)\frac{(2n+1)(2m+1)}{8\pi}\int_{-1}^{1}tP_n(t)P_m(t)\,dt,\end{aligned}\tag{6.45}$$

where we have used the addition theorem, the property $\Delta^*_\xi P_n(\xi\cdot\eta)=\Delta^*_\eta P_n(\xi\cdot\eta)$, Green's formulas, and $\Delta^*_\eta\,\eta=-2\eta$ in the last row. Eventually, combining (6.42), (6.43), and

(6.45), we are led to

$$\int_{-1}^{1}t\,|\tilde{\Psi}_N(t)|^2\,dt=\sum_{n=1}^{\lfloor\kappa N\rfloor}\sum_{m=1}^{\lfloor\kappa N\rfloor}\tilde{\Psi}_N^\wedge(n)\tilde{\Psi}_N^\wedge(m)\Big(\frac{3}{4}+\frac{n(n+1)-2}{4m(m+1)}\Big)\frac{(2n+1)(2m+1)}{8\pi^2r^2}\int_{-1}^{1}tP_n(t)P_m(t)\,dt.\tag{6.46}$$

Observing $\lim_{m,n\to\infty}\big(\frac{3}{4}+\frac{n(n+1)-2}{4m(m+1)}\big)=1$ if $m\in\{n-1,n,n+1\}$, we can proceed with (6.46) in a similar manner as in (4.31) and (4.32). Together with

$$\|\tilde{\Psi}_N\|^2_{L^2([-1,1])}=\sum_{n=0}^{\lfloor\kappa N\rfloor}\frac{2n+1}{8\pi^2r^2}\,|\tilde{\Psi}_N^\wedge(n)|^2=O(N^{-2\delta})+\sum_{n=N+1}^{\lfloor\kappa N\rfloor}\frac{2n+1}{8\pi^2r^2}\,|\tilde{\Psi}_N^\wedge(n)|^2,\tag{6.47}$$

this leads to the desired property

$$\lim_{N\to\infty}E_N(t)=\lim_{N\to\infty}\int_{-1}^{1}t\,F_N(t)\,dt=\lim_{N\to\infty}\frac{\int_{-1}^{1}t\,|\tilde{\Psi}_N(t)|^2\,dt}{\|\tilde{\Psi}_N\|^2_{L^2([-1,1])}}=1,\tag{6.48}$$

concluding the proof. Acknowledgements. This work was partly conducted at UNSW Australia and supported by a fellowship within the Postdoc-program of the German Academic Exchange Service (DAAD). The author thanks Ian Sloan, Rob Womersley, and Yu Guang Wang for valuable discussions on Proposition 4.2.

References

[1] G. Backus, R. Parker, C. Constable. Foundations of Geomagnetism. Cambridge University Press, 1996.

[2] F. Bauer, M. Gutting, M.A. Lukas. Evaluation of parameter choice methods for regularization of ill-posed problems in geomathematics. In: W. Freeden, Z. Nashed, T. Sonar (eds.), Handbook of Geomathematics, 2nd Ed., Springer, to appear.

[3] M. Bayer, W. Freeden, T. Maier. A Vector Wavelet Approach in Iono- and Magnetospheric Geomagnetic Satellite Data. J. Atm. Solar-Terr. Phys. 63, 581-597, 2001.

[4] F. Boschetti, P. Hornby, F.G. Horowitz. Wavelet based inversion of gravity data. Explor. Geophys. 32, 48-55, 2001.

[5] A. Chambodut, I. Panet, M. Mandea, M. Diament, M. Holschneider, O. Jamet. Wavelet frames: an alternative to spherical harmonic representation of potential fields. Geophys. J. Int. 163, 875-899, 2005.

[6] G. Cooper. The stable downward continuation of potential field data. Explor. Geophys. 35, 260-265, 2004.

[7] S. Dahlke, W. Dahmen, E. Schmitt, I. Weinreich. Multiresolution analysis and wavelets on $S^2$ and $S^3$. Num. Funct. Anal. Opt. 16, 19-41, 1995.

[8] J.R. Driscoll, D.M. Healy Jr. Computing Fourier Transforms and Convolutions on the 2-Sphere. Adv. Appl. Math. 15, 202-250, 1994.

[9] D. Fischer, V. Michel. Sparse regularization of inverse gravimetry - case study: spatial and temporal mass variations in South America. Inverse Problems 28, 065012, 2012.

[10] D. Fischer, V. Michel. Automatic best-basis selection for geophysical tomographic inverse problems. Geophys. J. Int. 193, 1291-1299, 2013.

[11] W. Freeden. On Approximation by Harmonic Splines. Manusc. Geod. 6, 193-244, 1981.

[12] W. Freeden. Multiscale Modelling of Spaceborne Geodata. Teubner, 1999.

[13] W. Freeden, C. Gerhards. Poloidal and Toroidal Field Modeling in Terms of Locally Supported Vector Wavelets. Math. Geosc. 42, 817-838, 2010.

[14] W. Freeden, S. Pereverzev. Spherical Tikhonov Regularization Wavelets in Satellite Gravity Gradiometry with Random Noise. J. Geod. 74, 730-736, 2001.

[15] W. Freeden, F. Schneider. Regularization Wavelets and Multiresolution. Inverse Problems 14, 225-243, 1998.

[16] W. Freeden, M. Schreiner. Local Multiscale Modeling of Geoidal Undulations from Deflections of the Vertical. J. Geod. 78, 641-651, 2006.

[17] W. Freeden, M. Schreiner. Spherical Functions of Mathematical Geosciences: A Scalar, Vectorial, and Tensorial Setup. Springer, 2009.

[18] W. Freeden, U. Windheuser. Combined Spherical Harmonics and Wavelet Expansion - A Future Concept in Earth's Gravitational Potential Determination. Appl. Comp. Harm. Anal. 4, 1-37, 1997.

[19] E. Friis-Christensen, H. Lühr, G. Hulot. Swarm: A constellation to study the Earth's magnetic field. Earth Planets Space 58, 351-358, 2006.

[20] C. Gerhards. Spherical decompositions in a global and local framework: theory and an application to geomagnetic modeling. GEM - Int. J. Geomath. 1, 205-256, 2011.

[21] C. Gerhards. Locally Supported Wavelets for the Separation of Spherical Vector Fields with Respect to their Sources. Int. J. Wavel. Multires. Inf. Process. 10, 1250034, 2012.

[22] C. Gerhards. Multiscale Modeling of the Geomagnetic Field and Ionospheric Currents. In: W. Freeden, Z. Nashed, T. Sonar (eds.), Handbook of Geomathematics, 2nd Ed., Springer, to appear.

[23] D. Gubbins, D. Ivers, S.M. Masterton, D. Winch. Analysis of Lithospheric Magnetization in Vector Spherical Harmonics. Geophys. J. Int. 187, 99-117, 2011.

[24] G.V. Haines. Spherical Cap Harmonic Analysis. J. Geophys. Res. 90, 2583-2591, 1985.

[25] K. Hesse, R.S. Womersley. Numerical integration with polynomial exactness over a spherical cap. Adv. Comp. Math. 36, 451-483, 2012.

[26] M. Holschneider. Continuous wavelet transforms on the sphere. J. Math. Phys. 37, 4156-4165, 1996.

[27] M. Holschneider, A. Chambodut, M. Mandea. From Global to Regional Analysis of the Magnetic Field on the Sphere Using Wavelet Frames. Phys. Earth Planet. Int. 135, 107-124, 2003.

[28] R. Klees, T. Wittwer. Local Gravity Field Modeling with Multipole Wavelets. In: P. Tregoning, C. Rizos (eds.), Dynamic Planet, International Association of Geodesy Symposia 130, Springer, 2007.

[29] S. Lu, S. Pereverzev. Multiparameter Regularization in Downward Continuation of Satellite Data. In: W. Freeden, Z. Nashed, T. Sonar (eds.), Handbook of Geomathematics, Springer, 2010.

[30] T. Maier. Wavelet-Mie-Representations for Solenoidal Vector Fields with Applications to Ionospheric Geomagnetic Data. SIAM J. Appl. Math. 65, 1888-1912, 2005.

[31] T. Maier, C. Mayer. Multiscale Downward Continuation of CHAMP FGM-Data for Crustal Field Modelling. In: C. Reigber, H. Lühr, P. Schwintzer (eds.), First CHAMP Mission Results for Gravity, Magnetic and Atmospheric Studies, Springer, 2003.

[32] S.G. Mallat, Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process. 41, 3397-3415, 1993.

[33] S. Maus, F. Yin, H. Lühr, C. Manoj, M. Rother, J. Rauberg, I. Michaelis, C. Stolle, R.D. Müller.
Resolution of direction of oceanic magnetic lineations by the sixth-generation lithospheric magnetic field model from CHAMP satellite magnetic measurements. Geochem. Geophys. Geosys. 9, DOI: 10.1029/2008GC001949, 2008. [34] C. Mayer, T. Maier. Separating Inner and Outer Earth’s Magnetic Field from CHAMP Satellite Measurements by Means of Vector Scaling Functions and Wavelets. Geophys. J. Int. 167, 1188-1203, 2006.

34

[35] V. Michel. Scale continuous, Scale discretized and scale discrete harmonic wavelets for the outer and the inner space of a sphere and their application to an inverse problem in geomathematics. Appl. Comp. Harm. Anal. 12, 77-99, 2002. [36] V. Michel. Optimally localized approximate identities on the 2-sphere. Num. Funct. Anal. Opt. 32, 877-903, 2011. [37] N.K. Pavlis, S.A. Holmes, S.C. Kenyon, J.K. Factor. The development and evaluation of the Earth Gravitational Model 2008 (EGM2008). J. Geophys. Res. 117, B04406, 2012. [38] S. Pereverzev, E. Schock. Error estimates for band-limited spherical regularization wavelets in an inverse problem of satellite geodesy. Inverse Problems 15, 881-890, 1999. [39] A. Plattner, F.J. Simons. Spatiospectral concentration of vector fields on a sphere. Appl. Comp. Harm. Anal., DOI: 10.1016/j.acha.2012.12.001, 2013. [40] A. Plattner, F.J. Simons. Potential-field estimation using scalar and vector Slepian functions at satellite altitude. In: W. Freeden, Z. Nashed, T. Sonar (eds.), Handbook of Geomathematics, 2nd Ed., Springer, to appear. [41] P. Schr¨ oder, W. Sweldens. Spherical Wavelets: Efciently Representing Functions on the Sphere. In: S.G. Mair, R. Cook (eds.), Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, 1995. [42] L. Shure, R.L. Parker, G.E. Backus. Harmonic Splines for Geomagnetic Modeling. Phys. Earth Planet. Int. 28, 215-229, 1982. [43] F.J. Simons, F. A. Dahlen, M. A. Wieczorek. Spatiospectral localization on a sphere. SIAM Review 48, 504-536, 2006. [44] F. J. Simons, A. Plattner. Scalar and Vector Slepian Functions, Spherical Signal Estimation and Spectral Analysis. In: W. Freeden, Z. Nashed, T. Sonar (eds.), Handbook of Geomathematics, 2nd Ed., Springer, to appear. [45] E. Thebault. Global lithospheric magnetic field modelling by successive regional analysis. Earth Planets Space 58, 485-495, 2006. [46] E. Thebault, J.J. Schott, M. Mandea. 
Revised Spherical Cap Harmonic Analysis (R-SCHA): Validation and Properties. J. Geophys. Res. 111, B01102, 2006. [47] H. Trompat, F. Boschetti. P. Hornby. Improved downward continuation of potential field data. Explor. Geophys. 34, 249-256, 2003. [48] I.N. Tziavos, V.D. Andritsanos, R. Forsberg, A.V. Olesen. Numerical investigation of downward continuation methods for airborne gravity data. In: C. Jekeli, L. Bastos, J. Fernandes (eds.), Gravity, Geoid and Space Missions - GGSM 2004 IAG International Symposium Porto, Springer, 2005. 35

[49] Y.G. Wang, I.H. Sloan, R. Womersley. Dirichlet Kernel for the Sphere – Estimates of Riemann Localization. Preprint, 2013. [50] Y.G. Wang, I.H. Sloan, R. Womersley. Local Decay of Filtered Polynomial Kernels – The Dependence on Filtered Smoothness. Preprint, 2013.