arXiv:1707.00840v2 [math.NA] 5 Jul 2017
NEW AND SHARP BOUNDS FOR ORTHOGONAL POLYNOMIAL APPROXIMATION OF FUNCTIONS IN FRACTIONAL SOBOLEV-TYPE SPACES I: CHEBYSHEV CASE
WENJIE LIU1,2, LI-LIAN WANG2 AND HUIYUAN LI3

2000 Mathematics Subject Classification. 41A10, 41A25, 41A50, 65N35, 65M60.
Key words and phrases. Approximation by Chebyshev polynomials, fractional integrals/derivatives, fractional Sobolev-type spaces, singular functions, optimal estimates.
1 Department of Mathematics, Harbin Institute of Technology, 150001, China.
2 Division of Mathematical Sciences, School of Physical and Mathematical Sciences, Nanyang Technological University, 637371, Singapore. The research of this author is partially supported by Singapore MOE AcRF Tier 1 Grant (RG 27/15), and Singapore MOE AcRF Tier 2 Grant (MOE 2013-T2-1-095, ARC 44/13).
3 State Key Laboratory of Computer Science/Laboratory of Parallel Computing, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China. The research of this author is partially supported by the National Natural Science Foundation of China (91130014, 11471312 and 91430216).
Abstract. We introduce fractional Sobolev-type spaces involving Riemann-Liouville (RL) fractional integrals/derivatives to characterise the regularity of singular functions (or functions with limited regularity). These spaces arise naturally from exact representations of the Chebyshev expansion coefficients, which are derived from fractional integration by parts (under the weakest possible conditions) and expressed in terms of generalised Gegenbauer functions of fractional degree (GGF-Fs). Under this framework, we are able to obtain the optimal decay rate of the Chebyshev expansion coefficients for a large class of functions with interior and endpoint singularities. We can also derive new and sharp error estimates for Chebyshev spectral expansions and the related interpolation and quadrature. As a by-product of the main results, we improve the existing bounds in the usual Sobolev spaces of integer-order derivatives.
1. Introduction

This paper is the first work of a series on orthogonal polynomial approximation of functions with limited regularity and interior/endpoint singularities in fractional Sobolev-type spaces. Polynomial approximation is of fundamental importance for the algorithm development and numerical analysis of many computational methods, e.g., p/hp finite elements or spectral/spectral-element methods (see, e.g., [20, 40, 11, 16, 27, 41]). Many results therein are typically of the form
$$ \|Q_N u - u\|_{B^l} \le c\, N^{-\sigma} |u|_{B^r}, \quad \sigma \ge 0, \qquad (1.1)$$
where $Q_N$ is an orthogonal projection (or interpolation) operator onto the set of all polynomials of degree at most $N$, and $c$ is a positive constant independent of $N$ and $u$. Here, $B^l$ is a certain Sobolev space, and $B^r$ is a related Sobolev or Besov space. In theory, one expects that (a) the space $B^r$ should contain as many classes of functions as possible; and (b) it should best characterise the regularity of the underlying functions, resulting in the optimal order of convergence $O(N^{-\sigma})$. In general, the following spaces are used for $B^r$.

(i) $B^r$ is the standard $H^m_\omega$-Sobolev space with integer $m$, where $\omega$ is usually the weight function under which the polynomials are orthogonal (cf. [11, 16, 27]). However, this type of space does not lead to the optimal order for functions with endpoint singularities (cf. [26]).

(ii) $B^r$ is the anisotropic (or non-uniform) Jacobi-weighted Sobolev space (cf. [20, 8, 9, 35, 26, 24, 41, 53]). For example, in the Gegenbauer approximation, the space $B^r$ is $H^{m,\beta}(\Omega)$ with integer $m \ge 0$, $\beta > -1$ and $\Omega := (-1,1)$, defined as the closure of $C^\infty$-functions and endowed with the weighted norm
$$ \|u\|_{H^{m,\beta}(\Omega)} = \Big\{ \sum_{k=0}^{m} \int_{-1}^{1} |u^{(k)}(x)|^2 (1-x^2)^{\beta+k}\, dx \Big\}^{1/2}. \qquad (1.2)$$
Compared with the standard Sobolev space in (i), this framework can better describe endpoint singularities. However, for $(1+x)^\gamma$-type singular functions (with $\gamma$ a positive non-integer), the estimate (1.1) is not optimal for this type of corner singularity.

(iii) In a series of works [8, 9, 10], Babuška and Guo introduced the Jacobi-weighted Besov space, defined by space interpolation and the so-called K-method, to treat e.g. $(1+x)^\gamma$-type singularities. For instance, the Besov space $B^{s,\beta}_{2,2}(\Omega) = (H^{l,\beta}(\Omega), H^{m,\beta}(\Omega))_{\theta,2}$ with integers $l < m$ and $s = (1-\theta)l + \theta m$, is equipped with the norm
$$ \|u\|_{B^{s,\beta}_{2,2}(\Omega)} = \Big( \int_0^\infty t^{-2\theta} |K(t,u)|^2 \frac{dt}{t} \Big)^{1/2}, \quad K(t,u) = \inf_{u=v+w}\big( \|v\|_{H^{l,\beta}(\Omega)} + t\,\|w\|_{H^{m,\beta}(\Omega)} \big). \qquad (1.3)$$
To deal with $(1+x)^\gamma \log^\nu(1+x)$-type corner singularities, Babuška and Guo further modified the K-method by incorporating a log-factor into the norm. It is noteworthy that the Besov framework played a fundamental role in the numerical analysis of problems with corner singularities, which has been a topic of long-standing interest (cf. [6, 47, 48, 7, 23, 46, 40, 28]).

(iv) As with the Fourier approximation, $B^r$ can be a fractional Sobolev space related to the decay rate of the expansion coefficients (cf. [51, 17, 21]). In practice, one needs to know the expansion coefficients in order to identify the space to which the underlying function belongs.

(v) Most of the aforementioned spaces are $L^2$-type spaces, but the related estimates cannot lead to the optimal order of convergence for some functions with interior singularities. For example, consider $u(x) = |x|$ and note that $u'' = 2\delta(x) \in L^1(\Omega)$ but $u'' \notin L^2(\Omega)$, so one order may be lost. As shown in [43, Theorems 4.2-4.3] (see also Lemma 5.1), the optimal order can be achieved in a space that is a subspace of $L^1(\Omega)$ (cf. (5.1)). We refer to e.g. [43, 52, 44, 32] for more details. However, the Sobolev space $B^r$ therein still involves integer-order derivatives, which is not best suited to describe the regularity of e.g. $u(x) = |x|^{1+\gamma}$ with $\gamma \in (0,1)$.

In this paper, we introduce a new type of fractional Sobolev spaces that are naturally derived from the exact representation of the Chebyshev expansion coefficients involving RL fractional integrals/derivatives and the so-called GGF-Fs (a generalisation of the Gegenbauer polynomials). We show that this framework meets the two requirements (a)-(b) mentioned earlier. It can handle both interior singularities (as a nontrivial fractional extension of the framework in (v)) and endpoint singularities (as an alternative to the Besov framework in (iii) for dealing with corner singularities through more explicit norms). Here, we focus on Chebyshev approximation and stress the analysis of the expansion coefficients. Indeed, as highlighted in [32], the error estimates of Chebyshev spectral expansions and the related interpolation and quadrature all essentially depend on the decay rate of the expansion coefficients. The main contributions of this paper reside in the following aspects.

(1) Under this new framework, we derive optimal bounds for the Chebyshev approximation of a large class of singular functions with interior and endpoint singularities. This can be extended to approximation by Jacobi polynomials.

(2) Taking the fractional order $s \to 1$, we improve the existing bounds (see [43, 52, 44, 32]) in several senses (cf. Section 5).

(3) With the exact formulas at our disposal (cf. Theorem 4.1), we obtain analytic formulas for the Chebyshev coefficients of $|x-\theta|^\alpha$-type and $|x-\theta|^\alpha \log|x-\theta|$-type singular functions with $\theta \in [-1,1]$ (see Subsection 4.4 and Section 6). Some are new and difficult to derive by existing means (cf. [47, 14, 13, 49]).

The paper is organised as follows. In Sections 2-3, we introduce the GGF-Fs and present some important properties, including uniform bounds and RL fractional integral/derivative formulas.
We derive the main results in Sections 4 and 6, and improve the existing estimates in Sobolev spaces with integer-order derivatives in Section 5.

2. Generalised Gegenbauer functions of fractional degree

In this section, we collect some relevant properties of the hypergeometric functions and Gegenbauer polynomials, upon which we define the GGF-Fs and derive their properties. As we shall see, the GGF-Fs play an important part in the analysis of this work.

2.1. Hypergeometric functions and Gegenbauer polynomials. Let $\mathbb{Z}$ and $\mathbb{R}$ be the sets of all integers and real numbers, respectively. Denote
$$ \mathbb{N} = \{k \in \mathbb{Z} : k \ge 1\}, \quad \mathbb{N}_0 := \{0\} \cup \mathbb{N}, \quad \mathbb{R}^+ := \{a \in \mathbb{R} : a > 0\}, \quad \mathbb{R}^+_0 := \{0\} \cup \mathbb{R}^+. \qquad (2.1)$$
For $a \in \mathbb{R}$, the rising factorial in the Pochhammer symbol is defined by
$$ (a)_0 = 1; \qquad (a)_j = a(a+1)\cdots(a+j-1), \quad \forall\, j \in \mathbb{N}. \qquad (2.2)$$
The hypergeometric function is the power series (cf. [1])
$$ {}_2F_1(a,b;c;z) = \sum_{j=0}^{\infty} \frac{(a)_j (b)_j}{(c)_j\, j!}\, z^j = 1 + \sum_{j=1}^{\infty} \frac{a(a+1)\cdots(a+j-1)\; b(b+1)\cdots(b+j-1)}{c(c+1)\cdots(c+j-1)} \frac{z^j}{1\cdot 2\cdots j}, \qquad (2.3)$$
where $a,b,c \in \mathbb{R}$ and $-c \notin \mathbb{N}_0$. Note that it converges absolutely for all $|z| < 1$, and apparently,
$$ {}_2F_1(a,b;c;0) = 1, \qquad {}_2F_1(a,b;c;z) = {}_2F_1(b,a;c;z). \qquad (2.4)$$
If $a = -n$, $n \in \mathbb{N}_0$, then $(a)_j = 0$ for $j \ge n+1$, so ${}_2F_1(-n,b;c;x)$ reduces to a polynomial of degree $\le n$. The following properties can be found in [5, Ch. 2], if not stated otherwise.

• If $c - a - b > 0$, the series (2.3) converges absolutely at $z = \pm 1$, and
$$ {}_2F_1(a,b;c;1) = \frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}. \qquad (2.5)$$
Here, the Gamma function with negative non-integer arguments should be understood through Euler's reflection formula (cf. [1]):
$$ \Gamma(1-a)\Gamma(a) = \frac{\pi}{\sin(\pi a)}, \quad a \notin \mathbb{Z}. \qquad (2.6)$$
Note that $\Gamma(-a) = \infty$ if $-a \in \mathbb{N}$.

• If $-1 < c - a - b \le 0$, the series (2.3) converges conditionally at $z = -1$ but diverges at $z = 1$; while for $c - (a+b) \le -1$, it diverges at $z = \pm 1$. In fact, it has the following singular behaviour at $z = 1$:
$$ \lim_{z \to 1^-} \frac{{}_2F_1(a,b;c;z)}{-\ln(1-z)} = \frac{\Gamma(c)}{\Gamma(a)\Gamma(b)}, \quad \text{if } c = a + b, \qquad (2.7)$$
and
$$ \lim_{z \to 1^-} \frac{{}_2F_1(a,b;c;z)}{(1-z)^{c-a-b}} = \frac{\Gamma(c)\Gamma(a+b-c)}{\Gamma(a)\Gamma(b)}, \quad \text{if } c < a + b. \qquad (2.8)$$

Recall the transform identity: for $a,b,c \in \mathbb{R}$ and $-c \notin \mathbb{N}_0$,
$$ {}_2F_1(a,b;c;z) = (1-z)^{c-a-b}\, {}_2F_1(c-a,c-b;c;z), \quad |z| < 1. \qquad (2.9)$$
The hypergeometric function satisfies the differential equation (cf. [5, P. 98]):
$$ \big( z^c (1-z)^{a+b-c+1} y'(z) \big)' = ab\, z^{c-1} (1-z)^{a+b-c}\, y(z), \qquad y(z) = {}_2F_1(a,b;c;z). \qquad (2.10)$$
We shall use the value at $z = 1/2$ (cf. [37, (15.4.28)]):
$$ {}_2F_1\Big(a,b;\frac{a+b+1}{2};\frac{1}{2}\Big) = \frac{\sqrt{\pi}\,\Gamma((a+b+1)/2)}{\Gamma((a+1)/2)\,\Gamma((b+1)/2)}. \qquad (2.11)$$
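The classical evaluations (2.5) and (2.11) are easy to sanity-check numerically. The following minimal sketch (not part of the paper; it assumes mpmath's hyp2f1 and gamma, with arbitrarily chosen parameters) compares both sides of each identity.

```python
# Numerical sanity check of Gauss's summation formula (2.5) and the z = 1/2 value (2.11).
from mpmath import mp, hyp2f1, gamma, sqrt, pi

mp.dps = 30  # working precision (decimal digits)

# (2.5): 2F1(a, b; c; 1) = Gamma(c)Gamma(c-a-b) / (Gamma(c-a)Gamma(c-b)) for c - a - b > 0
a, b, c = 0.3, -0.7, 1.9
lhs = hyp2f1(a, b, c, 1)
rhs = gamma(c) * gamma(c - a - b) / (gamma(c - a) * gamma(c - b))
print(abs(lhs - rhs))   # should be ~0 at working precision

# (2.11): 2F1(a, b; (a+b+1)/2; 1/2) = sqrt(pi) Gamma((a+b+1)/2) / (Gamma((a+1)/2) Gamma((b+1)/2))
a, b = 0.4, 1.3
lhs = hyp2f1(a, b, (a + b + 1) / 2, 0.5)
rhs = sqrt(pi) * gamma((a + b + 1) / 2) / (gamma((a + 1) / 2) * gamma((b + 1) / 2))
print(abs(lhs - rhs))   # should be ~0 at working precision
```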
Many functions are associated with the hypergeometric function. For example, the Jacobi polynomial with $\alpha, \beta > -1$ (cf. Szegő [42]) is defined by
$$ P_n^{(\alpha,\beta)}(x) = \frac{(\alpha+1)_n}{n!}\, {}_2F_1\Big(-n, n+\alpha+\beta+1; \alpha+1; \frac{1-x}{2}\Big) = (-1)^n \frac{(\beta+1)_n}{n!}\, {}_2F_1\Big(-n, n+\alpha+\beta+1; \beta+1; \frac{1+x}{2}\Big), \quad n \in \mathbb{N}_0, \qquad (2.12)$$
for $x \in (-1,1)$, which satisfies
$$ P_n^{(\alpha,\beta)}(-x) = (-1)^n P_n^{(\beta,\alpha)}(x), \qquad P_n^{(\alpha,\beta)}(1) = \frac{(\alpha+1)_n}{n!}. \qquad (2.13)$$
For $\alpha, \beta > -1$, the Jacobi polynomials are orthogonal with respect to the Jacobi weight function $\omega^{(\alpha,\beta)}(x) = (1-x)^\alpha(1+x)^\beta$, namely,
$$ \int_{-1}^{1} P_n^{(\alpha,\beta)}(x)\, P_{n'}^{(\alpha,\beta)}(x)\, \omega^{(\alpha,\beta)}(x)\, dx = \gamma_n^{(\alpha,\beta)}\, \delta_{nn'}, \qquad (2.14)$$
where $\delta_{nn'}$ is the Kronecker delta, and
$$ \gamma_n^{(\alpha,\beta)} = \frac{2^{\alpha+\beta+1}\,\Gamma(n+\alpha+1)\,\Gamma(n+\beta+1)}{(2n+\alpha+\beta+1)\, n!\, \Gamma(n+\alpha+\beta+1)}. \qquad (2.15)$$
In fact, the definition (2.12) is valid for all $\alpha, \beta \in \mathbb{R}$, and (2.13) and many other properties still hold, but the orthogonality is lacking in general (cf. Szegő [42, P. 63-67]).

Throughout this paper, the Gegenbauer polynomial with $\lambda > -1/2$ is defined by
$$ G_n^{(\lambda)}(x) = \frac{P_n^{(\lambda-1/2,\lambda-1/2)}(x)}{P_n^{(\lambda-1/2,\lambda-1/2)}(1)} = {}_2F_1\Big(-n, n+2\lambda; \lambda+\frac{1}{2}; \frac{1-x}{2}\Big) = (-1)^n\, {}_2F_1\Big(-n, n+2\lambda; \lambda+\frac{1}{2}; \frac{1+x}{2}\Big), \quad x \in (-1,1), \qquad (2.16)$$
which has a normalisation different from that in Szegő [42]. Note that the Chebyshev polynomial is
$$ T_n(x) = G_n^{(0)}(x) = {}_2F_1\Big(-n, n; \frac{1}{2}; \frac{1-x}{2}\Big) = \cos(n \arccos x). \qquad (2.17)$$
Under this normalisation, we derive from (2.14)-(2.15) the orthogonality
$$ \int_{-1}^{1} G_n^{(\lambda)}(x)\, G_m^{(\lambda)}(x)\, \omega_\lambda(x)\, dx = \gamma_n^{(\lambda)} \delta_{nm}; \qquad \gamma_n^{(\lambda)} = \frac{2^{2\lambda-1}\,\Gamma^2(\lambda+1/2)\, n!}{(n+\lambda)\,\Gamma(n+2\lambda)}, \qquad (2.18)$$
where $\omega_\lambda(x) = (1-x^2)^{\lambda-1/2}$. In the analysis, we shall use the derivative relation derived from the generalised Rodrigues formula (see [42, (4.10.1)] with $\alpha = \beta = \lambda - 1/2 > -1$ and $m = 1$):
$$ \omega_\lambda(x)\, G_n^{(\lambda)}(x) = -\frac{1}{2\lambda+1} \frac{d}{dx}\big( \omega_{\lambda+1}(x)\, G_{n-1}^{(\lambda+1)}(x) \big), \quad n \ge 1. \qquad (2.19)$$

2.2. Generalised Gegenbauer functions of fractional degree. Define the GGF-Fs by allowing $n$ in (2.16) to be real, as follows.

Definition 2.1. For real $\lambda > -1/2$ and real $\nu \ge 0$, the right GGF-F of degree $\nu$ is defined by
$$ {}^{r}G_\nu^{(\lambda)}(x) = {}_2F_1\Big(-\nu, \nu+2\lambda; \lambda+\frac{1}{2}; \frac{1-x}{2}\Big) = 1 + \sum_{j=1}^{\infty} \frac{(-\nu)_j (\nu+2\lambda)_j}{j!\, (\lambda+1/2)_j} \Big(\frac{1-x}{2}\Big)^j, \qquad (2.20)$$
for $x \in (-1,1)$; while the left GGF-F of degree $\nu$ is defined by
$$ {}^{l}G_\nu^{(\lambda)}(x) = (-1)^{[\nu]}\, {}_2F_1\Big(-\nu, \nu+2\lambda; \lambda+\frac{1}{2}; \frac{1+x}{2}\Big) = (-1)^{[\nu]}\Big( 1 + \sum_{j=1}^{\infty} \frac{(-\nu)_j (\nu+2\lambda)_j}{j!\, (\lambda+1/2)_j} \Big(\frac{1+x}{2}\Big)^j \Big), \quad x \in (-1,1), \qquad (2.21)$$
where [ν] is the largest integer ≤ ν.
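Since (2.20)-(2.21) are plain Gauss hypergeometric functions, the GGF-Fs can be evaluated with standard special-function libraries. The sketch below (not from the paper; it assumes scipy's hyp2f1, with arbitrary test parameters) evaluates the right GGF-F and checks that integer degrees recover the normalised Chebyshev polynomial of (2.17).

```python
# Evaluate the right GGF-F of (2.20) and check the integer-degree reduction (2.16)-(2.17).
import numpy as np
from scipy.special import hyp2f1

def rG(nu, lam, x):
    """Right GGF-F:  rG_nu^{(lam)}(x) = 2F1(-nu, nu + 2*lam; lam + 1/2; (1 - x)/2)."""
    return hyp2f1(-nu, nu + 2 * lam, lam + 0.5, (1 - x) / 2)

x = np.linspace(-0.99, 0.99, 201)

# lam = 0 and integer nu = n: should coincide with T_n(x) = cos(n arccos x), cf. (2.17)
n = 5
print(np.max(np.abs(rG(n, 0.0, x) - np.cos(n * np.arccos(x)))))  # round-off level

# non-integer degree: a genuinely non-polynomial function, normalised so that rG(1) = 1, cf. (2.22b)
print(rG(5.3, 0.0, 1.0))
```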
Remark 2.1. (i) For $\lambda = 1/2$, the right GGF-F reduces to the Legendre function (cf. [1]): $P_\nu(x) = {}^{r}G_\nu^{(1/2)}(x)$.
(ii) In the Handbook [37, (15.9.15)], ${}^{r}G_\nu^{(\lambda)}(x)$ (with a different normalisation) is defined as the Gegenbauer function. However, little is said there about its properties.
(iii) Mirevski et al. [33] defined the (generalised or) g-Jacobi function through the (fractional) Rodrigues formula and derived an equivalent representation in terms of the hypergeometric function. The right GGF-F ${}^{r}G_\nu^{(\lambda)}(x)$ coincides with the g-Jacobi function with $\alpha = \beta = \lambda - 1/2 > -1$ [33, Thm 12]. However, there are some deficiencies in the definition of the g-functions and in the proof of the important formula in [33, Thm 10], where the fractional derivative operator ${}^{R}_{0}D_x^{\nu}$ (cf. (3.4)) should be corrected to ${}_{x}D_1^{\nu}$ in Definition 9, and the condition $\alpha > -1$ in [33, (6)] is violated in the derivation of [33, (9),(13)-(14)].

Observe from (2.16) and Definition 2.1 that the GGF-Fs reduce to the usual Gegenbauer polynomials when $\nu \in \mathbb{N}_0$, but they are non-polynomials when $\nu$ is not an integer.

Proposition 2.1. The GGF-Fs defined in Definition 2.1 satisfy
$$ {}^{r}G_n^{(\lambda)}(x) = {}^{l}G_n^{(\lambda)}(x) = G_n^{(\lambda)}(x), \quad n \in \mathbb{N}_0; \qquad (2.22a)$$
$$ {}^{r}G_\nu^{(\lambda)}(-x) = (-1)^{[\nu]}\, {}^{l}G_\nu^{(\lambda)}(x), \qquad {}^{r}G_\nu^{(\lambda)}(1) = 1, \qquad {}^{l}G_\nu^{(\lambda)}(-1) = (-1)^{[\nu]}. \qquad (2.22b)$$
It is seen from (2.27a) below that for $-1/2 < \lambda < 1/2$, ${}^{r}G_{n-\lambda_*}^{(\lambda)}(x) \to 0$ as $x \to -1^+$, while it diverges at $x = -1$ for $\lambda > 1/2$. As stated below, the GGF-Fs behave differently at $x = \pm 1$ when the parameter $\lambda$ lies in different ranges.

Proposition 2.2. Let $\nu \in \mathbb{R}^+_0$.
(i) If $-1/2 < \lambda < 1/2$ and $\nu + \lambda - 1/2 \notin \mathbb{N}_0$, then
$$ {}^{r}G_\nu^{(\lambda)}(-1) = \frac{\cos((\nu+\lambda)\pi)}{\cos(\lambda\pi)} = (-1)^{[\nu]}\, {}^{l}G_\nu^{(\lambda)}(1). \qquad (2.23)$$
(ii) If $\lambda = 1/2$ and $\nu \notin \mathbb{N}_0$, then
$$ \lim_{x \to -1^+} \frac{{}^{r}G_\nu^{(\lambda)}(x)}{\ln(1+x)} = \frac{\sin(\nu\pi)}{\pi} = (-1)^{[\nu]} \lim_{x \to 1^-} \frac{{}^{l}G_\nu^{(\lambda)}(x)}{\ln(1-x)}. \qquad (2.24)$$
(iii) If $\lambda > 1/2$ and $\nu \notin \mathbb{N}_0$, then
$$ \lim_{x \to -1^+} \Big(\frac{1+x}{2}\Big)^{\lambda-1/2} {}^{r}G_\nu^{(\lambda)}(x) = -\frac{\sin(\nu\pi)}{\pi}\, \frac{\Gamma(\lambda-1/2)\,\Gamma(\lambda+1/2)\,\Gamma(\nu+1)}{\Gamma(\nu+2\lambda)} = (-1)^{[\nu]} \lim_{x \to 1^-} \Big(\frac{1-x}{2}\Big)^{\lambda-1/2} {}^{l}G_\nu^{(\lambda)}(x). \qquad (2.25)$$

Proof. Thanks to (2.22b), it suffices to prove the results for ${}^{r}G_\nu^{(\lambda)}(x)$.
(i) By (2.5), (2.6) and (2.20),
$$ {}^{r}G_\nu^{(\lambda)}(-1) = {}_2F_1\big(-\nu, \nu+2\lambda; \lambda+1/2; 1\big) = \frac{\Gamma(\lambda+1/2)\,\Gamma(1/2-\lambda)}{\Gamma(\nu+\lambda+1/2)\,\Gamma(-\nu-\lambda+1/2)} = \frac{\sin((\nu+\lambda+1/2)\pi)}{\sin((\lambda+1/2)\pi)} = \frac{\cos((\nu+\lambda)\pi)}{\cos(\lambda\pi)}, \qquad (2.26)$$
which yields (2.23).
(ii) Using (2.6), (2.7) and (2.20), and noting that $\ln((1+x)/2)/\ln(1+x) \to 1$ as $x \to -1^+$, we obtain (2.24).
(iii) Next, taking $a = -\nu$, $b = \nu+2\lambda$, $c = \lambda+1/2$ and $z = (1-x)/2$ in (2.9), and using (2.4), we obtain
$$ {}_2F_1\Big(-\nu, \nu+2\lambda; \lambda+\frac{1}{2}; \frac{1-x}{2}\Big) = \Big(\frac{2}{1+x}\Big)^{\lambda-1/2}\, {}_2F_1\Big(\nu+\lambda+\frac{1}{2}, -\nu-\lambda+\frac{1}{2}; \lambda+\frac{1}{2}; \frac{1-x}{2}\Big). $$
For $\lambda > 1/2$, we find from (2.5) and (2.6) that
$$ {}_2F_1\Big(\nu+\lambda+\frac{1}{2}, -\nu-\lambda+\frac{1}{2}; \lambda+\frac{1}{2}; 1\Big) = -\frac{\sin(\nu\pi)}{\pi}\, \frac{\Gamma(\lambda-1/2)\,\Gamma(\lambda+1/2)\,\Gamma(\nu+1)}{\Gamma(\nu+2\lambda)}, $$
so we derive (2.25) from (2.20) and the above.
We have the following intimate relationship between GGF-Fs and Jacobi polynomials.

Proposition 2.3. For $\lambda_* := \lambda - 1/2 > -1$ and $n \ge \lambda_*$ with $n \in \mathbb{N}_0$, we have
$$ \frac{P_n^{(\lambda_*,-\lambda_*)}(x)}{P_n^{(\lambda_*,-\lambda_*)}(1)} = \Big(\frac{1+x}{2}\Big)^{\lambda_*}\, {}^{r}G_{n-\lambda_*}^{(\lambda)}(x), \qquad (2.27a)$$
$$ \frac{P_n^{(-\lambda_*,\lambda_*)}(x)}{P_n^{(\lambda_*,-\lambda_*)}(1)} = (-1)^{[\lambda_*]} \Big(\frac{1-x}{2}\Big)^{\lambda_*}\, {}^{l}G_{n-\lambda_*}^{(\lambda)}(x). \qquad (2.27b)$$

Proof. Taking $a = -n+\lambda_*$, $b = n+\lambda_*+1$, $c = \lambda_*+1$ and $z = (1-x)/2$ in (2.9), we obtain from (2.4) that
$$ {}_2F_1\Big(-n+\lambda_*, n+\lambda_*+1; \lambda_*+1; \frac{1-x}{2}\Big) = \Big(\frac{1+x}{2}\Big)^{-\lambda_*}\, {}_2F_1\Big(-n, n+1; \lambda_*+1; \frac{1-x}{2}\Big). \qquad (2.28)$$
By (2.12)-(2.13),
$$ {}_2F_1\Big(-n, n+1; \lambda_*+1; \frac{1-x}{2}\Big) = \frac{P_n^{(\lambda_*,-\lambda_*)}(x)}{P_n^{(\lambda_*,-\lambda_*)}(1)}, $$
and by (2.20) (taking $\nu = n-\lambda_*$), the hypergeometric function on the left-hand side of (2.28) equals ${}^{r}G_{n-\lambda_*}^{(\lambda)}(x)$. Thus, we derive the identity (2.27a). Thanks to (2.13) and (2.22b), the identity (2.27b) follows from (2.27a) immediately.

Remark 2.2. If $-1 < \lambda_* = \lambda - 1/2 < 1$, we rewrite (2.27) as
$$ {}^{r}G_{n-\lambda_*}^{(\lambda)}(x) = d_{n,\lambda}\, (1+x)^{-\lambda_*} P_n^{(\lambda_*,-\lambda_*)}(x); \qquad {}^{l}G_{n-\lambda_*}^{(\lambda)}(x) = (-1)^{[\lambda_*]}\, d_{n,\lambda}\, (1-x)^{-\lambda_*} P_n^{(-\lambda_*,\lambda_*)}(x), \qquad (2.29)$$
where $d_{n,\lambda} = 2^{\lambda_*}/P_n^{(\lambda_*,-\lambda_*)}(1)$. From (2.14)-(2.15), we immediately obtain the orthogonality (note: $\omega_\lambda(x) = (1-x^2)^{\lambda_*}$):
$$ \int_{-1}^{1} {}^{r}G_{n-\lambda_*}^{(\lambda)}(x)\, {}^{r}G_{m-\lambda_*}^{(\lambda)}(x)\, \omega_\lambda(x)\, dx = d_{n,\lambda}\, d_{m,\lambda} \int_{-1}^{1} P_n^{(\lambda_*,-\lambda_*)}(x)\, P_m^{(\lambda_*,-\lambda_*)}(x)\, \omega^{(\lambda_*,-\lambda_*)}(x)\, dx = d_{n,\lambda}^2\, \gamma_n^{(\lambda_*,-\lambda_*)}\, \delta_{nm}. \qquad (2.30)$$
Similarly, $\{{}^{l}G_{n-\lambda_*}^{(\lambda)}(x)\}$ are orthogonal with respect to the Gegenbauer weight $\omega_\lambda(x)$.

It is important to point out that $\{(1+x)^{-\lambda_*} P_n^{(\lambda_*,-\lambda_*)}\}$ corresponds to a special family of generalised Jacobi functions in [25]. Such types of non-polynomial basis functions were used in developing accurate spectral methods for certain fractional differential equations (cf. [54, 18]). As an illustration, we plot in Figure 2.1 the right generalised Chebyshev/Legendre functions, i.e., ${}^{r}G_\nu^{(\lambda)}(x)$ with $\lambda = 0, 1/2$. Note that the left counterparts satisfy ${}^{l}G_\nu^{(\lambda)}(x) = (-1)^{[\nu]}\, {}^{r}G_\nu^{(\lambda)}(-x)$ (cf. (2.22b)). Observe that in the Legendre case (right), ${}^{r}G_\nu^{(1/2)}(x)$ with non-integer degree has a logarithmic singularity at $x = -1$ (cf. (2.24)), while the generalised Chebyshev functions (left) are well defined at $x = -1$.
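The identification (2.27a)/(2.29) with Jacobi polynomials can also be probed numerically. The rough check below (not from the paper) uses scipy's eval_jacobi and hyp2f1, together with the normalisation $P_n^{(\lambda_*,-\lambda_*)}(1) = (\lambda_*+1)_n/n!$ from (2.13); all parameter values are arbitrary.

```python
# Numerical check of (2.27a): Jacobi polynomial (normalised at x = 1) vs weighted right GGF-F.
import numpy as np
from scipy.special import hyp2f1, eval_jacobi, gamma

def rG(nu, lam, x):
    # right GGF-F (2.20)
    return hyp2f1(-nu, nu + 2 * lam, lam + 0.5, (1 - x) / 2)

lam = 0.8                 # any lambda with -1 < lambda - 1/2 < 1
lam_s = lam - 0.5         # lambda_*
n = 6
x = np.linspace(-0.9, 0.9, 9)

# P_n^{(lam_s, -lam_s)}(1) = (lam_s + 1)_n / n!, cf. (2.13)
Pn_at_1 = gamma(n + lam_s + 1) / (gamma(lam_s + 1) * gamma(n + 1))

lhs = eval_jacobi(n, lam_s, -lam_s, x) / Pn_at_1
rhs = ((1 + x) / 2) ** lam_s * rG(n - lam_s, lam, x)
print(np.max(np.abs(lhs - rhs)))   # should be small (round-off level)
```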
Figure 2.1. Graphs of ${}^{r}G_\nu^{(\lambda)}(x)$ with $\lambda = 0$ (left) and $\lambda = 1/2$ (right) for various $\nu$ ($\nu = 5, 5.2, 5.5, 5.8, 6$).

2.3. Uniform upper bounds. The bounds stated in the following two theorems are indispensable for the forthcoming error analysis.

Theorem 2.1. For real $\lambda \ge 1$, we have
$$ \max_{|x| \le 1}\big\{ \omega_\lambda(x)\,|{}^{r}G_\nu^{(\lambda)}(x)|,\; \omega_\lambda(x)\,|{}^{l}G_\nu^{(\lambda)}(x)| \big\} \le \kappa_\nu^{(\lambda)}, \quad \nu \ge 0, \qquad (2.31)$$
where $\omega_\lambda(x) = (1-x^2)^{\lambda-1/2}$ and
$$ \kappa_\nu^{(\lambda)} = \frac{\Gamma(\lambda+1/2)}{\sqrt{\pi}} \Big\{ \frac{\cos^2(\pi\nu/2)\,\Gamma^2((\nu+1)/2)}{\Gamma^2((\nu+1)/2+\lambda)} + \frac{4\sin^2(\pi\nu/2)}{2\lambda-1+\nu(\nu+2\lambda)}\, \frac{\Gamma^2(\nu/2+1)}{\Gamma^2(\nu/2+\lambda)} \Big\}^{1/2}. \qquad (2.32)$$
(λ)
Proof. Thanks to (2.22b), it suffices to prove the result for rGν (x). For simplicity, we denote 2 2 −1 (2.33) G(x) := rG(λ) (1 − x2 ) M 0 (x) , ν (x); M (x) := ωλ (x)G(x); H(x) := M (x) + % where the constant % := 2λ − 1 + ν(ν + 2λ). We take three steps to complete the proof. Step 1. Show that H(x) is continuous on [−1, 1], that is, H(±1) are well defined. It is evident that by (2.22b), M (1) = 0; and from (2.25), we find that M (−1) is a finite value, when λ ≥ 1. Next, from (3.13a) with s = 1, we derive (λ−1)
(1 − x2 )1/2 M 0 (x) = (1 − 2λ) (1 − x2 )λ−1 rGν+1 (x).
(2.34)
Similarly, by (2.22b), (1 − x2 )1/2 M 0 (x)|x=1 = 0 for λ > 1, and it’s finite for λ = 1. We now justify (1 − x2 )1/2 M 0 (x)|x=−1 is also well defined. We infer from Proposition 2.2 that (a) if 1 ≤ λ < 3/2, (λ−1) (λ−1) r (λ−1) Gν+1 (x) is finite at x = −1; (b) if λ = 3/2, rGν+1 (−1) = 0; and (c) if λ > 3/2, rGν+1 (x) tends to a finite value as x → −1. Hence, by (2.33), H(±1) are well defined. Step 2. Derive the identity: H 0 (x) = −
2 4(λ − 1)x M 0 (x) , %
x ∈ (−1, 1).
(2.35)
For this purpose, taking a = −ν, b = ν + 2λ, c = λ + 1/2 and z = (1 ± x)/2 in (2.10), we find that G(x) satisfies the Sturm-Liouville problem 0 (2.36) ωλ+1 (x)G0 (x) + ν(ν + 2λ)ωλ (x)G(x) = 0. Substituting G(x) = ωλ−1 (x)M (x) into (2.36), we obtain from a direct calculation that (1 − x2 )M 00 (x) + (2λ − 3)xM 0 (x) + %M (x) = 0.
(2.37)
Differentiating H(x) and using (2.37), leads to 2x 2 2 4(λ − 1)x H 0 (x) = M 0 (x) (1 − x2 )M 00 (x) + %M (x) − (M 0 (x))2 = − M 0 (x) . % % % Step 3. Prove the bounds below and calculate the values at x = 0 : 2 M 2 (x) ≤ H(x) ≤ H(0) = M 2 (0) + %−1 M 0 (0) , ∀ x ∈ [−1, 1].
(2.38)
(2.39)
0
By (2.35), we have H (x) ≡ 0, if λ = 1, so H(x) is a constant and H(x) = H(0). In other words, (2.39) is true for λ = 1. If λ > 1, we deduce from (2.35) that the stationary points of H(x) are x = 0 or zeros of M 0 (x) (if any). Let 0 6= x ˜ ∈ (−1, 1) be any zero of M 0 (x) (note: M (˜ x) 6= 0). Apparently, by (2.35), H 0 (x) does not change sign in the neighbourhood of x ˜, which means x ˜ cannot be an extreme point of H(x). In fact, x = 0 is the only extreme point in (−1, 1). We also see from (2.35) that H 0 (x) ≥ 0 (resp. H 0 (x) ≤ 0) as x → 1− (resp. x → −1+ ). Note that H(x) attains its maximum at x = 0, as H(x) is ascending when x < 0, and is descending when x > 0. Therefore, we obtain (2.39) from (2.33) and the above reasoning. Now, we calculate H(0). From (2.6) and (2.11), we obtain that for λ ≥ 0, √ π Γ(λ + 1/2) 1 1 ; = M (0) =rG(λ) (0) = F − ν, ν + 2λ; λ + 2 1 ν 2 2 Γ(−ν/2 + 1/2)Γ(ν/2 + λ + 1/2) (2.40) Γ(λ + 1/2)Γ(ν/2 + 1/2) = sin π(ν + 1)/2 √ , π Γ(ν/2 + λ + 1/2) which, together with (3.13b), implies (λ−1) (1 − x2 )1/2 M 0 (x) |x=0 = (1 − 2λ)rGν+1 (0) Γ(λ + 1/2)Γ(ν/2 + 1) Γ(λ − 1/2)Γ(ν/2 + 1) √ √ = 2 sin πν/2 . = (1 − 2λ) sin π(ν + 2)/2 π Γ(ν/2 + λ) π Γ(ν/2 + λ) In the last step, we used the identity: Γ(z + 1) = zΓ(z). Substituting (2.40)-(2.41) into (2.39), we obtain the bound in (2.31).
(2.41)
As a direct consequence of Theorem 2.1, we have the following bound for the Gegenbauer polynomials. Corollary 2.1. For real λ ≥ 1 and l ∈ N0 , we have (λ) Γ(λ + 1/2)Γ(l + 1/2) max ωλ (x) G2l (x) ≤ √ ; π Γ(l + λ + 1/2) |x|≤1
(2.42a)
(λ) 2l + 1 Γ(λ + 1/2)Γ(l + 1/2) √ max ωλ (x) G2l+1 (x) ≤ p . π Γ(l + λ + 1/2) |x|≤1 2λ − 1 + (2l + 1)(2l + 2λ + 1)
(2.42b)
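A quick numerical probe of Theorem 2.1 (not from the paper): for a fractional degree $\nu$, the weighted right GGF-F stays below $\kappa_\nu^{(\lambda)}$ of (2.32) on a fine grid. The sketch assumes scipy, and the grid and parameters are arbitrary.

```python
# Probe the uniform bound (2.31)-(2.32) for a fractional degree nu.
import numpy as np
from scipy.special import hyp2f1, gamma

lam, nu = 1.5, 3.7
x = np.linspace(-0.999, 0.999, 4001)
w = (1 - x**2) ** (lam - 0.5)                                    # omega_lambda(x)
rG = hyp2f1(-nu, nu + 2 * lam, lam + 0.5, (1 - x) / 2)           # right GGF-F (2.20)

kappa = (gamma(lam + 0.5) / np.sqrt(np.pi)) * np.sqrt(
    np.cos(np.pi * nu / 2) ** 2 * gamma((nu + 1) / 2) ** 2 / gamma((nu + 1) / 2 + lam) ** 2
    + 4 * np.sin(np.pi * nu / 2) ** 2 / (2 * lam - 1 + nu * (nu + 2 * lam))
      * gamma(nu / 2 + 1) ** 2 / gamma(nu / 2 + lam) ** 2
)
print(np.max(w * np.abs(rG)), "<=", kappa)   # the observed maximum should not exceed kappa
```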
Remark 2.3. Bounds for the Gegenbauer polynomials multiplied by a different weight function, $(1-x^2)^{\lambda/2-1/4}$, can be found in e.g. [34, 30]. The bounds derived above appear to be new. Later on, we shall need a bound for $0 < \lambda < 1$; in this case, we have to multiply the GGF-Fs by a different weight function and conduct the analysis in a slightly different manner.

Theorem 2.2. For real $0 < \lambda < 1$, we have
$$ \max_{|x| \le 1}\big\{ (1-x^2)^{\lambda/2}\,|{}^{r}G_\nu^{(\lambda)}(x)|,\; (1-x^2)^{\lambda/2}\,|{}^{l}G_\nu^{(\lambda)}(x)| \big\} \le \widehat{\kappa}_\nu^{(\lambda)}, $$
ν ≥ 0,
(2.43)
where
$$ \widehat{\kappa}_\nu^{(\lambda)} = \frac{\Gamma(\lambda+1/2)}{\sqrt{\pi}} \Big\{ \frac{\cos^2(\pi\nu/2)\,\Gamma^2(\nu/2+1/2)}{\Gamma^2((\nu+1)/2+\lambda)} + \frac{4\sin^2(\pi\nu/2)}{\nu^2+2\lambda\nu+\lambda}\, \frac{\Gamma^2(\nu/2+1)}{\Gamma^2(\nu/2+\lambda)} \Big\}^{1/2}. $$
(2.44)
(λ)
Proof. Thanks to (2.22b), it suffices to prove the result for rGν (x). Similar to the proof of Theorem 2.1, we denote b c2 (x) + 1 M c0 (x) 2 , H(x) := M ρ(x) ρ(x) := (ν + λ)2 (1 − x2 ) − λ(λ − 1) (1 − x2 )−2 .
c(x) := (1 − x2 )λ/2rG(λ) (x); M ν
(2.45)
b Using Proposition 2.2, we can justify as Step 1 in the proof of Theorem 2.1 that H(x) is continuous on [−1, 1]. A direct calculation from (2.36) leads to c00 (x) − xM c0 (x) + (1 − x2 )ρ(x)M c(x) = 0, (1 − x2 )M
x ∈ (−1, 1).
(2.46)
Similar to (2.35), we obtain b 0 (x) = H
2λ(λ − 1)x c0 (x) 2 , M (λ + ν)2 (1 − x2 )2 − λ(λ − 1)(1 − x2 )
x ∈ (−1, 1).
(2.47)
b For 0 < λ < 1, H(x) is increasing for x < 0, and decreasing for x > 0, so H(x) attains its maximum at x = 0. Thus, c0 (0) 2 , ∀ x ∈ [−1, 1]. c2 (x) ≤ H(x) b b c2 (0) + ρ−1 (0) M (2.48) M ≤ H(0) =M By (2.40), + 1/2)Γ(ν/2 + 1/2) c(0) = cos πν/2 Γ(λ √ M . π Γ(ν/2 + λ + 1/2)
(2.49)
Recall the identity (cf. [37, (15.5.1)]): d ab 2 F1 (a, b; c; z) = 2 F1 (a + 1, b + 1; c + 1; z). dx c
(2.50)
From (2.20) and (2.50), we obtain d r (λ) d 1 1 − x Gν (x) = 2 F1 − ν, ν + 2λ; λ + ; dx dx 2 2 ν(ν + 2λ) 3 1 − x = . 2 F1 − ν + 1, ν + 2λ + 1; λ + ; 2λ + 1 2 2
(2.51)
If ν > 1/2, using (2.51), we have ν(ν + 2λ) r (λ+1) d r (λ) G (x) = Gν−1 (x). dx ν 2λ + 1
(2.52)
Thanks to 2 λ/2 d r (λ) c0 (x) = −λx(1 − x2 )λ/2−1 rG(λ) M G (x), ν (x) + (1 − x ) dx ν we deduce from (2.6), (2.11) and (2.51) that
Γ(ν/2 + 1) Γ(λ + 1/2) c0 (x)}|x=0 = 2 sin(πν/2) √ √ {ρ−1/2 (x)M . π ν 2 + 2λν + λ Γ(λ + ν/2) From (2.48)-(2.49) and (2.53), we derive (2.43).
(2.53)
3. Fractional integral/derivative formulas of GGF-Fs In this section, we present the Riemann-Liouville (RL) fractional integral/derivative formulas of GGF-Fs, which are essential for deriving the main result in Theorem 4.1. Prior to that, we recall the definitions of RL fractional integrals/derivatives, and introduce the related spaces of functions.
3.1. Fractional integrals/derivatives and related spaces of functions. Let Ω = (a, b) ⊂ R be a finite open interval. For real p ∈ [1, ∞], let Lp (Ω) (resp. W m,p (Ω) with m ∈ N) be the usual p-Lebesgue space (resp. Sobolev space), equipped with the norm k · kLp (Ω) (resp. k · kW m,p (Ω) ) as ¯ be the classical space of continuous functions on [a, b]. Denote by AC(Ω) in Adams [2]. Let C(Ω) the space of absolutely continuous functions on [a, b]. Recall that (cf. [39, 31]): f ∈ AC(Ω) if and only if f ∈ L1 (Ω), f has a derivative f 0 (x) almost everywhere on [a, b] and in L1 (Ω), and f has the integral representation: Z x
f 0 (t) dt,
f (x) = f (a) +
∀x ∈ [a, b].
(3.1)
a
Note that we have AC(Ω) = W 1,1 (Ω) (cf. [31, Sec. 7.2]). Let BV(Ω) be the space of functions of bounded variation on Ω. It is known that every function of bounded variation has at most a countable number of discontinuities, which are either jump or removable discontinuities (see, e.g., [50, Thm 2.8]). Therefore, the functions in BV(Ω) are differentiable almost everywhere. In summary, we have ¯ ( W 1,1 (Ω) ≡ AC(Ω) ( BV(Ω) ( L1 (Ω). C(Ω)
(3.2)
In what follows, we denote the ordinary derivatives by D = d/dx and Dk = dk /dxk with integer k ≥ 2. Recall the definitions of the RL fractional integrals and derivatives (cf. [39, P. 33, P. 44]). Definition 3.1. For any u ∈ L1 (Ω), the left-sided and right-sided RL fractional integrals of order s ∈ R+ are defined by Z x Z b u(y) 1 u(y) 1 s s (3.3) dy; x Ib u(x) = dy, x ∈ Ω. a Ix u(x) = Γ(s) a (x − y)1−s Γ(s) x (y − x)1−s A function u ∈ L1 (Ω) is said to possess a left-sided (resp. right-sided ) RL fractional derivative 1−s R s R s u ∈ AC(Ω) (resp. x Ib1−s u ∈ AC(Ω)). Moreover, we a Dx u (resp. x Db u) of order s ∈ (0, 1), if a Ix have 1−s R s R s u , x Db u = −D x Ib1−s u , x ∈ Ω. (3.4) a Dx u = D a Ix Similarly, for s ∈ [k − 1, k) with k ∈ N, the higher order left-sided and right-sided RL fractional derivatives for u ∈ L1 (Ω) satisfying a Ix1−s u, x Ib1−s u ∈ ACk (Ω) (i.e., the space of all f (x) having continuous derivatives up to order k − 1 on Ω and f (k−1) ∈ AC(Ω)) are defined by k−s R s R s k u ; x Db u(x) = (−1)k Dk x Ibk−s u . (3.5) a Ix a Dx u = D As a generalisation of (3.1), we have the following fractional integral representation, which can also be regarded as a definition of RL fractional derivatives alternative to Definition 3.1 (see [12, Prop. 3] and [39, P. 45]). Proposition 3.1. A function u ∈ L1 (Ω) possesses a left-sided RL fractional derivative order s ∈ (0, 1), if and only if there exist Ca ∈ R and φ ∈ L1 (Ω) such that u(x) =
Ca (x − a)s−1 + a Ixs φ(x) a.e. on [a, b], Γ(s)
s where Ca = (a Ix1−s u)(a) and φ(x) = R a Dx u(x) a.e. on [a, b]. 1 Similarly, a function u ∈ L (Ω) has a right-sided RL fractional derivative (0, 1), if and only if there exist Cb ∈ R and ψ ∈ L1 (Ω) such that
u(x) =
R s a Dx u
of
(3.6)
R s x Db u
of order s ∈
Cb (b − x)s−1 + x Ibs ψ(x) a.e. on [a, b], Γ(s)
(3.7)
s where Cb = (x Ib1−s u)(b) and ψ(x) = R x Db u(x) a.e. on [a, b].
Remark 3.1. We infer from Proposition 3.1 the equivalence of the two fractional spaces: s,1 s 1 WRL,a+ (Ω) := u ∈ L1 (Ω) : a Ix1−s u ∈ AC(Ω) ≡ u ∈ L1 (Ω) : R a Dx u ∈ L (Ω) ,
(3.8)
for s ∈ (0, 1). The inclusion “ ⊆ ” follows immediately from u ∈ L1 (Ω), a Ix1−s u ∈ AC(Ω) and Definition 3.1. To show the opposite inclusion “ ⊇ ”, we find Z b Z b Z x Z bZ x 1 1 |a Ix1−s u|dx = (x − y)−s u(y)dy dx ≤ (x − y)−s |u(y)|dydx Γ(1 − s) a Γ(1 − s) a a a a Z bZ b Z b 1 1 = (x − y)−s dx |u(y)|dy = (b − y)1−s |u(y)|dy Γ(1 − s) a Γ(2 − s) y a Z (b − a)1−s b ≤ |u(y)|dy, Γ(2 − s) a s 1−s Since u ∈ L1 (Ω), we conclude a Ix1−s u ∈ L1 (Ω). As R u} ∈ L1 (Ω), we infer that a Dx u = D{a Ix 1−s 1,1 u ∈ W (Ω)(≡ AC(Ω)). Therefore, the equivalence in (3.8) follows. a Ix s,1 s 1−s s The same property for WRL,b− (Ω) with x Ib1−s u and R u and R x Db u in place of a Ix a Dx u, respectively, holds.
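Before turning to the explicit formulas of Remark 3.2 below, note that the RL fractional integral (3.3) is easy to approximate by weighted quadrature. The sketch below (not from the paper; it assumes scipy's quad with an algebraic endpoint weight) checks the quadrature against the closed form for $(x-a)^\eta$ recalled in (3.9).

```python
# Left-sided RL fractional integral (3.3) of (x - a)^eta, compared with
# Gamma(eta + 1) / Gamma(eta + s + 1) * (x - a)^(eta + s), cf. (3.9).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def rl_left_integral(u, a, x, s):
    """(aI_x^s u)(x) = 1/Gamma(s) * int_a^x (x - y)^(s-1) u(y) dy."""
    # weight='alg' with wvar=(0, s-1) integrates (y - a)^0 * (x - y)^(s-1) * u(y) on [a, x]
    val, _ = quad(u, a, x, weight='alg', wvar=(0.0, s - 1.0))
    return val / gamma(s)

a, s, eta = -1.0, 0.4, 0.7
u = lambda y: (y - a) ** eta
for x in [-0.5, 0.0, 0.8]:
    exact = gamma(eta + 1) / gamma(eta + s + 1) * (x - a) ** (eta + s)
    print(x, rl_left_integral(u, a, x, s), exact)
```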
Remark 3.2. Recall the explicit formulas (cf. [38, 19]): for real η > −1 and s > 0, s a Ix
(x − a)η =
Γ(η + 1) (x − a)η+s ; Γ(η + s + 1)
R s a Dx
(x − a)η =
Γ(η + 1) (x − a)η−s . Γ(η − s + 1)
(3.9)
Similar formulas are available for right-sided RL fractional integral/derivative of (b − x)η . Observe that for s ∈ (0, 1), 1−s a Ix
(x − a)s−1 = Γ(s);
1−s x Ib
(b − x)s−1 = Γ(s),
(3.10)
which implies the boundary values Ca , Cb in Proposition 3.1 are not always zero as x → a /b− . On the other hand, if η − s + 1 = −n with n ∈ N0 in the second formula of (3.9) (note: Γ(−n) = ∞), then we have +
R s a Dx
s s−n−1 (x − a)s−n−1 = R = 0, x Db (b − x)
for s > n ∈ N0 .
(3.11)
In view of this, the first term in the integral representations of u in (3.6)-(3.7) actually plays the same role as a “constant” in (3.1). 3.2. Important formulas. Theorem 3.1. For real ν ≥ s > 0 and real λ > −1/2, the GGF-Fs satisfy the RL fractional integral formulas: for x ∈ (−1, 1), (−s) (λ+s) s r (λ) (3.12a) ωλ+s (x) rGν−s (x), x I1 ωλ (x) Gν (x) =hλ s −1 Ix
(λ+s) [ν]+[ν−s] (−s) ωλ (x) l G(λ) hλ ωλ+s (x) l Gν−s (x). ν (x) = (−1)
(3.12b)
For real λ > s − 1/2 and real ν ≥ 0, the GGF-Fs satisfy the RL fractional derivative formulas: for x ∈ (−1, 1), (s) R s r (λ) r (λ−s) (3.13a) x D1 ωλ (x) Gν (x) =hλ ωλ−s (x) Gν+s (x), R s −1 Dx
(λ−s) [ν]+[ν+s] (s) ωλ (x) l G(λ) hλ ωλ−s (x) l Gν+s (x). ν (x) = (−1)
(3.13b)
Here, 1
ωα (x) = (1 − x2 )α− 2 ,
(β)
hλ =
2β Γ(λ + 1/2) . Γ(λ − β + 1/2)
(3.14)
Proof. Recall the Bateman’s fractional integral formula (cf. [5, P. 313]): for c, s > 0 and |z| < 1, Z z 1−(c+s) Γ(c + s) F (a, b; c + s; z) = z tc−1 (z − t)s−1 2 F1 (a, b; c; t) dt, (3.15) 2 1 Γ(c)Γ(s) 0
which, together with (2.9), yields z c+s−1 (1 − z)c+s−a−b 2 F1 (c − a + s, c − b + s; c + s; z) Z Γ(c + s) z c−1 = t (z − t)s−1 (1 − t)c−a−b 2 F1 (c − a, c − b; c; t) dt. Γ(c)Γ(s) 0 Applying the variable substitutions: z = (1 − x)/2 and t = (1 − y)/2 to (3.16), leads to 1 − x (1 − x)c+s−1 (1 + x)c+s−a−b 2 F1 c − a + s, c − b + s; c + s; 2 Z 2s Γ(c + s) 1 1 − y c−1 c−a−b s−1 = (1 − y) (1 + y) (y − x) 2 F1 c − a, c − b; c; dy. Γ(c)Γ(s) x 2
(3.16)
(3.17)
Taking a = ν + λ + 1/2, b = −ν − λ + 1/2 and c = λ + 1/2 in (3.17), we obtain 1 1 − x (1 − x2 )λ+s+1/2 2 F1 s − ν, ν + s + 2λ; λ + s + ; 2 2 Z 1 s 2 Γ(λ + s + 1/2) 1 1 − y = (1 − y 2 )λ−1/2 (y − x)s−1 2 F1 − ν, ν + 2λ; λ + ; dy. Γ(λ + 1/2)Γ(s) x 2 2 From (3.3) and (2.20), we derive (3.12a) immediately. Similarly, performing the variable substitutions: z = (1 + x)/2 and t = (1 + y)/2 to (3.16), we can obtain (3.12b) in the same manner. s s R s Applying R x D1 to both sides of (3.12a) and noting that x D1 x I1 is an identity operator (cf. [19, Thm 2.14]), we obtain for real ν ≥ s > 0 and real λ > −1/2, (−s) R s r (λ+s) (3.18) ωλ (x) rG(λ) ν (x) =hλ x D1 ωλ+s (x) Gν−s (x) . Replacing λ, ν in the above equation by λ − s, ν + s, and noting that (−s) −1
hλ−s we obtain (3.13a). Similarly, applying
R s −1 Dx
=
2s Γ(λ + 1/2) (s) = hλ , Γ(λ − s + 1/2)
to both sides of (3.12b), we can derive (3.13b).
(3.19)
Remark 3.3. The formulas in Theorem 3.1 link up GGF-Fs with different orders and parameters, which can be very useful for developing efficient spectral algorithms for fractional differential equations. Indeed, corresponding to the special cases in Proposition 2.3 and Remark 2.2, we can (λ ,−λ ) derive from (2.27a) and (3.12a) that the fractional operator x I1s takes {(1 − x)λ∗ Pn ∗ ∗ (x)} to (λ +s,s−λ∗ ) {(1 − x)λ∗ +s Pn ∗ (x)}. Indeed, such types of explicit formulas played an essential role in the construction of spectral methods in e.g., [54, 18]. 4. Estimates on Chebyshev expansions in fractional Sobolev-type spaces In this section, we introduce the theoretical framework and present the main results. Here, we focus on the approximation of functions with interior singularities, and shall extend the estimates to deal with functions with endpoint singularities in Section 6. 4.1. Fractional Sobolev-type spaces. For a fixed θ ∈ Ω := (−1, 1), we denote Ω− θ := (−1, θ) and Ω+ := (θ, 1). For m ∈ N and s ∈ (0, 1), we define the fractional Sobolev-type space: 0 θ Wθm+s (Ω) := u ∈ L1 (Ω) : u, u0 , · · · , u(m−1) ∈ AC(Ω) and (4.1) 1−s (m) 1−s (m) u ∈ BV(Ω− u ∈ BV(Ω+ x Iθ θ Ix θ ), θ ) ,
equipped with the norm (note: AC(Ω) = W 1,1 (Ω)): kukWm+s (Ω) =
m X
θ
ku(k) kL1 (Ω) + Uθm,s ,
(4.2)
k=0
where the semi-norm is defined by • for m = 1, 2, · · · , Z θ Z 1 R s (m) s (m) x Dθ u (x) dx + Uθm,s := |R (x)|dx θ Dx u −1 θ + x Iθ1−s u(m) (θ+) + θ Ix1−s u(m) (θ−) ;
(4.3)
• for m = 0 and s ∈ (1/2, 1), Z θ Z 1 R s 0,s s Uθ := |R θ Dx u(x)| ωs/2 (x)dx x Dθ u(x) ωs/2 (x)dx + −1 θ + ωs/2 x I 1−s u (θ+) + ωs/2 θ Ix1−s u (θ−) .
(4.4)
θ
s (m) R s (m) We remark that (i) from Proposition 3.1 and Remark 3.1, R , x Dθ u are well-defined and θ Dx u 1 belong to L (Ω); and (ii) the parameter θ is related to the location of the singular point of u(x) (see Section 4.4). It is noteworthy that Wm+s (Ω) with s ∈ (0, 1) and θ ∈ [−1, 1], is an intermediate θ fractional space:
W 1,1 (Ω) ( Wsθ (Ω) ( L1 (Ω);
Wm+1 (Ω) ( Wm+s (Ω) ( Wm (Ω), m ≥ 1. θ
(4.5)
−
When s → 1 , the fractional integrals/derivatives are identical to the ordinary ones, so the space Wm+s (Ω) reduces to θ (4.6) Wm+1 (Ω) := u ∈ L1 (Ω) : u0 , · · · , u(m−1) ∈ AC(Ω), u(m) ∈ BV(Ω) . When θ → ±1, we denote the resulted fractional spaces by m+s (Ω) := u ∈ L1 (Ω) : u, u0 , · · · , u(m−1) ∈ AC(Ω), x I11−s u(m) ∈ BV(Ω) , W+
(4.7)
m+s (Ω) is defined similarly with −1 Ix1−s u(m) in place of x I11−s u(m) . Accordingly, the semiand W− m,s m,s norm U+ (resp. U− ) only involves the right (resp. left) RL fractional integrals/derivatives.
4.2. Identity and decay rate of Chebyshev expansion coefficients. Let ω(x) = (1−x2 )−1/2 = ω0 (x) be the Chebyshev weight function. For any u ∈ L2ω (Ω), we expand it in Chebyshev series and denote the partial sum (for N ∈ N) by u(x) =
∞ X
0
n=0
u ˆC n Tn (x),
C πN u(x) =
N X
0
u ˆC n Tn (x),
(4.8)
n=0
where the prime denotes a sum whose first term is halved, and Z Z 2 1 Tn (x) 2 π C u ˆn = u(x) √ u(cos θ) cos(nθ)dθ. dx = π −1 π 0 1 − x2
(4.9)
Recall the formula of integration by parts involving the Stieltjes integrals (cf. [29, (1.20)]). Lemma 4.1. For any u, v ∈ BV(Ω), we have Z b Z b− u(x−) dv(x) = {u(x)v(x)} a+ − a
b
v(x−) du(x),
(4.10)
a
where the notation u(x±) stands for the right- and left-limit of u at x, respectively. Here, u(x−), v(x−) can also be replaced by u(x+), v(x+).
In particular, if u, v ∈ AC(Ω), we have Z Z b u(x)v 0 (x) dx +
b
a
a
b u0 (x)v(x) dx = {u(x)v(x)} a .
(4.11)
As highlighted in [32], the error analysis of Chebyshev orthogonal projections, and the related interpolation and quadrature errors essentially depends on estimating the decay rate of |ˆ uC n |. We present the main result on this. Theorem 4.1. Given θ ∈ (−1, 1), if u ∈ Wm+s (Ω) with s ∈ (0, 1) and integer m ≥ 0, then for θ n ≥ m + s > 1/2, nZ θ 1 (m+s) R s (m) √ u ˆC = − (x) l Gn−m−s (x) ωm+s (x) dx n x Dθ u π 2m+s−1 Γ(m + s + 1/2) −1 (m+s) + x Iθ1−s u(m) (x) l Gn−m−s (x) ωm+s (x) x=θ− (4.12) Z 1 R s (m) r (m+s) (x) ω (x) dx − D u (x) G m+s θ x n−m−s θ o 1−s (m) (m+s) − θ Ix u (x) rGn−m−s (x) ωm+s (x) x=θ+ , where ωλ (x) = (1 − x2 )λ−1/2 . Then for each n ≥ m + s, we have the following upper bounds. (i) If m = 0 and s ∈ (1/2, 1), then we have Uθ0,s Γ((n − s + 1)/2) 2 Γ((n − s)/2 + 1) C (4.13) |ˆ un | ≤ s−1 max ,√ . 2 π Γ((n + s + 1)/2) n2 − s2 + s Γ((n + s)/2) (ii) If m ≥ 1, then we have |ˆ uC n| ≤
Uθm,s
Γ((n − m − s + 1)/2) . 2m+s−1 π Γ((n + m + s + 1)/2)
Proof. Substituting n → n − k, λ → k in (2.19), leads to 0 1 (k+1) (k) ωk+1 (x)Gn−k−1 (x) , ωk (x)Gn−k (x) = − 2k + 1
(4.14)
n ≥ k + 1.
(4.15)
For u, u0 , · · · , u(m−1) ∈ AC(Ω), using (4.15) with k = 0, 1, · · · , m − 1, and the integration by parts in Lemma 4.1, we obtain that for n ≥ m, Z Z 0 (1) 2 1 2 1 (0) u ˆC = u(x)G (x)ω (x) dx = − u(x) Gn−1 (x)ω1 (x) dx 0 n n π −1 π −1 Z 1 Z 1 (2) 0 2 2 (1) = u0 (x)Gn−1 (x)ω1 (x)dx = − u0 (x) Gn−2 (x)ω2 (x) dx π −1 3π −1 (4.16) Z 1 Z 0 12 1 2 1 00 (3) (2) 00 = u (x)Gn−2 (x)ω2 (x)dx = − u (x) Gn−3 (x)ω3 (x) dx 3 π −1 3 · 5 π −1 Z 1 2 1 (m) (m) = ··· = u (x) Gn−m (x)ωm (x) dx. (2m − 1)!! π −1 Using the identity (cf. [1]): √ Γ(k + 1/2) =
π (2k − 1)!! , 2k
k ∈ N0 ,
(4.17)
(m)
(4.18)
we can rewrite the expansion coefficient as u ˆC n = √
1 m−1 π2 Γ(m + 1/2)
Z
1
−1
u(m) (x) Gn−m (x)ωm (x) dx.
To proceed with the proof, it is necessary to use the identities: for m + s > 1/2, and n ≥ m + s, 0 Γ(m + 1/2) (m+s) 1−s ωm+s (x) rGn−m−s (x) x I1 Γ(m + s + 1/2) 0 Γ(m + 1/2) (m+s) 1−s ωm+s (x) l Gn−m−s (x) . =− s −1 Ix 2 Γ(m + s + 1/2)
(m)
ωm (x)Gn−m (x) = −
2s
(4.19)
To derive them, we substitute s, λ, ν in (3.12a)-(3.12b) by 1−s, m−s, n−m+s, respectively, leading to 21−s Γ(m + 1/2) 1−s (m+s−1) ωm+s−1 (x) rGn−m−s+1 (x) x I1 Γ(m + s − 1/2) 21−s Γ(m + 1/2) (m+s−1) 1−s ωm+s−1 (x) l Gn−m−s+1 (x) . = −1 Ix Γ(m + s − 1/2)
(m)
ωm (x)Gn−m (x) =
(4.20)
On the other hand, taking s = 1, λ = m + s and ν = n − m − s in (3.13a)-(3.13b), we obtain that for m + s > 1/2, 0 Γ(m + s − 1/2) (m+s) ωm+s (x) rGn−m−s (x) , 2 Γ(m + s + 1/2) 0 Γ(m + s − 1/2) (m+s−1) (m+s) ωm+s−1 (x) l Gn−m−s−1 (x) = − ωm+s (x) l Gn−m−s (x) . 2 Γ(m + s + 1/2) (m+s−1)
ωm+s−1 (x) rGn−m−s+1 (x) = −
(4.21)
Substituting (4.21) into (4.20) leads to (4.19). For notational convenience, we denote f (x) = u(m) (x),
(m+s)
(m+s)
h(x) = ωm+s (x) rGn−m−s (x).
g(x) = ωm+s (x) l Gn−m−s (x),
(4.22)
By (4.19), we can rewrite (4.18) as u ˆC n = √
1 π 2m−1 Γ(m + 1/2)
Z
θ
(m)
u(m) Gn−m ωm dx +
−1
1 = √ m+s−1 π2 Γ(m + s + 1/2)
Z
1
(m) u(m) Gn−m ωm dx
θ
Z
θ
f (x) −1 Ix1−s g 0 (x) dx
−1
Z +
1
f (x) x I11−s h0 (x) dx
(4.23)
.
θ
We find from (2.22b) and (4.21), g 0 (x) (resp. h0 (x)) is continuous on (−1, θ] (resp. [θ, 1)), and they are also integrable when m + s > 1/2. Thus, for f ∈ L1 (Ω), changing the order of integration by the Fubini’s Theorem, we derive from (3.3) that Z
Z θ Z x g 0 (y) 1 = dy f (x) dx s Γ(1 − s) −1 −1 (x − y) Z θ Z θ f (x) 1 f (y) 0 dx g (y) dy = dy g 0 (y) dx s (x − y)s Γ(1 − s) −1 x (y − x)
θ
f (x)−1 Ix1−s g 0 (x) dx
−1
Z θ Z θ 1 Γ(1 − s) −1 y Z θ g 0 (x) x Iθ1−s f (x) dx. =
=
(4.24)
−1
Similarly, we can show that Z θ
1
f (x) x I11−s h0 (x) dx =
Z θ
1
h0 (x) θ Ix1−s f (x) dx.
(4.25)
1−s Thus, if x Iθ1−s f (x) ∈ BV(Ω− f (x) ∈ BV(Ω+ θ ) and θ Ix θ ), we use integration by parts in Lemma 4.1, and derive Z θ Z θ f (x) −1 Ix1−s g 0 (x) dx = g 0 (x) x Iθ1−s f (x) dx −1
−1
θ− = g(x) x Iθ1−s f (x) −1+ −
Z
= g(x) x Iθ1−s f (x) x=θ− +
θ
g(x) (x Iθ1−s f (x))0 dx
−1 Z θ
(4.26)
s g(x) R x Dθ f (x) dx,
−1
where we used the fact g(−1) = 0 for m + s > 1/2 due to (2.22b), and also used (3.5). Similarly, we can show that for m + s > 1/2, Z 1 Z 1 1−s 0 s 1−s h(x) R f (x) x I1 h (x) dx = − h(x) θ Ix f (x) x=θ+ − θ Dx f (x) dx.
(4.27)
θ
θ
Substituting (4.22) and (4.26)-(4.27) into (4.23), we obtain (4.12). We next show the bounds in (4.13)-(4.14). (i) For m = 0 and s ∈ (1/2, 1), we take λ = s and ν = n − s in Theorem 2.2, and then obtain from Theorem 4.1 and the bound (4.13) directly. (ii) We now turn to the case with m ≥ 1. We first prove the inequality: Γ(ν/2 + 1) Γ((ν + 1)/2) ≤ , Γ((ν + 1)/2 + λ) 2λ − 1 + ν(ν + 2λ) Γ(ν/2 + λ) 2
p
ν ≥ 0, λ ≥ 1.
(4.28)
To prove (4.28), we resort to [15, Corollary 2], where it was shown that 1 Γ(z + 1) f (z) := √ , z Γ(z + 1/2)
z > 0,
is decreasing. Then using the facts: (ν − 1)/2 + λ > 0, (ν − 1)/2 + λ > (ν + 1)/2, we can derive 1
Γ((ν + 1)/2 + λ) Γ((ν + 3)/2) 1 p ≤p = (ν − 1)/2 + λ Γ(ν/2 + λ) ν/2 + 1/2 Γ(ν/2 + 1)
r
ν + 1 Γ((ν + 1)/2) , 2 Γ(ν/2 + 1)
(4.29)
where in the last step, we used the identity: Γ(z + 1) = zΓ(z). Next, we rewrite (4.29) as Γ((ν + 1)/2) Γ(ν/2 + 1) ≤ . Γ((ν + 1)/2 + λ) (ν + 2λ − 1)(ν + 1) Γ(ν/2 + λ) 2
p
(4.30)
Noting that 2 p
2λ − 1 + ν(ν + 2λ)
=p
2 (ν + 2λ − 1)(ν + 1)
,
so we can obtain (4.28) from (4.30) immediately. Thanks to (4.28), we can derive from Theorem 2.1 that l (λ) Γ(λ + 1/2) Γ((ν + 1)/2) ≤ √ max ωλ (x) rG(λ) , ν (x) , ωλ (x) Gν (x) Γ((ν + 1)/2 + λ) π |x|≤1
(4.31)
so the bound in (4.14) follows from Theorem 4.1 with λ = m + s and ν = n − m − s in (4.31).
4.3. L∞ - and L2 -estimates of Chebyshev expansions. With Theorem 4.1 at our disposal, we can analysis all related orthogonal projections, interpolations and quadratures (cf. [32]). Here, we just estimate the Chebyshev expansion errors in L∞ -norm and L2ω -norm. Theorem 4.2. Given θ ∈ (−1, 1), if u ∈ Wm+s (Ω) with s ∈ (0, 1) and integer m ≥ 0, we have the θ following estimates. (i) For 1 ≤ m + s ≤ N + 1, C ku − πN ukL∞ (Ω) ≤
Uθm,s Γ((N − m − s)/2 + 1) . 2m+s−2 (m + s − 1)π Γ((N + m + s)/2)
(4.32)
(ii) For 1/2 < m + s < N + 1, ku −
C πN ukL2ω (Ω)
≤
23 Γ(N − m − s + 1) (2m + 2s − 1)π Γ(N + m + s)
1/2
Uθm,s .
(4.33)
σ := m + s.
(4.34)
Proof. We first prove (4.32). For simplicity, we denote Snσ :=
Γ((n − σ + 1)/2) , Γ((n + σ + 1)/2)
Tnσ :=
Γ((n − σ + 1)/2) , Γ((n + σ − 1)/2)
A direct calculation leads to the identity: n + σ − 1 Γ((n − σ + 1)/2) n − σ + 1 Γ((n − σ + 1)/2) − 2 Γ((n + σ + 1)/2) 2 Γ((n + σ + 1)/2) Γ((n − σ + 1)/2) =(σ − 1) = (σ − 1)Snσ , Γ((n + σ + 1)/2)
σ Tnσ − Tn+2 =
(4.35)
where we used the identity zΓ(z) = Γ(z + 1). By (4.14), ∞ ∞ ∞ X X σ Uθm,s X Uθm,s C σ σ u(x) − πN u(x) ≤ |ˆ uC | ≤ Tn − Tn+2 S = n n σ−1 σ−1 2 π 2 (σ − 1)π n=N +1 n=N +1 n=N +1 ∞ ∞ m,s X X Uθ σ σ σ σ = σ−1 T + TN +2 + Tn − Tn+2 2 (σ − 1)π N +1 n=N +3
=
(4.36)
n=N +1
σ Uθm,s TN +1 + TNσ+2 . σ−1 2 (σ − 1)π
We find from [3, (1.1) and Theorem 10] that for 0 ≤ a ≤ b, the ratio Rba (z) :=
Γ(z + a) , Γ(z + b)
z ≥ 0,
(4.37)
is decreasing with respect to z. As σ − 1 ≥ 0, we have 0 0 TNσ+2 = Rσ−1 (1 + (N − σ + 1)/2) ≤ Rσ−1 (1 + (N − σ)/2) = TNσ+1 .
(4.38)
Therefore, the estimate (4.32) follows from (4.36) and (4.38). We now turn to the estimate (4.33) with 1 ≤ m + s ≤ N + 1. Similar to (4.38), we can use (4.37) σ to show that Snσ ≤ Sn−1 . Thus, using the identity Γ(2z) = π −1/2 22z−1 Γ(z)Γ(z + 1/2),
(4.39)
Γ((n − σ + 1)/2) Γ((n − σ)/2) Γ(n − σ) σ (Snσ )2 ≤ Snσ Sn−1 = = 22σ Γ((n + σ + 1)/2) Γ((n + σ)/2) Γ(n + σ) 2σ 2 Γ(n − σ) Γ(n + 1 − σ) = − . 2σ − 1 Γ(n − 1 + σ) Γ(n + σ)
(4.40)
we derive
Then, for σ ≥ 1,
π C 2
u − πN u L2 (Ω) = ω 2
∞ X C 2 (Uθm,s )2 u ˆn ≤ 2σ−3 2 π
n=N +1
∞ X
(Snσ )2 ≤
n=N +1
23 (Uθm,s )2 Γ(N − σ + 1) . (2σ − 1)π Γ(N + σ)
(4.41)
Finally, we prove (4.33) with m = 0 and s ∈ (1/2, 1) by using (4.13). Note that (4.41) is valid for m = 0, so we have Γ2 ((n − s)/2 + 1) 22s Γ(n − s) Γ(n + 1 − s) s 2 (4.42) (Sn ) = ≤ − . Γ2 ((n + s)/2) 2s − 1 Γ(n − 1 + s) Γ(n + s) For the second factor in the upper bound (4.13), we also use (4.37) and (4.39)-(4.40) to show n2
Γ2 ((n − s)/2 + 1) 4 Γ((n − s)/2 + 1) Γ((n − s)/2 + 1/2) 4 ≤ 2 2 − s + s Γ2 ((n + s)/2) n − s2 + s Γ((n + s)/2) Γ((n + s)/2 − 1/2) 22s Γ(n − s + 1) n2 − s2 + s − n Γ(n − s) = 2 = 22s 2 n − s + s Γ(n + s − 1) n2 − s2 + s Γ(n + s) 22s Γ(n − s) Γ(n + 1 − s) Γ(n − s) ≤ 22s = − . Γ(n + s) 2s − 1 Γ(n − 1 + s) Γ(n + s)
(4.43)
Thus, from (4.13), we obtain 2 |ˆ uC n|
4(Uθ0,s )2 ≤ (2s − 1)π 2
Γ(n + 1 − s) Γ(n − s) − . Γ(n − 1 + s) Γ(n + s)
(4.44)
With this, we derive
π C 2
u − πN u L2 (Ω) = ω 2
∞ X C 2 23 (Uθ0,s )2 Γ(N − s + 1) u ˆn ≤ . (2s − 1)π Γ(N + s)
(4.45)
n=N +1
This completes the proof.
Remark 4.1. Using the property of Gamma function (see [1, (6.1.38)]): √ θ , ∀ x > 0, 0 < θ < 1, Γ(x + 1) = 2π xx+1/2 exp − x + 12x
(4.46)
we find that under the conditions of the above theorems and for fixed m and large n or N, −(m+s) m,s |ˆ uC Uθ ; n | ≤ Cn C ku − πN ukL∞ (Ω) ≤ CN 1−(m+s) Uθm,s ;
ku −
C πN ukL2ω (Ω)
≤ CN
1 2 −(m+s)
(4.47)
Uθm,s ,
where C is a positive constant independent of n, N and u.
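The rates in (4.47) are easy to observe numerically. The following sketch (not from the paper; it truncates the Chebyshev expansion computed by quadrature, with arbitrary parameters) illustrates the $L^\infty$ rate $N^{1-(m+s)}$ for $u(x) = |x-\theta|^\alpha$, for which $m + s = \alpha + 1$.

```python
# Observe the L^inf convergence rate of the truncated Chebyshev expansion
# for u(x) = |x - theta|^alpha (expected rate N^{1-(alpha+1)} = N^{-alpha}).
import numpy as np
from scipy.integrate import quad
from numpy.polynomial import chebyshev as C

theta, alpha = 0.3, 1.5
u = lambda x: np.abs(x - theta) ** alpha
t0 = np.arccos(theta)

def coeff(n):
    val, _ = quad(lambda t: u(np.cos(t)) * np.cos(n * t), 0, np.pi, points=[t0], limit=400)
    return 2 / np.pi * val

xx = np.linspace(-1, 1, 2001)
for N in [16, 32, 64]:
    c = np.array([coeff(n) for n in range(N + 1)])
    c[0] *= 0.5                               # the primed sum in (4.8) halves the first term
    err = np.max(np.abs(u(xx) - C.chebval(xx, c)))
    print(N, err, N ** (-alpha))              # err should track N^{-alpha}
```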
4.4. Illustrative examples. In what follows, we apply the previous results to two typical types of singular functions, and illustrate that the expected convergence rate can be achieved. • Type-I: Consider u(x) = |x − θ|α ,
α > −1/2,
θ ∈ (−1, 1),
(4.48)
where α is not an even integer. • Type-II: Consider u(x) = |x − θ|α ln |x − θ|,
α > −1/2,
θ ∈ (−1, 1).
(4.49)
4.4.1. Type-I: u(x) = |x − θ|α . Proposition 4.1. Given the function in (4.48), we have that (i) if α is an odd integer, then u ∈ Wα+1 (Ω) (cf. (4.6)); and (ii) if α is not an integer, then u ∈ Wα+1 (Ω) (cf. (4.1)). Then its θ Chebyshev expansion coefficients can be expressed by Γ(α + 1) [α] l (α+1) √ rG(α+1) Gn−α−1 (θ) ωα+1 (θ), u ˆC (4.50) n = α n−α−1 (θ) − (−1) 2 Γ(α + 3/2) π for all n ≥ α + 1. Moreover, we have (a) for −1/2 < α < 0, Γ(α + 1) Γ((n − α)/2) 2 Γ((n − α + 1)/2) 2 α/2 p |ˆ uC | ≤ (1 − θ ) max , , n 2α−1 π Γ((n + α)/2 + 1) n2 − α(α + 1) Γ((n + α + 1)/2)
(4.51)
(b) for α ≥ 0, |ˆ uC n| ≤
Γ(α + 1) Γ((n − α)/2) . 2α−1 π Γ((n + α)/2 + 1)
(4.52)
Proof. (i) If α is an odd integer, we find u(k) = dkα |x − θ|α−k (sgn(x − θ))k ∈ AC(Ω), (α)
u
=
dα α
(2H(x − θ) − 1) ∈ BV(Ω),
0 ≤ k ≤ α − 1,
(α+1)
u
1 = 2dα α δ(x − θ) ∈ L (Ω),
(4.53)
where sign(z), H(z), δ(z) are the sign, Heaviside and Dirac Delta functions, respectively, and dkα := α(α − 1) · · · (α − k + 1) =
Γ(α + 1) . Γ(α − k + 1)
(4.54)
Thus, from the definition (4.6), we infer that u ∈ Wα+1 (Ω). Moreover, we obtain from (5.10) (with m = α), (4.17) and (4.53) that Z 1 2 1 (α+1) Γ(α + 1) (α+1) C √ G(α+1) (θ)ωα+1 (θ), u ˆn = u (x) Gn−α−1 (x)ωα+1 (x) dx = α−1 (2α + 1)!! π −1 2 Γ(α + 3/2) π n−α−1 which is identical to (4.50) with α being an odd integer, thanks to (2.22a). (ii) If α is not an integer, let m = [α] + 1 and s = {α + 1} ∈ (0, 1). Like (4.53), we have u, · · · , u(m−1) ∈ AC(Ω). By a direct calculation, we infer from (3.9) that for x ∈ (−1, θ), 1−s (m) u x Iθ
m−α [α]+1 = (−1)m dm (θ − x)α−m = (−1)m dm Γ(α + 1), α x Iθ α Γ(s) = (−1)
(4.55)
while for x ∈ (θ, 1), 1−s (m) u θ Ix
m−α = dm (x − θ)α−m = dm α θ Ix α Γ(s) = Γ(α + 1).
(4.56)
Therefore, by the definition (4.1), we have u ∈ Wα+1 (Ω). θ s (m) s (m) (x) = 0, so we can derive the exact It is clear that by (4.55)-(4.56), R (x) = R x Dθ u θ Dx u formula (4.50) by using (4.12) straightforwardly. (a) For −1/2 < α < 0, taking s = α + 1 in (4.13), we have Γ(α + 1) Γ((n − α)/2) 2 Γ((n − α + 1)/2) 2 α/2 C |ˆ un | ≤ α−1 (1 − θ ) max ,p , 2 π Γ((n + α)/2 + 1) n2 − α(α + 1) Γ((n + α + 1)/2) where we used the fact U0,α+1 = 2(1 − θ2 )α/2 Γ(α + 1). (b) Similarly, we obtain from (4.14) that for α ≥ 0, |ˆ uC n| ≤ This completes the proof.
Γ(α + 1) Γ((n − α)/2) . 2α−1 π Γ((n + α)/2 + 1)
Remark 4.2. As a special case of (4.50), we obtain from (2.22b) and (2.40) that the Chebyshev expansion coefficients of |x|α have the exact representation for each integer n ≥ 0, (n − α)π Γ(α + 1)Γ((n − α)/2) n−1 u ˆC = (−1) − 1 , sin (4.57) n 2α πΓ((n + α)/2 + 1) 2 which implies that for integer k ≥ 0, u ˆC 2k+1 = 0,
k u ˆC 2k = (−1) sin
απ Γ(α + 1) Γ(k − α/2) . 2 2α−1 π Γ(k + α/2 + 1)
(4.58)
It is noteworthy that the following asymptotic estimate for large k was obtained in [36, Sec. 3.11]: απ Γ(α + 1) −α−1 k , 2 2α−1 π but by different means. Indeed, our approach leads to exact representations for all modes. k u ˆC 2k ' (−1) sin
(4.59)
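The closed form (4.58) can be checked directly against quadrature of (4.9); a rough sketch (not from the paper, with an arbitrary non-integer $\alpha$) follows.

```python
# Check the closed form (4.58) for the Chebyshev coefficients of |x|^alpha.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha = 0.6
u = lambda x: np.abs(x) ** alpha

def coeff_quad(n):
    val, _ = quad(lambda t: u(np.cos(t)) * np.cos(n * t), 0, np.pi, points=[np.pi / 2], limit=400)
    return 2 / np.pi * val

for k in [1, 3, 10]:
    closed = ((-1) ** k * np.sin(alpha * np.pi / 2) * gamma(alpha + 1) / (2 ** (alpha - 1) * np.pi)
              * gamma(k - alpha / 2) / gamma(k + alpha / 2 + 1))
    print(2 * k, coeff_quad(2 * k), closed)   # the two values should agree
```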
Note that we can directly apply Theorem 4.2 to bound the errors of the Chebyshev expansion of the above type of singular functions. 4.4.2. Type-II: u(x) = |x − θ|α ln |x − θ|. We first derive a useful formula which, to the best of our knowledge, is not available in literature. Lemma 4.2. For real η > −1 and s ≥ 0, s a Ix {(x
− a)η ln(x − a)} =
Γ(η + 1) ln(x − a) + ψ(η + 1) − ψ(η + s + 1) (x − a)η+s , (4.60) Γ(η + s + 1)
for x > a, and the same formula holds for x Ibs {(b − x)η ln(b − x)} (for x < b) with b − x in place of x − a. Here, Γ0 (z) 1 1 < ψ(z) = < ln z − , z > 0. (4.61) ln z − 2z Γ(z) z Proof. Recall that (cf. [22, P. 53]) zε − 1 , ε→0 ε
ln z = lim
z > 0.
(4.62)
Thus from (3.9) and (4.62), we obtain for η > −1 and s ≥ 0, n η+ε − (x − a)η o s η s (x − a) a Ix {(x − a) ln(x − a)} = lim a Ix ε→0 ε o 1 n Γ(η + ε + 1) Γ(η + 1) = lim (x − a)η+ε+s − (x − a)η+s ε→0 ε Γ(η + ε + s + 1) Γ(η + s + 1) n Γ(η + ε + 1) (x − a)ε − 1 o =(x − a)η+s lim ε→0 Γ(η + ε + s + 1) ε n Γ(η + ε + 1) 1 Γ(η + 1) o + (x − a)η+s lim − ε→0 ε Γ(η + ε + s + 1) Γ(η + s + 1) Γ(η + 1) = ln(x − a) + ψ(η + 1) − ψ(η + s + 1) (x − a)η+s , Γ(η + s + 1) where in the last step, we used the L’Hospital’s rule. Thus, the formula (4.60) follows. The property of the ψ-function can be found in [3, (2.2)]. Note that we can derive the formula for x Ibs {(b−x)η ln(b− x)} in the same manner. Proposition 4.2. For any α ≥ 0 and θ ∈ (−1, 1), we have u(x) = |x − θ|α ln |x − θ| ∈ Wθα+1− (Ω),
∀ ∈ (0, 1).
(4.63)
Moreover, we have the following uniform bound of the Chebyshev expansion coefficients: [σ],{σ}
|ˆ uC n| ≤
Uθ Γ((n − σ − 1)/2) , 2σ−1 π Γ((n + σ + 1)/2)
(4.64)
−σ where σ := α + 1 − , and |ˆ uC for large n. n | ≤ Cn
Proof. Let m = [α] + 1 and ν = α − m. We derive from a direct calculation that u(k) (x) = (sgn(x − θ))k |x − θ|α−k dkα ln |x − θ| + fαk ),
k ≥ 0,
(4.65)
where dkα is the same as in (4.54), and fαk :=
k X j=1
(−1)j−1 Γ(k + 1)Γ(α + 1) . jΓ(k − j + 1)Γ(α − k + j + 1)
We see that u ∈ L1 (Ω) and u, · · · , u(m−1) ∈ AC(Ω). Next, using Lemma 4.2, we obtain that for x ∈ (θ, 1), 1−s (m) 1−s u =dm (x − θ)α−m ln(x − θ) + fαm θ Ix1−s (x − θ)α−m θ Ix α θ Ix Γ(ν + 1) ln(x − θ)(x − θ)ν+1−s Γ(ν + 2 − s) Γ(ν + 1) + ψ(ν + 1) − ψ(ν + 2 − s) + fαm (x − θ)ν+1−s . Γ(ν + 2 − s)
=dm α
Thus, if ν + 1 − s > 0, i.e., s < α + 1 − m, then θ Ix1−s u(m) ∈ BV(Ω+ θ ). Similarly, under the same 1−s (m) − condition, we have x Iθ u ∈ BV(Ωθ ). By the definition (4.1), we obtain u ∈ Wµθ (Ω), where µ = m + s < α + 1. This implies (4.63). The bound in (4.64) follows from (4.14) straightforwardly. Compared with Type-I, the singular function of Type-II possesses a slightly stronger singularity with an -difference in this fractional framework. It should be pointed out that the bound (4.64) appears nearly optimal for this type of logarithmic singularities, as one would expect O(n−α−1 ln n) (against O(n−(α+1) )). Below, we just justify this for θ = 0, but for θ 6= 0, it seems the derivation of the exact formula is much more involved. Proposition 4.3. Consider u(x) = |x|α ln |x|. Then u ˆC 2k+1 = 0, and n Γ(α + 1) Γ(k − α/2) απ απ u ˆC π cos + sin 2ψ(α + 1) 2k = 2α−2 π Γ(k + α/2 + 1) 2 2 o − 2 ln 2 − ψ(k − α/2) − ψ(k + α/2 + 1) , ∀ k ∈ N0 . Moreover, we have the asymptotic behaviour o Γ(α + 1) −α−1 n π απ απ u ˆC = k cos + sin ψ(α + 1) − ln 2 − ln k 2k 2α−3 π 2 2 2 απ + O(k −α−3 ln k) sin + O(k −α−3 ), k 1. 2
(4.66)
(4.67)
Proof. As u(x) is an even function, we have u ˆC 2k+1 = 0. Using (4.62), we derive from (4.58) that Z 1n |x|ε+α − |x|α o T2k (x) 2 √ lim u ˆC dx 2k = π −1 ε→0 ε 1 − x2 1n (ε + α)π Γ(ε + α + 1) Γ(k − (ε + α)/2) (4.68) = (−1)k lim sin ε→0 ε 2 2ε+α−1 π Γ(k + (ε + α)/2 + 1) απ Γ(α + 1) Γ(k − α/2) o − sin . 2 2α−1 π Γ(k + α/2 + 1)
Noting that dn (ε + α)π Γ(ε + α + 1) Γ(k − (ε + α)/2) o sin dε 2 2ε Γ(k + (ε + α)/2 + 1) Γ(ε + α + 1) Γ(k − (ε + α)/2) n π (ε + α)π (ε + α)π = cos + sin 2ε Γ(k + (ε + α)/2 + 1) 2 2 2 o × ψ(ε + α + 1) − ln 2 − ψ(k − (ε + α)/2)/2 − ψ(k + (ε + α)/2 + 1)/2 , we obtain (4.66) from (4.68) and the L’Hospital’s rule immediately. Taking z = k − α/2 in (4.61), we obtain ln(k − α/2) −
1 1 < ψ(k − α/2) < ln(k − α/2) − , 2k − α k − α/2
which implies ψ(k − α/2) = ln k + O(k −1 ),
k 1.
(4.69)
Similarly, we have ψ(k + α/2 + 1) = ln k + O(k −1 ), k 1.
(4.70)
Recall that (see, e.g., [37, (5.11.13)]): for a < b, Γ(z + a) 1 = z a−b + (a − b)(a + b − 1)z a−b−1 + O(z a−b−2 ), Γ(z + b) 2
z 1,
(4.71)
which implies Γ(k − α/2) = k −α−1 1 + O(k −2 ) , k 1. Γ(k + α/2 + 1) From (4.66) and (4.69)-(4.72), we obtain (4.67).
(4.72)
Remark 4.3. Interestingly, if α is an even integer, we observe from (4.67) that the convergence rate is O(k −α−1 ), rather than O(k −α−1 ln k) for other values of α. 5. Improving existing results In this section, we show that the previous estimates with s → 1 improve the existing results on Chebyshev approximations in [43, 52, 44, 32]. 5.1. Some existing estimates. Define the Sobolev-type space (cf. [43]): n o 1 0 (m−1) (m) (m+1) 1 Wm+1 (Ω) := u ∈ L (Ω) : u, u , · · · , u ∈ AC(Ω), u ∈ BV(Ω), u ∈ L (Ω) , ω T
(5.1)
where ω = (1 − x2 )−1/2 is the Chebyshev weight function. Let k · kT be the Chebyshev-weighted 1-norm defined by Z 1 |u0 (x)| √ kukT = dx = ku0 kL1ω (Ω) . (5.2) 1 − x2 −1 Lemma 5.1. (cf. [43, Theorems 4.2-4.3]). If u ∈ Wm+1 (Ω) with integer m ≥ 0, then for each T n ≥ m + 1, C 2 VT u , (5.3) ˆn ≤ πn(n − 1) · · · (n − m) and for integer m ≥ 1, and integer N ≥ m + 1,
2 VT C
u − πN u L∞ (Ω) ≤ , (5.4) πm (N − m)m where VT = ku(m) kT = ku(m+1) kL1ω (Ω) .
It is noteworthy that the condition: u(m) ∈ BV(Ω) was not explicitly stated in [43, Theorems 4.2-4.3], which was added to the improved version in Trefethen [44]. Moreover, VT was replaced by the total variation of u(m) in [44, Thm 7.1] (where the proof was different, and based on the contour integral representation of uC n ). Following the argument of using certain telescoping series for the summations in [52], Majidian (cf. [32, Thm 2.1]) derived sharper bounds. For comparison, we quote the estimates therein below. Lemma 5.2. (cf. [32, Thm 2.1]). If u ∈ Wm+1 (Ω) with integer m ≥ 0, then for each n ≥ m + 1, T m C 2 VT Y 1 u , ˆn ≤ π j=0 n − m + 2j
(5.5)
where VT = ku(m+1) kL1ω (Ω) . Remark 5.1. It should be pointed out that the original bounds in [32] were stated as k Y 1 , if m = 2k, C 2 VT l=−k n + 2l u ˆn ≤ k+1 π 1 Y , if m = 2k + 1. n + 2l − 1
(5.6)
l=−k
In fact, letting j = l + k, we can rewrite the bounds in (5.6) as (5.5) simply.
5.2. Improved estimates. Theorem 5.1. Let u ∈ Wm+1 (Ω) with integer m ≥ 0, and denote VL = ku(m+1) kL1 (Ω) . (i) If n ≥ m + 1 and n − m is odd, then we have m C 2VL Y 1 u ˆn ≤ . π j=0 n − m + 2j
(5.7)
(ii) If n ≥ m + 1 and n − m is even, then we have C u ˆn ≤
m−1 Y 2VL 1 . 2 2 π n − m j=0 n − m + 2j − 1
√
(5.8)
(iii) If 0 ≤ n ≤ m + 1, then we have |ˆ uC n| ≤
2 ku(n) kL1 (Ω) . π(2n − 1)!!
(5.9)
Proof. For any u ∈ Wm+1 (Ω), we find from (4.12) (or (4.16) with one more step of integration by parts) that for n ≥ m + 1, Z 1 2 1 (m+1) (m+1) C (5.10) u ˆn = u (x) Gn−m−1 (x)ωm+1 (x) dx. (2m + 1)!! π −1 Thus, by (5.10), C u ˆn ≤
n o VL 2 (m+1) max ωm+1 (x)|Gn−m−1 (x)| . (2m + 1)!! π |x|≤1
If n = m + 2p + 1 with p ∈ N0 , we derive from (2.42a) with l = p and λ = m + 1 that n o Γ(m + 3/2)Γ(p + 1/2) (2m + 1)!! (2p − 1)!! (m+1) max ωm+1 (x)|Gn−m−1 (x)| ≤ √ = . (2m + 2p + 1)!! π Γ(m + p + 3/2) |x|≤1
(5.11)
(5.12)
Consequently, for n = m + 2p + 1 with p ∈ Nn , we obtain from (5.11)-(5.12) that C 2 (2p − 1)!! VL 2 VL u ˆn ≤ = π (2p + 2m + 1)!! π (2p + 1) · (2p + 3) · · · (2p + 2m + 1) 2 VL = , π (n − m) · (n − m + 2) · · · (n + m)
(5.13)
which implies (5.7). Similarly, if n = m + 2p + 2 with p ∈ N0 , we derive from (2.42b) with l = p and λ = m + 1 that n o (2m + 1)!! (2p + 1)!! 1 (m+1) , max ωm+1 (x)|Gn−m−1 (x)| ≤ p (5.14) |x|≤1 (2p + 2)(2m + 2p + 1) (2m + 2p + 1)!! so by (5.11), we have C 2 1 VL u ˆn ≤ p π (2p + 2)(2m + 2p + 1) (2p + 3) · (2p + 5) · · · (2p + 2m + 1) =
VL 2 1 √ . 2 2 π n − m (n − m + 1) · (n − m + 3) · · · (n + m − 1)
(5.15)
This leads to (5.8). (n) In case of 0 ≤ n ≤ m + 1, we derive from (5.10) (with n = m + 1) and the factor G0 (x) ≡ 1 that Z 1 2 1 (n) (5.16) u ˆC = u (x) ωn (x) dx. n (2n − 1)!! π −1 Then we obtain (5.9) immediately.
For convenience of the analysis, we unify the bounds in (i)-(ii) of Theorem 5.1 without loss of the rate of convergence. In fact, this relaxation leads to the estimate (5.5) in [32, Thm 2.1], but with $V_L$ in place of $V_T$. In other words, the bounds in Theorem 5.1 indeed improve the best available results.

Corollary 5.1. Let $u\in W^{m+1}(\Omega)$ with integer $m\ge 0$, and denote $V_L=\|u^{(m+1)}\|_{L^1(\Omega)}$. Then we have that for all $n\ge m+1$,
\[
|\hat u_n^C| \le \frac{2V_L}{\pi}\prod_{j=0}^{m}\frac{1}{n-m+2j}. \tag{5.17}
\]
Proof. It is evident from (5.7)-(5.8) that we only need to prove this bound for $n-m$ even. One readily verifies the fundamental inequality
\[
n^2-(p-1)^2 \ge \sqrt{(n^2-p^2)(n^2-(p-2)^2)}, \qquad 2\le p\le n.
\]
Hence, if $m$ is even, we can pair up the factors and use the above inequality with $p=m, m-2, \cdots, 2$ to derive
\[
\begin{aligned}
(n-m+1)(n-m+3)\cdots(n-1)(n+1)\cdots(n+m-3)(n+m-1) &= (n^2-(m-1)^2)(n^2-(m-3)^2)\cdots(n^2-1)\\
&\ge \sqrt{n^2-m^2}\,\sqrt{n^2-(m-2)^2}\,\sqrt{n^2-(m-2)^2}\,\sqrt{n^2-(m-4)^2}\cdots\\
&= \sqrt{n^2-m^2}\,(n-m+2)\cdots(n+m-2).
\end{aligned} \tag{5.18}
\]
Similarly, if $m$ is odd, we keep the middlemost factor intact and pair up the remaining factors to derive the same inequality. Therefore, multiplying both sides of (5.18) by $\sqrt{n^2-m^2}$, we obtain
\[
\frac{1}{\sqrt{n^2-m^2}}\prod_{j=1}^{m}\frac{1}{n-m+2j-1} \le \prod_{j=0}^{m}\frac{1}{n-m+2j}. \tag{5.19}
\]
Then (5.17) follows from (5.19) and (i)-(ii) of Theorem 5.1 directly.
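The relaxation (5.19) can also be confirmed numerically; the following short script (ours; the helper names are hypothetical) checks it for a range of $m$ and $n$.
\begin{verbatim}
# Numerical spot-check (ours) of the relaxation (5.19):
#   (1/sqrt(n^2 - m^2)) * prod_{j=1}^{m} 1/(n-m+2j-1) <= prod_{j=0}^{m} 1/(n-m+2j)
from math import prod, sqrt

def lhs(n, m):
    return prod(1.0 / (n - m + 2 * j - 1) for j in range(1, m + 1)) / sqrt(n * n - m * m)

def rhs(n, m):
    return prod(1.0 / (n - m + 2 * j) for j in range(m + 1))

for m in range(1, 9):
    for n in range(m + 1, m + 50):
        assert lhs(n, m) <= rhs(n, m) * (1 + 1e-12)
\end{verbatim}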
As a numerical example, we consider $u=|x-\theta|$, $\theta\in(-1,1)$, to compare upper bounds of $\hat u_n^C$. In this case, we have $m=1$, $u''=2\delta(x-\theta)$, $V_L=2$ and $V_T=2(1-\theta^2)^{-1/2}$. Let Ratio1 and Ratio2 be the ratios between the upper bounds in [44] and [32], respectively (cf. (5.3) with $V_T$ replaced by the total variation of $u'$, and the bound in (5.5)), and our improved bound in Theorem 5.1. In Figure 5.1, we depict the two ratios against various $n$ for two values of $\theta$. We see that the improved bound is sharper than the existing ones, and that the removal of the Chebyshev weight in $V_T$ also contributes significantly to the sharpness of the bounds.
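The comparison behind Figure 5.1 (shown below) is easy to reproduce; the sketch that follows is our own illustration, not the authors' code. It compares only the bound (5.5) with $V_T$ against the bound (5.7) with $V_L$, since (5.3) is stated earlier in the paper; the choices $\theta=0.5$ and the listed values of $n$ are ours.
\begin{verbatim}
# Sketch (ours) of the comparison for u(x) = |x - theta|: Chebyshev
# coefficients via adaptive quadrature vs. the bounds (5.5) and (5.7).
import numpy as np
from scipy.integrate import quad

theta = 0.5
V_L = 2.0                                # since u'' = 2*delta(x - theta)
V_T = 2.0 / np.sqrt(1.0 - theta ** 2)    # Chebyshev-weighted variation

def cheb_coeff(n):
    # \hat u_n^C = (2/pi) int_0^pi |cos t - theta| cos(n t) dt, split at t0
    t0 = np.arccos(theta)
    f = lambda t: abs(np.cos(t) - theta) * np.cos(n * t)
    I1, _ = quad(f, 0.0, t0, limit=200)
    I2, _ = quad(f, t0, np.pi, limit=200)
    return 2.0 / np.pi * (I1 + I2)

for n in (10, 20, 40, 80):                         # here m = 1 and n - m is odd
    old = 2.0 * V_T / (np.pi * (n - 1) * (n + 1))  # bound (5.5)
    new = 2.0 * V_L / (np.pi * (n - 1) * (n + 1))  # bound (5.7)
    print(n, abs(cheb_coeff(n)), old, new, new / old)
\end{verbatim}
Note that for $m=1$ the ratio of (5.7) to (5.5) reduces to $V_L/V_T=\sqrt{1-\theta^2}$.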
[Figure 5.1 appears here: two panels plotting Ratio 1 and Ratio 2 against n = 10, 20, ..., 100.]
Figure 5.1. Ratios of the existing bounds and the improved bound herein for $u=|x-\theta|$, $\theta\in(-1,1)$. Left: $\theta=1/2$. Right: $\theta=4/5$.

To conclude this section, we state below the improved $L^\infty$-estimates.

Theorem 5.2. Let $u\in W^{m+1}(\Omega)$ with integer $m\ge 0$.
(i) If $1\le m\le N$, then
\[
\|u-\pi_N^C u\|_{L^\infty(\Omega)} \le \frac{2}{m\pi}\prod_{j=1}^{m}\frac{1}{N-m+2j-1}\,\|u^{(m+1)}\|_{L^1(\Omega)}. \tag{5.20}
\]
(ii) If $m=0$, then for all integers $N\ge 1$,
\[
\|u-\pi_N^C u\|_{L^\infty(\Omega)} \le \|u'\|_{L^1(\Omega)}. \tag{5.21}
\]
(iii) If $m\ge N+1$, then
\[
\|u-\pi_N^C u\|_{L^\infty(\Omega)} \le \frac{2}{(2N+1)!!\,\pi}\sum_{n=N}^{m}c_n\,\frac{(2N+1)!!}{(2n+1)!!}\,\|u^{(n+1)}\|_{L^1(\Omega)}, \tag{5.22}
\]
where $c_n=1$ for all $N\le n\le m-1$ and $c_m=2$.

Proof. From Theorem 4.2 with $s\to 1$ and (4.17), we obtain that for $1\le m\le N+1$,
\[
\|u-\pi_N^C u\|_{L^\infty(\Omega)} \le \frac{1}{2^{m-1}m\pi}\,\frac{\Gamma((N-m+1)/2)}{\Gamma((N+m+1)/2)}\,\|u^{(m+1)}\|_{L^1(\Omega)} = \frac{2}{m\pi}\,\frac{(N-m-1)!!}{(N+m-1)!!}\,\|u^{(m+1)}\|_{L^1(\Omega)} = \frac{2}{m\pi}\prod_{j=1}^{m}\frac{1}{N-m+2j-1}\,\|u^{(m+1)}\|_{L^1(\Omega)}. \tag{5.23}
\]
This gives (5.20).
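The Gamma-to-product reduction used in (5.23) can be checked directly; the following snippet (our own verification, not part of the paper) confirms it numerically for a range of $m$ and $N$.
\begin{verbatim}
# Check (ours) of the reduction used in (5.23):
#   (1/2^{m-1}) Gamma((N-m+1)/2) / Gamma((N+m+1)/2)
#       = 2 * prod_{j=1}^{m} 1/(N - m + 2j - 1)
from math import prod, lgamma, exp

for m in range(1, 9):
    for N in range(m, m + 40):
        left = exp(lgamma((N - m + 1) / 2) - lgamma((N + m + 1) / 2)) / 2 ** (m - 1)
        right = 2.0 * prod(1.0 / (N - m + 2 * j - 1) for j in range(1, m + 1))
        assert abs(left - right) <= 1e-10 * right
\end{verbatim}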
We now prove (5.21). Using integration by parts leads to
\[
\begin{aligned}
(u-\pi_N^C u)(x) &= \sum_{n=N+1}^{\infty}\hat u_n^C\,T_n(x) = \sum_{n=N+1}^{\infty}\Big(\frac{2}{\pi}\int_0^{\pi}u(\cos\varphi)\cos(n\varphi)\,d\varphi\Big)\cos(n\theta)\\
&= \sum_{n=N+1}^{\infty}\frac{2}{\pi}\,\frac{\cos(n\theta)}{n}\int_0^{\pi}u'(\cos\varphi)\sin(n\varphi)\sin\varphi\,d\varphi\\
&= \frac{2}{\pi}\int_0^{\pi}u'(\cos\varphi)\sin\varphi\,\Psi_N^{\infty}(\varphi,\theta)\,d\varphi,
\end{aligned} \tag{5.24}
\]
so we have
\[
\big|(u-\pi_N^C u)(x)\big| \le \frac{2}{\pi}\max_{\varphi\in[0,\pi]}\big|\Psi_N^{\infty}(\varphi,\theta)\big|\int_0^{\pi}|u'(\cos\varphi)|\sin\varphi\,d\varphi = \frac{2}{\pi}\max_{\varphi\in[0,\pi]}\big|\Psi_N^{\infty}(\varphi,\theta)\big|\,\|u'\|_{L^1(\Omega)}, \tag{5.25}
\]
for $x=\cos\theta$, $\theta\in(0,\pi)$, where
\[
\Psi_N^{\infty}(\varphi,\theta) = \sum_{n=N+1}^{\infty}\frac{\sin(n\varphi)\cos(n\theta)}{n} = \sum_{n=N+1}^{\infty}\frac{\sin(n(\varphi+\theta))+\sin(n(\varphi-\theta))}{2n}. \tag{5.26}
\]
We next show that for $\vartheta\in\mathbb{R}$,
\[
\Big|\sum_{n=N+1}^{\infty}\frac{\sin(n\vartheta)}{n}\Big| \le \frac{\pi}{2}. \tag{5.27}
\]
In fact, it suffices to derive this bound for $\vartheta\in(0,\pi)$, as the series defines an odd, $2\pi$-periodic function which vanishes at $\vartheta=0,\pi$. It is known that
\[
\sum_{n=1}^{\infty}\frac{\sin(n\vartheta)}{n} = \frac{\pi-\vartheta}{2}, \qquad \vartheta\in(0,\pi). \tag{5.28}
\]
According to [4], we have that for $N\ge 2$,
\[
0 < \sum_{n=1}^{N}\frac{\sin(n\vartheta)}{n} \le \alpha(\pi-\vartheta), \qquad \vartheta\in(0,\pi), \tag{5.29}
\]
with the best possible constant $\alpha=0.66395\cdots$. As a direct consequence of (5.28)-(5.29), we have
\[
\Big(\frac{1}{2}-\alpha\Big)(\pi-\vartheta) \le \sum_{n=N+1}^{\infty}\frac{\sin(n\vartheta)}{n} < \frac{\pi-\vartheta}{2}, \tag{5.30}
\]
and
\[
\Big|\sum_{n=N+1}^{\infty}\frac{\sin(n\vartheta)}{n}\Big| < \frac{\pi-\vartheta}{2}, \tag{5.31}
\]
for $N\ge 2$ and $\vartheta\in(0,\pi)$. In fact, the bound (5.31) also holds for $N=1$, as by (5.28),
\[
\sum_{n=2}^{\infty}\frac{\sin(n\vartheta)}{n} = \frac{\pi-\vartheta}{2}-\sin\vartheta < \frac{\pi-\vartheta}{2}.
\]
Thus, we complete the proof of (5.27). The estimate (5.21) is a direct consequence of (5.25)-(5.27).
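A quick numerical illustration of (5.27)-(5.31) (ours, using the identity (5.28) to express the tail exactly as the limit minus the $N$-term partial sum) is the following; the choices of $N$ and the grid in $\vartheta$ are arbitrary.
\begin{verbatim}
# Numerical illustration (ours) of the tail bound (5.27), using (5.28):
#   sum_{n>N} sin(n*theta)/n = (pi - theta)/2 - sum_{n=1}^{N} sin(n*theta)/n
import numpy as np

theta = np.linspace(1e-6, np.pi - 1e-6, 20000)
for N in (1, 2, 5, 20, 100):
    head = sum(np.sin(n * theta) / n for n in range(1, N + 1))
    tail = (np.pi - theta) / 2 - head
    print(N, np.abs(tail).max(), np.pi / 2)   # observed maximum vs. the bound pi/2
\end{verbatim}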
Finally, we turn to the proof of the estimate (5.22). For $m\ge N+1$, we use (5.9) to bound $\{\hat u_n^C\}_{n=N+1}^{m}$, and use (5.20) (with $N\to m$) to derive
\[
\begin{aligned}
\big|u(x)-\pi_N^C u(x)\big| &\le \big|\pi_m^C u(x)-\pi_N^C u(x)\big| + \big|u(x)-\pi_m^C u(x)\big|\\
&\le \sum_{n=N+1}^{m}\frac{2}{\pi(2n-1)!!}\,\|u^{(n)}\|_{L^1(\Omega)} + \frac{2}{m(2m-1)!!\,\pi}\,\|u^{(m+1)}\|_{L^1(\Omega)}\\
&\le \frac{2}{\pi(2N+1)!!}\Big(\sum_{n=N+1}^{m}\frac{(2N+1)!!}{(2n-1)!!}\,\|u^{(n)}\|_{L^1(\Omega)} + \frac{2m+1}{m}\,\frac{(2N+1)!!}{(2m+1)!!}\,\|u^{(m+1)}\|_{L^1(\Omega)}\Big)\\
&\le \frac{2}{\pi(2N+1)!!}\sum_{n=N}^{m}c_n\,\frac{(2N+1)!!}{(2n+1)!!}\,\|u^{(n+1)}\|_{L^1(\Omega)},
\end{aligned} \tag{5.32}
\]
where $c_n=1$ for all $N\le n\le m-1$ and $c_m=2$.
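As a sanity check of (5.20) (our own experiment; the test function $u(x)=|x|^3$, for which $m=2$ and $\|u'''\|_{L^1(\Omega)}=12$, is an illustrative choice), one can compare the actual $L^\infty$ truncation error of the Chebyshev expansion with the bound.
\begin{verbatim}
# Numerical sanity check (ours) of (5.20) with u(x) = |x|^3 (m = 2, ||u'''||_{L^1} = 12).
import numpy as np
from scipy.integrate import quad
from numpy.polynomial.chebyshev import chebval

def coeff(n):
    f = lambda t: np.abs(np.cos(t)) ** 3 * np.cos(n * t)
    I, _ = quad(f, 0.0, np.pi, limit=200)
    return (1.0 if n == 0 else 2.0) / np.pi * I   # c_0 carries the halved factor

x = np.linspace(-1.0, 1.0, 5001)
u = np.abs(x) ** 3
m, norm = 2, 12.0
for N in (8, 16, 32, 64):
    c = np.array([coeff(n) for n in range(N + 1)])
    err = np.max(np.abs(u - chebval(x, c)))                  # || u - pi_N^C u ||_inf
    bound = 2.0 / (m * np.pi) * norm / ((N - 1) * (N + 1))   # (5.20) with m = 2
    print(N, err, bound, err <= bound)
\end{verbatim}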
In summary, by taking a different route, we improve the existing bounds in the following senses. (i) The Chebyshev-weighted 1-norm in Lemma 5.2 is replaced by the Legendre-weighted 1-norm, i.e., the usual $L^1$-norm. (ii) A sharper bound is obtained for the case $m=2k+1$ in (5.6). (iii) We obtain the "stability" result, that is, $m=0$ in (5.4), and the estimate for the case $n\le m+1$ in (5.9), which are new.

6. Dealing with endpoint singularities

The previous discussions centred around Chebyshev expansions and the approximation of functions with interior singularities. In this section, we extend the results to the cases $\theta=\pm1$, and study endpoint singularities. To fix ideas, we focus on the exact formulas and decay rate of the Chebyshev expansion coefficients, as these are the basis for deriving many other related error bounds.

Let $W^{m+s}_{\pm}(\Omega)$ be the fractional Sobolev-type spaces defined in (4.7). The following representation of $\hat u_n^C$ is a direct consequence of Theorem 4.1.

Theorem 6.1. If $u\in W^{\sigma}_{+}(\Omega)$ with $\sigma:=m+s$, $s\in(0,1)$ and $m\in\mathbb{N}_0$, then for $n\ge\sigma>1/2$,
\[
\hat u_n^C = -C_\sigma\Big\{\int_{-1}^{1}{}^{R}_{x}D_{1}^{s}u^{(m)}(x)\;{}^{l}G^{(\sigma)}_{n-\sigma}(x)\,\omega_\sigma(x)\,dx + {}_{x}I_{1}^{1-s}u^{(m)}(x)\;{}^{l}G^{(\sigma)}_{n-\sigma}(x)\,\omega_\sigma(x)\Big|_{x=1^{-}}\Big\}. \tag{6.1}
\]
Similarly, if $u\in W^{\sigma}_{-}(\Omega)$ with $\sigma:=m+s$, $s\in(0,1)$ and $m\in\mathbb{N}_0$, then for $n\ge\sigma>1/2$,
\[
\hat u_n^C = C_\sigma\Big\{\int_{-1}^{1}{}^{R}_{-1}D_{x}^{s}u^{(m)}(x)\;{}^{r}G^{(\sigma)}_{n-\sigma}(x)\,\omega_\sigma(x)\,dx + {}_{-1}I_{x}^{1-s}u^{(m)}(x)\;{}^{r}G^{(\sigma)}_{n-\sigma}(x)\,\omega_\sigma(x)\Big|_{x=-1^{+}}\Big\}. \tag{6.2}
\]
Here,
\[
C_\sigma := \frac{1}{\sqrt{\pi}\,2^{\sigma-1}\Gamma(\sigma+1/2)}, \qquad \omega_\lambda(x) = (1-x^2)^{\lambda-1/2}. \tag{6.3}
\]
We next apply the formulas to several typical types of singular functions. We first consider $u(x)=(1+x)^{\alpha}$ with $\alpha>-1/2$ and $\alpha\notin\mathbb{N}_0$ (see, e.g., [45, 47]). Following the proof of Proposition 4.1, we have $u\in W^{\alpha+1}_{-}(\Omega)$. Then using (6.2), one obtains the exact formula for the Chebyshev expansion coefficients. Equivalently, one can derive this formula by taking $\theta\to-1^{+}$ in (4.50). More precisely, by (2.22b) and (4.50),
\[
\hat u_n^C = \frac{\Gamma(\alpha+1)}{2^{\alpha}\Gamma(\alpha+3/2)\sqrt{\pi}}\lim_{\theta\to-1^{+}}\Big\{{}^{r}G^{(\alpha+1)}_{n-\alpha-1}(\theta)\,\omega_{\alpha+1}(\theta)-(-1)^{[\alpha]}\,{}^{l}G^{(\alpha+1)}_{n-\alpha-1}(\theta)\,\omega_{\alpha+1}(\theta)\Big\}
= \frac{\Gamma(\alpha+1)}{2^{\alpha}\Gamma(\alpha+3/2)\sqrt{\pi}}\lim_{\theta\to-1^{+}}{}^{r}G^{(\alpha+1)}_{n-\alpha-1}(\theta)\,\omega_{\alpha+1}(\theta). \tag{6.4}
\]
Using (2.25), we find that for $\lambda>1/2$,
\[
\lim_{x\to-1^{+}}\omega_\lambda(x)\,{}^{r}G^{(\lambda)}_{\nu}(x) = -2^{2\lambda-1}\,\frac{\sin(\nu\pi)}{\pi}\,\frac{\Gamma(\lambda-1/2)\Gamma(\lambda+1/2)\Gamma(\nu+1)}{\Gamma(\nu+2\lambda)}. \tag{6.5}
\]
Therefore, from (4.39) and (6.4)-(6.5), we obtain the formula
\[
\hat u_n^C = \frac{(-1)^{n+1}\sin(\pi\alpha)\Gamma(2\alpha+1)}{2^{\alpha-1}\pi}\,\frac{\Gamma(n-\alpha)}{\Gamma(n+\alpha+1)}, \qquad n\ge\alpha+1, \tag{6.6}
\]
and for large $n$, we have $|\hat u_n^C| = O(n^{-2\alpha-1})$.
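Formula (6.6) and the predicted decay rate are easy to confirm numerically; the sketch below is our own check (the value $\alpha=0.3$ and the function names are illustrative assumptions), comparing (6.6) with coefficients computed by adaptive quadrature.
\begin{verbatim}
# Check (ours) of (6.6) for u(x) = (1+x)^alpha, alpha = 0.3.
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

alpha = 0.3

def coeff_quad(n):
    f = lambda t: (1.0 + np.cos(t)) ** alpha * np.cos(n * t)
    I, _ = quad(f, 0.0, np.pi, limit=400)
    return 2.0 / np.pi * I

def coeff_formula(n):     # formula (6.6)
    pref = (-1) ** (n + 1) * np.sin(np.pi * alpha) / (2 ** (alpha - 1) * np.pi)
    return pref * np.exp(gammaln(2 * alpha + 1) + gammaln(n - alpha) - gammaln(n + alpha + 1))

for n in (2, 5, 10, 40, 100):
    q, f = coeff_quad(n), coeff_formula(n)
    print(n, q, f, n ** (2 * alpha + 1) * abs(f))   # last column levels off as n grows
\end{verbatim}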
With the aid of (6.6), we next consider a more general case: $u(x)=(1+x)^{\alpha}g(x)$ with $g(x)$ being a sufficiently smooth function. Here, we also need the formula for $\hat u_n^C$ of $(1+x)^{\alpha}$ with $n<\alpha+1$. Taking $m=n$ in (4.18) and using the properties of the Beta function yields
\[
\hat u_n^C = \frac{1}{\sqrt{\pi}\,2^{n-1}\Gamma(n+1/2)}\int_{-1}^{1}u^{(n)}(x)\,G_0^{(n)}(x)\,\omega_n(x)\,dx
= \frac{1}{\sqrt{\pi}\,2^{n-1}\Gamma(n+1/2)}\,\frac{\Gamma(\alpha+1)}{\Gamma(\alpha-n+1)}\int_{-1}^{1}(1+x)^{\alpha-n}(1-x^2)^{n-1/2}\,dx
= \frac{2^{\alpha+1}\Gamma(\alpha+1)\Gamma(\alpha+1/2)}{\sqrt{\pi}\,\Gamma(\alpha-n+1)\Gamma(\alpha+n+1)}. \tag{6.7}
\]
Using the Taylor expansion of $g(x)$ at $x=-1$, we obtain from (6.6)-(6.7) that
\[
\hat u_n^C = \sum_{l=0}^{[n-1-\alpha]}\frac{g^{(l)}(-1)}{l!}\,\frac{(-1)^{n+1}\sin(\pi(\alpha+l))\Gamma(2\alpha+2l+1)}{2^{\alpha+l-1}\pi}\,\frac{\Gamma(n-\alpha-l)}{\Gamma(n+\alpha+l+1)}
+ \sum_{l=[n-\alpha]}^{\infty}\frac{g^{(l)}(-1)}{l!}\,\frac{2^{\alpha+l+1}\Gamma(\alpha+l+1)\Gamma(\alpha+l+1/2)}{\sqrt{\pi}\,\Gamma(\alpha+l-n+1)\Gamma(\alpha+l+n+1)}
= \frac{(-1)^{n+1}g(-1)\sin(\pi\alpha)\Gamma(2\alpha+1)}{2^{\alpha-1}\pi}\,n^{-2\alpha-1} + O(n^{-2\alpha-3}), \tag{6.8}
\]
where we used (4.71).

Remark 6.1. One can find asymptotic formulas similar to (6.6) and (6.8) in [45], which were obtained by a very different approach. Observe from Proposition 4.1 and the above formulas that an order $O(n^{-\alpha})$ is gained for the Chebyshev expansion coefficients when the singular point is at $x=\pm1$.

Finally, we consider the singular function $u(x)=(1+x)^{\alpha}\ln(1+x)$. Using (4.62), we derive from (4.58) and L'Hôpital's rule that
\[
\begin{aligned}
\hat u_n^C &= \frac{2}{\pi}\int_{-1}^{1}\lim_{\varepsilon\to0}\Big\{\frac{(1+x)^{\alpha+\varepsilon}-(1+x)^{\alpha}}{\varepsilon}\Big\}\,\frac{T_n(x)}{\sqrt{1-x^2}}\,dx\\
&= \frac{2}{\pi}(-1)^{n+1}\lim_{\varepsilon\to0}\frac{1}{\varepsilon}\Big\{\frac{\sin(\pi(\alpha+\varepsilon))\Gamma(2\alpha+2\varepsilon+1)\Gamma(n-\alpha-\varepsilon)}{2^{\alpha+\varepsilon}\,\Gamma(n+\alpha+\varepsilon+1)}-\frac{\sin(\pi\alpha)\Gamma(2\alpha+1)\Gamma(n-\alpha)}{2^{\alpha}\,\Gamma(n+\alpha+1)}\Big\}\\
&= \frac{(-1)^{n+1}\Gamma(2\alpha+1)}{\pi\,2^{\alpha-1}}\,\frac{\Gamma(n-\alpha)}{\Gamma(n+\alpha+1)}\Big\{\pi\cos(\alpha\pi)+\sin(\alpha\pi)\big(2\psi(2\alpha+1)-\ln 2-\psi(n+\alpha+1)-\psi(n-\alpha)\big)\Big\}.
\end{aligned}
\]
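The closed form above (as reconstructed here) can likewise be spot-checked numerically; the sketch below is our own, with $\alpha=0.3$ an arbitrary choice and the helper names hypothetical.
\begin{verbatim}
# Spot check (ours) of the exact formula for u(x) = (1+x)^alpha * ln(1+x).
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln, digamma

alpha = 0.3

def coeff_quad(n):
    f = lambda t: (1.0 + np.cos(t)) ** alpha * np.log1p(np.cos(t)) * np.cos(n * t)
    I, _ = quad(f, 0.0, np.pi, limit=400)
    return 2.0 / np.pi * I

def coeff_formula(n):
    pref = (-1) ** (n + 1) / (np.pi * 2 ** (alpha - 1))
    ratio = np.exp(gammaln(2 * alpha + 1) + gammaln(n - alpha) - gammaln(n + alpha + 1))
    bracket = (np.pi * np.cos(alpha * np.pi)
               + np.sin(alpha * np.pi) * (2 * digamma(2 * alpha + 1) - np.log(2.0)
                                          - digamma(n + alpha + 1) - digamma(n - alpha)))
    return pref * ratio * bracket

for n in (2, 5, 10, 40):
    print(n, coeff_quad(n), coeff_formula(n))
\end{verbatim}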
Similar to Remark 4.3, we have the asymptotic behaviour
\[
\hat u_n^C = \frac{(-1)^{n+1}\Gamma(2\alpha+1)}{\pi\,2^{\alpha-1}}\Big\{\big(\pi\cos(\alpha\pi)+\sin(\alpha\pi)(2\psi(2\alpha+1)-\ln 2)\big)\,n^{-2\alpha-1}
-\big(2\,n^{-2\alpha-1}\ln n + O(n^{-2\alpha-3}\ln n)\big)\sin(\alpha\pi)\Big\} + O(n^{-2\alpha-3}).
\]

Concluding remarks

We introduced a new theoretical framework for orthogonal polynomial approximation of functions with limited regularity (or interior/endpoint singularities). The proposed fractional Sobolev-type spaces arise naturally from the analytic representations of the expansion coefficients involving RL fractional integrals/derivatives and GGF-Fs. In this first part, we restricted our attention to Chebyshev expansions, but the analysis can be extended to the related interpolation and quadratures, and to general Jacobi cases. In later parts, we also intend to apply this framework to multiple dimensions and to problems with corner singularities. With the explicit information encoded in the fractional spaces, we are confident that this framework can be a superior alternative to the existing tools for characterising singular functions.

References

[1] M. Abramowitz and I.A. Stegun. Handbook of Mathematical Functions. Dover, New York, 1972.
[2] R.A. Adams. Sobolev Spaces. Academic Press, New York, 1975.
[3] H. Alzer. On some inequalities for the Gamma and Psi functions. Math. Comput., 66(217):373-389, 1997.
[4] H. Alzer and S. Koumandos. Sharp inequalities for trigonometric sums. Math. Proc. Camb. Phil. Soc., 139:139-152, 2003.
[5] G.E. Andrews, R. Askey, and R. Roy. Special Functions, Encyclopedia of Mathematics and its Applications, Vol. 71. Cambridge University Press, Cambridge, 1999.
[6] I. Babuška, B.A. Szabo, and I.N. Katz. The p-version of the finite element method. SIAM J. Numer. Anal., 18(3):515-545, 1981.
[7] I. Babuška and M. Suri. The hp-version of the finite element method with quasi-uniform meshes. RAIRO Modél. Math. Anal. Numér., 21:199-238, 1987.
[8] I. Babuška and B.Q. Guo. Optimal estimates for lower and upper bounds of approximation errors in the p-version of the finite element method in two dimensions. Numer. Math., 85(2):219-255, 2000.
[9] I. Babuška and B.Q. Guo. Direct and inverse approximation theorems for the p-version of the finite element method in the framework of weighted Besov spaces. Part I: Approximability of functions in the weighted Besov spaces. SIAM J. Numer. Anal., 39(5):1512-1538, 2001.
[10] I. Babuška and B.Q. Guo. Direct and inverse approximation theorems for the p-version of the finite element method in the framework of weighted Besov spaces. Part II: Optimal rate of convergence of the p-version finite element solutions. Math. Models Methods Appl. Sci., 12(5):689-719, 2002.
[11] C. Bernardi and Y. Maday. Spectral Methods. In P.G. Ciarlet and J.L. Lions, editors, Handbook of Numerical Analysis, Vol. V, Part 2, pages 209-485. North-Holland, Amsterdam, 1997.
[12] L. Bourdin and D. Idczak. A fractional fundamental lemma and a fractional integration by parts formula-applications to critical points of Bolza functionals and to linear boundary value problems. Adv. Differential Equations, 20(3-4):213-232, 2015.
[13] J.P. Boyd. Large-degree asymptotics and exponential asymptotics for Fourier, Chebyshev and Hermite coefficients. J. Eng. Math., 63:355-399, 2009.
[14] J.P. Boyd and R. Petschek. The asymptotic Chebyshev coefficients for functions with logarithmic endpoint singularities: mappings and singular basis functions. Appl. Math. Comput., 29:49-67, 1989.
[15] J. Bustoz and M.E.H. Ismail. On Gamma function inequalities. Math. Comput., 47(176):659-667, 1986.
[16] C. Canuto, M.Y. Hussaini, A. Quarteroni, and T.A. Zang. Spectral Methods: Fundamentals in Single Domains. Springer, Berlin, 2006.
[17] C. Canuto, R.H. Nochetto, and M. Verani. Contraction and optimality properties of adaptive Legendre-Galerkin methods: the one-dimensional case. Comput. Math. Appl., 67(4):752-770, 2014.
[18] S. Chen, J. Shen, and L.L. Wang. Generalized Jacobi functions and their applications to fractional differential equations. Math. Comp., 85(300):1603-1638, 2016.
[19] K. Diethelm. The Analysis of Fractional Differential Equations, Lecture Notes in Math., Vol. 2004. Springer, Berlin, 2010.
[20] D. Funaro. Polynomial Approximation of Differential Equations. Springer-Verlag, Berlin, 1992.
[21] P. Glazyrina and S. Tikhonov. Jacobi weights, fractional integration, and sharp Ulyanov inequalities. J. Approx. Theory, 195:122-140, 2015.
[22] I.S. Gradshteyn and I.M. Ryzhik. Table of Integrals, Series and Products, 8th Ed. Academic Press, New York, 2015.
[23] P. Grisvard. Singularities in Boundary Value Problems. Masson, Paris, 1992.
[24] B.Y. Guo, J. Shen, and L.L. Wang. Optimal spectral-Galerkin methods using generalized Jacobi polynomials. J. Sci. Comput., 27(1-3):305-322, 2006.
[25] B.Y. Guo, J. Shen, and L.L. Wang. Generalized Jacobi polynomials/functions and their applications. Appl. Numer. Math., 59(5):1011-1028, 2009.
[26] B.Y. Guo and L.L. Wang. Jacobi approximations in non-uniformly Jacobi-weighted Sobolev spaces. J. Approx. Theory, 128(1):1-41, 2004.
[27] J. Hesthaven, S. Gottlieb, and D. Gottlieb. Spectral Methods for Time-Dependent Problems. Cambridge University Press, Cambridge, 2007.
[28] P. Houston, C. Schwab, and E. Süli. Stabilized hp-finite element methods for first-order hyperbolic problems. SIAM J. Numer. Anal., 37:1618-1643, 2000.
[29] F.C. Klebaner. Introduction to Stochastic Calculus with Applications, 2nd Ed. Imperial College Press, London, 2005.
[30] I. Krasikov. On the Erdélyi-Magnus-Nevai conjecture for Jacobi polynomials. Constr. Approx., 28(2):113-125, 2008.
[31] G. Leoni. A First Course in Sobolev Spaces. Amer. Math. Soc., Providence, RI, 2009.
[32] H. Majidian. On the decay rate of Chebyshev coefficients. Appl. Numer. Math., 113:44-53, 2017.
[33] S.P. Mirevski, L. Boyadjiev, and R. Scherer. On the Riemann-Liouville fractional calculus, g-Jacobi functions and F-Gauss functions. Appl. Math. Comput., 187:315-325, 2007.
[34] P. Nevai, T. Erdélyi, and A.P. Magnus. Generalized Jacobi weights, Christoffel functions, and Jacobi polynomials. SIAM J. Math. Anal., 25(2):602-614, 1994.
[35] S. Nicaise. Jacobi polynomials, weighted Sobolev spaces and approximation results of some singularities. Math. Nachr., 213:117-140, 2000.
[36] F.W.J. Olver. Asymptotics and Special Functions. Academic Press, New York, 1974.
[37] F.W.J. Olver, D.W. Lozier, R.F. Boisvert, and C.W. Clark. NIST Handbook of Mathematical Functions. Cambridge University Press, New York, 2010.
[38] I. Podlubny. Fractional Differential Equations, volume 198 of Mathematics in Science and Engineering. Academic Press Inc., San Diego, CA, 1999.
[39] S.G. Samko, A.A. Kilbas, and O.I. Marichev. Fractional Integrals and Derivatives, Theory and Applications. Gordon and Breach Science Publishers, New York, 1993.
[40] C. Schwab. p- and hp-FEM. Theory and Application to Solid and Fluid Mechanics. Oxford University Press, New York, 1998.
[41] J. Shen, T. Tang, and L.L. Wang. Spectral Methods: Algorithms, Analysis and Applications, volume 41 of Series in Computational Mathematics. Springer-Verlag, Berlin, Heidelberg, 2011.
[42] G. Szegő. Orthogonal Polynomials, 4th Ed. Amer. Math. Soc., Providence, RI, 1975.
[43] L.N. Trefethen. Is Gauss quadrature better than Clenshaw-Curtis? SIAM Rev., 51(1):67-87, 2008.
[44] L.N. Trefethen. Approximation Theory and Approximation Practice. SIAM, Philadelphia, 2013.
[45] P.D. Tuan and D. Elliott. Coefficients in series expansions for certain classes of functions. Math. Comp., 26:213-232, 1972.
[46] V.A. Kozlov, V.G. Maz'ya, and J. Rossmann. Elliptic Boundary Value Problems in Domains with Point Singularities. Amer. Math. Soc., Providence, RI, 1997.
[47] W. Gui and I. Babuška. The h, p and h-p versions of the finite element method in 1 dimension, Part I: The error analysis of the p-version. Numer. Math., 49:577-612, 1986.
[48] W. Gui and I. Babuška. The h, p and h-p versions of the finite element method in 1 dimension, Part II: The error analysis of the h- and h-p versions. Numer. Math., 49:613-657, 1986.
[49] H.Y. Wang. Convergence rate and acceleration of Clenshaw-Curtis quadrature for functions with endpoint singularities. arXiv:1401.0638v2, 2014.
[50] R.L. Wheeden and A. Zygmund. Measure and Integral: An Introduction to Real Analysis, 2nd Ed. CRC Press, Boca Raton, 2015.
[51] S.H. Xiang and F. Bornemann. On the convergence rates of Gauss and Clenshaw-Curtis quadrature for functions of limited regularity. SIAM J. Numer. Anal., 50(5):2581-2587, 2012.
[52] S.H. Xiang, X.J. Chen, and H.Y. Wang. Error bounds for approximation in Chebyshev points. Numer. Math., 116:463-491, 2010.
[53] Y. Xu. Approximation by polynomials in Sobolev spaces with Jacobi weight. arXiv:1608.04114v1, 2016.
[54] M. Zayernouri and G.E. Karniadakis. Fractional Sturm-Liouville eigen-problems: theory and numerical approximation. J. Comput. Phys., 252:495-517, 2013.