THE APPROXIMATE INVERSE IN ACTION WITH AN APPLICATION TO COMPUTERIZED TOMOGRAPHY

ANDREAS RIEDER† AND THOMAS SCHUSTER‡§
Abstract. The approximate inverse is a scheme for obtaining stable numerical inversion formulae for linear operator equations of the first kind. Yet, in some applications the computation of a crucial ingredient, the reconstruction kernel, is time-consuming and unstable. It may even happen that the kernel does not exist for a particular semi-discrete system. To cure this dilemma we propose and analyze a technique that is based on a singular value decomposition of the underlying operator. The results are applied to the reconstruction problem in 2D-computerized tomography, where they enable the design of reconstruction filters and lead to a novel error analysis of the filtered backprojection algorithm.

Key words. Approximate inverse, mollification, Radon transform, computerized tomography, filtered backprojection

AMS subject classifications. 65J10, 65R10
1. Introduction. The approximate inverse is a regularization scheme which applies especially to under-determined (semi-discrete) systems. Yet, in some applications the numerical computation of the necessary reconstruction kernel $\psi_n$ is time-consuming and unstable. It may even happen that $\psi_n$ does not exist for a particular semi-discrete system. However, the reconstruction kernel $\psi$ of the underlying infinite dimensional (continuous) problem may be at hand. In this paper we propose a procedure to find a substitute for $\psi_n$ from $\psi$, and we show that this procedure is sound.

Following [13] we recall the concept of the approximate inverse, which belongs to the class of mollifier methods as considered, for instance, by Murio [19]. In a systematic way the approximate inverse generalizes a technique used by Grünbaum [5] and Davison and Grünbaum [3] for tomographic inversion.

Let $A : X \to Y$ be a continuous and injective operator between the real or complex infinite dimensional Hilbert spaces $X$ and $Y$. We want to find an $f \in X$ such that

(1.1)  $A_n f = g_n$

where $A_n : X \to \mathbb{C}^n$ and $g_n \in \mathbb{C}^n$ are defined via a mapping $\Psi_n : Y \to \mathbb{C}^n$ by $A_n = \Psi_n A$ and $g_n = \Psi_n g$ with $g \in R(A)$, the range of $A$. Let us assume, for the time being, that $A_n$ is continuous. The above setting describes most practical situations where the data can be recorded only in finitely many observation points. Problem (1.1) is under-determined and we can only search for its minimum-norm solution $f_n^\dagger$, that is,

(1.2)  $A_n^* A_n f_n^\dagger = A_n^* g_n \quad\text{and}\quad f_n^\dagger \in N(A_n)^\perp.$
† Institut für Wissenschaftliches Rechnen und Mathematische Modellbildung (IWRMM), Universität Karlsruhe, 76128 Karlsruhe, Germany, email: [email protected]
‡ Fachbereich Mathematik, Geb. 36, Universität des Saarlandes, 66041 Saarbrücken, Germany, email: [email protected]
§ supported by Deutsche Forschungsgemeinschaft under grant Lo310/4-1
Here $N(A_n)^\perp$ is the orthogonal complement of the null space of $A_n$. If the range of $A$ is non-closed in $Y$, that is, the generalized inverse of $A$ is unbounded, instabilities are very likely to appear when computing $f_n^\dagger$ directly from (1.2) under erroneous data $g_n$. This reasoning led Louis and Maaß [13] to the approximate inverse, where one tries to reconstruct moments of $f$: $\langle f, e_n^i \rangle_X$, $i = 1, \dots, m$, with suitable mollifiers $e_n^i$. In case $X = L^2(\Omega)$, $\Omega$ a domain in $\mathbb{R}^d$, one can think of the $e_n^i$'s as smooth approximations to $\delta$-distributions located at points $x^i \in \Omega$. The computation of the moments is achieved by approximating $e_n^i$ in the range of $A_n^*$. To any $e_n^i$ we associate a reconstruction kernel $\psi_n^i \in \mathbb{C}^n$ by minimizing the defect $\|A_n^* \psi_n^i - e_n^i\|_X$, that is, $\psi_n^i$ solves the normal equation

(1.3)  $A_n A_n^* \psi_n^i = A_n e_n^i.$

The above equation for $\psi_n^i$ is independent of the data $g_n$ and is therefore free of noise from measurement errors. We call $(e_n^i, \psi_n^i)$ a mollifier/reconstruction kernel pair for $A_n$. The operator $S_n : \mathbb{C}^n \to \mathbb{C}^m$,

(1.4)  $(S_n h)_i = \langle h, \psi_n^i \rangle_{\mathbb{C}^n}, \quad i = 1, \dots, m,$

is called the approximate inverse of $A_n$. Hence, $S_n g_n$ is an approximate solution of (1.1).

Lemma 1.1. If $g_n = A_n f$ then

(1.5)  $(S_n g_n)_i = \langle f_n^\dagger, e_n^i \rangle_X, \quad i = 1, \dots, m.$

Proof. The reconstruction kernels satisfy $A_n^* \psi_n^i = P_n e_n^i$ where $P_n : X \to X$ is the orthogonal projector onto $\overline{R(A_n^*)} = N(A_n)^\perp$. Hence,

$(S_n g_n)_i = \langle f, A_n^* \psi_n^i \rangle_X = \langle P_n f, e_n^i \rangle_X.$

Since $P_n f = f_n^\dagger$, see (1.2), the proof is finished.
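The scheme (1.3)-(1.4) and Lemma 1.1 can be checked directly in a fully discrete toy setting. The following Python sketch is illustrative only: the semi-discrete operator is a random stand-in, the sizes are hypothetical, and Euclidean inner products play the roles of those of $X$ and $\mathbb{C}^n$.

```python
import numpy as np

# A minimal, fully discrete sketch of (1.3)-(1.4); all choices below
# (operator, mollifier, grid sizes) are illustrative assumptions.
rng = np.random.default_rng(0)
N, n = 200, 20                          # dim of grid for X, number of observations
x = np.linspace(0.0, 1.0, N)
A_n = rng.standard_normal((n, N)) / np.sqrt(N)  # stand-in semi-discrete operator

# Mollifier e: a narrow bump around x = 0.5 (a smoothed delta distribution).
e = np.exp(-0.5 * ((x - 0.5) / 0.05) ** 2)

# Reconstruction kernel psi: least-squares solution of A_n^* psi ~ e,
# equivalently the normal equation (1.3)  A_n A_n^* psi = A_n e.
psi, *_ = np.linalg.lstsq(A_n.T, e, rcond=None)

# Approximate inverse (1.4): apply <., psi> to the data g_n = A_n f.
f = np.sin(np.pi * x)
g_n = A_n @ f
S_g = g_n @ psi                         # (S_n g_n)_i

# Lemma 1.1: (S_n g_n)_i equals <f_n^dagger, e> with f_n^dagger the
# minimum-norm solution of A_n f = g_n.
f_dagger = np.linalg.pinv(A_n) @ g_n
print(np.isclose(S_g, f_dagger @ e))    # True
```

Note that the kernel computation touches only $A_n$ and $e$, never the data, in line with the remark after (1.3).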
An interpretation of the approximate inverse as a regularization scheme and further details are given by Louis [12]. He also shows how invariances of $A$ improve the efficiency, see Remark 5.2 below.

For several reasons we wish to avoid solving (1.3): $A_n A_n^*$ may be densely populated and ill-conditioned, increasing $n$ calls for a completely new computation of the kernels, and invariances of $A$ do not show in $A_n A_n^*$ in general. We propose the following technique to approximate $\psi_n$ (we will drop the superscript $i$ whenever considering a single pair $(e_n, \psi_n)$). Suppose $(e, \psi)$ is a mollifier/reconstruction kernel pair for $A$, i.e., $A^* \psi = e$ ($A$ is injective!). Then we expect $\psi$ to be an approximate solution of (1.3) when $e_n$ is equal or close to $e$. In Section 3.1 we show convergence of $\psi$ to a solution of (1.3). We also analyze the situation when the mollifier $e$ is not in $R(A^*)$ (Section 3.2). Here we approximate $\psi_n$ by $\pi$ where $A^* \pi$ is close to $e_n$. We further discuss an implementable technique to construct $\pi$ from $e$.

In some applications, for instance if $A$ is the Radon transform, $A_n : D(A_n) \subset X \to \mathbb{C}^n$ is unbounded and $A_n^*$ does not exist, see Section 5. Consequently, the concept of the approximate inverse cannot be applied to (1.1). Louis and Schuster [16] replaced $A$ by
a truncated singular value decomposition, thus circumventing the problem. We favor another cure which is closely related to our findings for a bounded $A_n$ (Section 4).

In Section 5 we apply the results from the previous sections to the reconstruction problem in 2D-computerized tomography, mainly to illustrate our rather abstract results by a concrete application. As a byproduct we achieve a novel error estimate for the filtered backprojection algorithm as well as an alternative way to design reconstruction filters. To start this paper we introduce our technical set-up in the next section. Especially, the operator $A_n$ is defined precisely. In the appendix we prove an auxiliary mapping property of the Radon transform.

Hegland and Anderssen [6] investigated a mollification method akin to our approximate inverse approach. However, the details are completely different and they require stronger conditions on $A$; for instance, $A^{-1}$ has to be densely defined. Further, an implementation of their method requires explicit knowledge of the pre-images (under $A$) of the chosen basis functions. On the other hand, Hegland and Anderssen relate the regularization parameter (support width of the mollifier) to the discretization step size to bound the noise amplification error. This is an issue we do not address here.

2. Preliminaries. We specify our technical assumptions, which are required to hold throughout the paper if not indicated otherwise. The operator $A$ is supposed to have the mapping property (2.1): let there be Banach spaces $X_1$ and $Y_1$ such that the embeddings $X_1 \hookrightarrow X$ as well as $Y_1 \hookrightarrow Y$ are continuous, injective, and dense. Moreover,

(2.1)  $A : X_1 \to Y_1$ is continuous.

Let $Y_1'$ be the dual of $Y_1$. One may consider the spaces $X_1$ and $Y_1$ as abstract smoothness classes in $X$ and $Y$, respectively. We are now able to define the observation operator $\Psi_n : Y_1 \to \mathbb{C}^n$ precisely: given $n$ functionals $\psi_{n,k} \in Y_1'$, $k = 1, \dots, n$, let

(2.2)  $(\Psi_n v)_k := \langle \psi_{n,k}, v \rangle_{Y_1' \times Y_1}, \quad k = 1, \dots, n,$
where $\langle \cdot, \cdot \rangle_{Y_1' \times Y_1}$ denotes the duality pairing on $Y_1' \times Y_1$. In the applications we have in mind, $Y_1$ will typically be a Sobolev space of sufficient order such that point evaluations are continuous.

It will prove useful to transform equation (1.1) into an equivalent equation where $\mathbb{C}^n$ is replaced by a suitable subspace of $Y$, see (2.6) below. To this end we introduce a family $\{V_n\}_{n \in \mathbb{N}}$ of finite dimensional subspaces of $Y$ which are nested: $V_n \subset V_{n+1}$. Furthermore, each $V_n$ is spanned by basis elements $\varphi_{n,k}$, $k = 1, \dots, n$, which build a Riesz system w.r.t. $Y$, that is,

(2.3)  $\sum_{k=1}^{n} |a_k|^2 \;\lesssim\; \Big\| \sum_{k=1}^{n} a_k\, \varphi_{n,k} \Big\|_Y^2 \;\lesssim\; \sum_{k=1}^{n} |a_k|^2 \quad\text{for all } n \in \mathbb{N}.$
Our notation $A \lesssim B$ indicates the existence of a generic constant $c > 0$ such that $A \le c\, B$. The constant $c$ will not depend on the arguments of $A$ and $B$. In particular, the constants involved in (2.3) do not depend on $n$.
The spaces $\mathbb{C}^n$ and $V_n$ are related one-to-one by the operator $Q_n : \mathbb{C}^n \to V_n$, $Q_n a := \sum_{k=1}^{n} a_k \varphi_{n,k}$. The composition of $\Psi_n$ and $Q_n$ creates a new operator $\Pi_n : Y_1 \to V_n$ as follows:

$\Pi_n v := Q_n \Psi_n v = \sum_{k=1}^{n} \langle \psi_{n,k}, v \rangle_{Y_1' \times Y_1}\, \varphi_{n,k}.$

The operator $\Pi_n$ relates the observation operator $\Psi_n$ to $V_n$. Considered as an operator mapping $Y_1$ into $Y$, $\Pi_n$ is assumed to be uniformly bounded in $n$:

(2.4)  $\|\Pi_n\|_{Y_1 \to Y} \lesssim 1 \quad\text{as } n \to \infty.$

Our last ingredient is the approximation property (2.5): let there be a sequence $\{\gamma_n\} \subset [0,1]$ converging monotonically to zero such that

(2.5)  $\|v - \Pi_n v\|_Y \lesssim \gamma_n\, \|v\|_{Y_1} \quad\text{for all } v \in Y_1 \text{ as } n \to \infty.$

We understand $\{\gamma_n\}$ as optimal, that is, $\{\gamma_n\}$ is the fastest converging admissible sequence in (2.5). Now we apply $Q_n$ from the left to both sides of equation (1.1), yielding

(2.6)  $\widetilde{A}_n f = \widetilde{g}_n \quad\text{where } \widetilde{A}_n = Q_n A_n : X_1 \to V_n \text{ and } \widetilde{g}_n = Q_n g_n.$
For the solution of (1.1) and (2.6), respectively, by the approximate inverse we distinguish two scenarios. First, we assume that $A_n : D(A_n) \subset X \to \mathbb{C}^n$ is bounded where $D(A_n) := X_1$ is the domain of definition of $A_n$. Thus, $A_n \in L(X, \mathbb{C}^n)$. Typical examples are integral operators which are sufficiently smoothing.

Example 2.1. Let $A : L^2(0,1) \to L^2(0,1)$, $A f(x) := \int_0^1 k(x,y)\, f(y)\, dy$, where the kernel $k$ is such that $A : L^2(0,1) \to H^{1/2+\varepsilon}(0,1)$ is bounded for an $\varepsilon > 0$. On the Sobolev space $H^{1/2+\varepsilon}$ point evaluations are continuous functionals, so $\Psi_n g = n^{-1/2} \big( g(x_1), \dots, g(x_n) \big)^t$, $x_i \in\, ]0,1[$, is the right choice if we are able to observe $Af$ at the $x_i$. Thus, $A_n^* w(y) = n^{-1/2} \sum_i k(x_i, y)\, w_i$ and $(A_n A_n^*)_{i,j} = n^{-1} \int_0^1 k(x_i, y)\, k(x_j, y)\, dy$.
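Example 2.1 can be sketched numerically. In the snippet below the concrete kernel $k(x,y) = \min(x,y)$ and the midpoint quadrature are illustrative assumptions, not part of the example itself; any sufficiently smoothing kernel serves.

```python
import numpy as np

# Discretization of Example 2.1 with an assumed sample kernel k(x,y) = min(x,y).
n, Nq = 8, 400
xi = (np.arange(1, n + 1) - 0.5) / n            # observation points in ]0,1[
y = (np.arange(Nq) + 0.5) / Nq                  # midpoint quadrature nodes
K = np.minimum.outer(xi, y)                     # K[i, j] = k(x_i, y_j)

def Psi_n(g):
    """Observation operator: Psi_n g = n^{-1/2} (g(x_1), ..., g(x_n))^t."""
    return g(xi) / np.sqrt(n)

def A_n(f_vals):
    """A_n f = Psi_n A f, with A f(x) = int_0^1 k(x, y) f(y) dy (quadrature)."""
    return (K @ f_vals) / (Nq * np.sqrt(n))

# Matrix (A_n A_n^*)_{i,j} = n^{-1} int_0^1 k(x_i, y) k(x_j, y) dy (quadrature)
gram = (K @ K.T) / (Nq * n)

# Consistency check: A_n applied to f equals Psi_n applied to g = A f.
f_vals = np.sin(np.pi * y)
g = lambda s: np.array([np.sum(np.minimum(s_, y) * f_vals) / Nq
                        for s_ in np.atleast_1d(s)])
print(np.allclose(A_n(f_vals), Psi_n(g)))       # True
```

The matrix `gram` is the one whose ill-conditioning motivates avoiding the direct solution of (1.3).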
Second, we consider $A_n : D(A_n) \subset X \to \mathbb{C}^n$ unbounded. Hence, the Hilbert space adjoint of $A_n$ cannot be defined on all of $\mathbb{C}^n$ (otherwise $A_n$ would have been continuous already). Here the worst case is $D(A_n^*) = \{0\}$, so that the approximate inverse is not meaningfully defined for (1.1). This happens for the Radon transform, see §5.
3. Bounded semi-discrete operators $A_n$: approximating the discrete reconstruction kernel. Let (2.1) hold true with $X_1 = X$ (topologically):

(3.1)  $A : X \to Y_1$ is continuous,

that is, $A_n \in L(X, \mathbb{C}^n)$ and $\widetilde A_n \in L(X, Y)$. In the sequel we denote the adjoint of $A : X \to Y$ by $A^*$. We now study convergence of the minimum norm solution $f_n^\dagger$ of (1.1) as $n \to \infty$. From this we derive a kind of pointwise convergence of the approximate inverse $S_n$.
Lemma 3.1. If (3.1) holds then $\|A - \widetilde A_n\|_{X \to Y} \lesssim \gamma_n\, \|A\|_{X \to Y_1}$.

Proof. Since $\widetilde A_n x = \Pi_n A x$ for $x \in X$, one only needs to apply (2.5) and (3.1).
Theorem 3.2. Let $f_n^\dagger$ (1.2) be the minimum norm solution of (1.1) with $g_n = A_n f$ for $f \in X$. Then,

$\lim_{n \to \infty} \big\| f - f_n^\dagger \big\|_X = 0.$

Moreover, if the sequence of mollifiers $\{e_n^i\}_{n \in \mathbb{N}}$ converges to $e^i \in X$, $i = 1, \dots, m$, we have

(3.2)  $\lim_{n \to \infty} S_n A_n f = E f$

where

(3.3)  $E : X \to \mathbb{C}^m$ is defined by $(E f)_i := \langle f, e^i \rangle_X$, $i = 1, \dots, m$.

Proof. Recall that $f_n^\dagger = P_n f$ where $P_n : X \to X$ is the orthogonal projection onto $N(A_n)^\perp = N(\widetilde A_n)^\perp$. Due to Lemma 3.1 and the injectivity of $A$ we have $\bigcap_{n \in \mathbb{N}} N(\widetilde A_n) = \{0\}$. This yields the pointwise convergence of $P_n$ to the identity operator in $X$ as $n \to \infty$, thereby proving the first assertion. The second assertion follows readily from (1.5).
Choosing special mollifiers $e_n^i$, we will show below that $\|S_n A_n f - E f\|_\infty \lesssim \gamma_n \|f\|_X$ as $n \to \infty$, see Corollary 3.8. For an $e \in X$ we have either $e \in R(A^*)$ or $e \in \partial R(A^*)$ due to the injectivity of $A$ ($\partial R(A^*)$ is the topological boundary of $R(A^*)$). The first situation leads to reconstruction kernels $\psi$ satisfying $A^* \psi = e$. In Section 3.1 below we shall show that $\psi$ is an approximate solution of (1.3) for suitable $e_n$. If we cannot find a mollifier $e$ in the range of $A^*$, the equation $A^* \psi = e$ has no least squares solution. Thus, no reconstruction kernel is associated to $e$. We investigate the latter situation in Section 3.2.

3.1. The special case $e \in R(A^*)$. In a first lemma we derive a relation between the reconstruction kernels for $A_n$ and $\widetilde A_n$.

Lemma 3.3. Let $(e, \tilde\psi_n)$ be a mollifier/reconstruction kernel pair for $\widetilde A_n$ where $e \in X$ is arbitrary. Then $(e, Q_n^* \tilde\psi_n)$ is a mollifier/reconstruction kernel pair for $A_n$.

Proof. The assertion follows from $A_n^* Q_n^* \tilde\psi_n = \widetilde A_n^* \tilde\psi_n = P_n e$ where $P_n$ is as in the proof of Theorem 3.2.
Below we will need the Gramian matrix $G_n \in \mathbb{C}^{n \times n}$ relative to $\{\varphi_{n,1}, \dots, \varphi_{n,n}\}$. This matrix has the entries $(G_n)_{i,j} = \langle \varphi_{n,i}, \varphi_{n,j} \rangle_Y$. A quick calculation validates the equality $G_n \Psi_n z = Q_n^* \Pi_n z$ for all $z \in Y_1$.

Theorem 3.4. Adopt all assumptions specified in §2 and assume (3.1). Let $(e_n, \tilde\psi_n)$ be a mollifier/reconstruction kernel pair for $\widetilde A_n$ where $e_n = \widetilde A_n^* \pi$, $\pi \in Y_1$, and $\tilde\psi_n \in N(\widetilde A_n^*)^\perp$. Then,

(3.4)  $\big\| G_n \Psi_n \pi - Q_n^* \tilde\psi_n \big\|_{\mathbb{C}^n} \;\lesssim\; \gamma_n \|\pi\|_{Y_1} + \inf_{y \in R(\widetilde A_n)} \|\pi - y\|_Y$

as $n \to \infty$. Note that $(e_n, Q_n^* \tilde\psi_n)$ is a mollifier/reconstruction kernel pair for $A_n$.
Proof. Since $\|Q_n^*\|_{Y \to \mathbb{C}^n} \lesssim 1$ by (2.3) we may estimate

$\big\| G_n \Psi_n \pi - Q_n^* \tilde\psi_n \big\|_{\mathbb{C}^n} \le \big\| Q_n^* \Pi_n \pi - Q_n^* \pi \big\|_{\mathbb{C}^n} + \big\| Q_n^* \pi - Q_n^* \tilde\psi_n \big\|_{\mathbb{C}^n} \lesssim \|\Pi_n \pi - \pi\|_Y + \|\pi - \tilde\psi_n\|_Y \lesssim \gamma_n \|\pi\|_{Y_1} + \|\pi - \tilde\psi_n\|_Y$

where we used (2.5) in the final step. The assertion will be proved if we bound $\|\pi - \tilde\psi_n\|_Y$ by a multiple of $\inf\{ \|\pi - y\|_Y \mid y \in R(\widetilde A_n) \}$. Recall that $\tilde\psi_n$ is the unique solution in $N(\widetilde A_n^*)^\perp$ of the normal equation

(3.5)  $\widetilde A_n \widetilde A_n^* \tilde\psi_n = \widetilde A_n e_n = \widetilde A_n \widetilde A_n^* \pi.$

Let $\mathcal{P}_n : Y \to Y$ be the orthogonal projector onto $N(\widetilde A_n^*)^\perp$. Since $\mathcal{P}_n \pi$ solves (3.5) as well, we obtain $\tilde\psi_n = \mathcal{P}_n \pi$. As $N(\widetilde A_n^*)^\perp = R(\widetilde A_n)$ we proceed with

$\|\pi - \tilde\psi_n\|_Y = \|\pi - \mathcal{P}_n \pi\|_Y = \inf_{y \in R(\widetilde A_n)} \|\pi - y\|_Y$

which completes the proof.

Corollary 3.5. The assumptions are those of Theorem 3.4. If either $\pi \in R(A)$ or all $A_n$'s are onto, then

$\big\| G_n \Psi_n \pi - Q_n^* \tilde\psi_n \big\|_{\mathbb{C}^n} \lesssim \gamma_n \|\pi\|_{Y_1} \quad\text{as } n \to \infty.$

Proof. First we consider $\pi \in R(A)$. Let $\pi = A z$ for $z \in X$. Now,

$\inf_{y \in R(\widetilde A_n)} \|\pi - y\|_Y \le \|A z - \Pi_n A z\|_Y \lesssim \gamma_n \|A z\|_{Y_1}$

by the approximation property (2.5), since $\Pi_n A z = \widetilde A_n z \in R(\widetilde A_n)$. Second, if $A_n : X \to \mathbb{C}^n$ is onto we have $R(\widetilde A_n) = V_n$, which gives

$\inf_{y \in R(\widetilde A_n)} \|\pi - y\|_Y = \inf_{y \in V_n} \|\pi - y\|_Y \le \|\pi - \Pi_n \pi\|_Y \lesssim \gamma_n \|\pi\|_{Y_1}.$
In both cases the assertion follows from (3.4).

Even though $e_n = \widetilde A_n^* \pi$ converges to $e = A^* \pi$ due to Lemma 3.1, $e_n$ may be an unsuitable mollifier for fixed (possibly small) $n$. It seems natural to work with $e$ in the semi-discrete setting as well. This more general situation is considered in the following lemma, where we, however, allow a weighted norm in $\mathbb{C}^n$. Under the assumptions of Lemma 3.6 below, $\|A_n A_n^* \cdot\|_{\mathbb{C}^n}$ is a norm on $\mathbb{C}^n$ which is, in general, weaker than the Euclidean norm in the following sense: there exist positive constants $c_n$ and $C$ such that

$c_n \|z\|_{\mathbb{C}^n} \le \|A_n A_n^* z\|_{\mathbb{C}^n} \le C \|z\|_{\mathbb{C}^n} \quad\text{for all } z \in \mathbb{C}^n,$

where $C$ does not depend on $n$ and where $c_n$ tends to zero as $n$ grows.

Lemma 3.6. Let $e = A^* \pi$ for $\pi \in Y_1$. Further, let $(e, \psi_n)$ be a mollifier/reconstruction kernel pair for $\widetilde A_n$ where $\psi_n \in N(\widetilde A_n^*)^\perp$. Under the assumptions of Theorem 3.4 and provided all $A_n$'s are onto, we have

$\big\| A_n A_n^* \big( G_n \Psi_n \pi - Q_n^* \psi_n \big) \big\|_{\mathbb{C}^n} \lesssim \gamma_n \|\pi\|_{Y_1} \quad\text{as } n \to \infty.$
Proof. Let $\tilde\psi_n$ be as in Theorem 3.4. Hence,

$\big\| A_n A_n^* \big( G_n \Psi_n \pi - Q_n^* \psi_n \big) \big\|_{\mathbb{C}^n} \le \|A\|_{X \to Y_1}^2 \big\| G_n \Psi_n \pi - Q_n^* \tilde\psi_n \big\|_{\mathbb{C}^n} + \big\| A_n \widetilde A_n^* \tilde\psi_n - A_n \widetilde A_n^* \psi_n \big\|_{\mathbb{C}^n} \lesssim \gamma_n \|\pi\|_{Y_1} + \big\| \widetilde A_n \widetilde A_n^* \tilde\psi_n - \widetilde A_n \widetilde A_n^* \psi_n \big\|_Y$

where we used Corollary 3.5, (2.3), and the estimate

(3.6)  $\|A_n A_n^*\|_{\mathbb{C}^n \to \mathbb{C}^n} = \|A_n\|_{X \to \mathbb{C}^n}^2 \lesssim \|\Pi_n A\|_{X \to Y}^2 \lesssim \|A\|_{X \to Y_1}^2$

by (2.3) and (2.4). Since $\widetilde A_n \widetilde A_n^* \tilde\psi_n = \widetilde A_n \widetilde A_n^* \pi$ and $\widetilde A_n \widetilde A_n^* \psi_n = \widetilde A_n A^* \pi$ we obtain

$\big\| \widetilde A_n \widetilde A_n^* \tilde\psi_n - \widetilde A_n \widetilde A_n^* \psi_n \big\|_Y \lesssim \big\| \widetilde A_n^* - A^* \big\|_{Y \to X} \|\pi\|_Y.$

The assertion of Lemma 3.6 now follows from Lemma 3.1.

We discuss the implications of Corollary 3.5 for the approximate inverse $S_n$ of $A_n$ (1.4). Here one has $m$ mollifier/reconstruction kernel pairs $(e_n^i, \psi_n^i)$, $i = 1, \dots, m$, see (1.3). Now let $e_n^i = \widetilde A_n^* \pi^i$ where $\pi^i \in Y_1$, $i = 1, \dots, m$. Our investigations from above suggest replacing the (unknown) approximate inverse $S_n$ by the (computable) operator $\Theta_n$ defined by

(3.7)  $(\Theta_n b)_i = \big\langle b,\; G_n \Psi_n \pi^i \big\rangle_{\mathbb{C}^n}, \quad i = 1, \dots, m.$
As a direct consequence of Corollary 3.5, we can show that $\Theta_n$ is a reasonable substitute for $S_n$.

Theorem 3.7. The assumptions are those of Theorem 3.4. Further, let $(e_n^i, \psi_n^i)$, $i = 1, \dots, m$, be mollifier/reconstruction kernel pairs for $A_n$ where $e_n^i = \widetilde A_n^* \pi^i$. Assume that all $\pi^i$'s are in $Y_1$. If either all $\pi^i$'s are in $R(A)$ or all $A_n$'s are onto, then

(3.8)  $\|S_n A_n f - \Theta_n A_n f\|_\infty \lesssim \gamma_n \max_{1 \le i \le m} \|\pi^i\|_{Y_1}\, \|f\|_X \quad\text{as } n \to \infty.$

Proof. Let $(e_n^i, \tilde\psi_n^i)$ be the mollifier/reconstruction kernel pair for $\widetilde A_n$ where $\tilde\psi_n^i \in N(\widetilde A_n^*)^\perp$. From Lemma 3.3 we know that $(e_n^i, Q_n^* \tilde\psi_n^i)$ is a mollifier/reconstruction kernel pair for $A_n$. Note that $Q_n^* \tilde\psi_n^i$ may differ from the kernel $\psi_n^i$ used in $S_n$; however, $A_n^* \psi_n^i = A_n^* Q_n^* \tilde\psi_n^i$. Thus,

$(S_n A_n f)_i = \big\langle f, A_n^* \psi_n^i \big\rangle_X = \big\langle f, A_n^* Q_n^* \tilde\psi_n^i \big\rangle_X = \big\langle A_n f, Q_n^* \tilde\psi_n^i \big\rangle_{\mathbb{C}^n},$

which implies

$\big| (S_n A_n f)_i - (\Theta_n A_n f)_i \big| = \big| \big\langle A_n f,\; Q_n^* \tilde\psi_n^i - G_n \Psi_n \pi^i \big\rangle_{\mathbb{C}^n} \big| \le \|A_n f\|_{\mathbb{C}^n} \big\| Q_n^* \tilde\psi_n^i - G_n \Psi_n \pi^i \big\|_{\mathbb{C}^n}.$

The estimate (3.8) now follows from (3.6) and Corollary 3.5.

The following fact on the convergence speed of the approximate inverse is worth mentioning, compare (3.2).
Corollary 3.8. We have

$\|S_n A_n f - E f\|_\infty \lesssim \gamma_n \max_{1 \le i \le m} \|\pi^i\|_{Y_1}\, \|f\|_X \quad\text{as } n \to \infty.$

Proof. By the triangle inequality and by (3.8) it suffices to show that $\|\Theta_n A_n f - E f\|_\infty \lesssim \gamma_n \|f\|_X \max_{1 \le i \le m} \|\pi^i\|_{Y_1}$. This is obtained from

$\big( \Theta_n A_n f - E f \big)_i = \big\langle \Psi_n A f,\; G_n \Psi_n \pi^i \big\rangle_{\mathbb{C}^n} - \big\langle f, A^* \pi^i \big\rangle_X = \big\langle \Pi_n g, \Pi_n \pi^i \big\rangle_Y - \big\langle g, \pi^i \big\rangle_Y$

where $g = A f$. The difference on the right-hand side may now be estimated as follows:

$\big| \langle \Pi_n g, \Pi_n \pi^i \rangle_Y - \langle g, \pi^i \rangle_Y \big| \le \|\Pi_n g - g\|_Y \|\Pi_n \pi^i\|_Y + \|\Pi_n \pi^i - \pi^i\|_Y \|g\|_Y \lesssim \gamma_n \|g\|_{Y_1} \|\pi^i\|_{Y_1} \lesssim \gamma_n \|f\|_X \|\pi^i\|_{Y_1}$

where we used the uniform boundedness (2.4), the approximation property (2.5) and the continuity (3.1).

3.2. The general case $e \in X$. The range of $A^*$ is dense in $X$ due to the injectivity of $A$. Therefore, we will only assume that the mollifier can be approximated arbitrarily closely by an element of $R(A^*)$. Let $e^i \in X$, $i = 1, \dots, m$, be mollifiers. To any $\varepsilon_i > 0$ we can find a $\pi^i \in Y_1$ such that

(3.9)  $\|e^i - A^* \pi^i\|_X \le \varepsilon_i, \quad i = 1, \dots, m.$

Below we will demonstrate how to get $\pi^i$ from $e^i$ knowing a singular value decomposition of $A$. Since, in general, no reconstruction kernel is associated to $e^i$, there will be no counterparts of Theorems 3.4 and 3.7, respectively. Instead, we head directly for an estimate of $\Theta_n A_n f - E f$. Based on the $e^i$'s and the $\pi^i$'s from above, the operators $E$ (3.3) and $\Theta_n$ (3.7) are well defined.

Theorem 3.9. Adopt the assumptions specified in §2 and assume (3.1). Let the operators $E$ and $\Theta_n$ be defined as in (3.3) and (3.7), respectively, where $e^i \in X$ and $\pi^i \in Y_1$ are related by (3.9). Then,
(3.10)  $\|\Theta_n A_n f - E f\|_\infty \lesssim \Big( \gamma_n \max_{1 \le i \le m} \|\pi^i\|_{Y_1} + \max_{1 \le i \le m} \varepsilon_i \Big) \|f\|_X \quad\text{as } n \to \infty.$

Proof. By the triangle inequality and by (3.9) we get

$\big| (\Theta_n A_n f)_i - \langle f, e^i \rangle_X \big| \le \big| \big\langle \Psi_n A f,\; G_n \Psi_n \pi^i \big\rangle_{\mathbb{C}^n} - \big\langle f, A^* \pi^i \big\rangle_X \big| + \|f\|_X\, \varepsilon_i.$

We may now proceed as in the proof of Corollary 3.8.

We will now discuss the vital issue of constructing $\pi^i \in Y_1$ from $e^i \in X$ such that (3.9) holds with $\varepsilon_i$ arbitrarily small. For convenience let us suppress the superscript $i$. The tool we employ is a singular value decomposition (SVD) of the operator $A$. In medical imaging SVDs are explicitly known, see, e.g., [9, 10, 15, 17, 18, 21].
Let $A : X \to Y$ be a compact operator and let $\{v_k, u_k, \sigma_k \mid k \in \mathbb{N}_0\}$ be its singular system, that is,

$A x = \sum_{k=0}^{\infty} \sigma_k \langle x, v_k \rangle_X\, u_k.$

The sets of singular functions $\{v_k\}$ and $\{u_k\}$ are orthonormal bases of $X$ ($A$ is injective) and $\overline{R(A)}$, respectively. The positive numbers $\sigma_k$ are the singular values of $A$ satisfying $\lim_{k \to \infty} \sigma_k = 0$ (monotonically). The singular functions and the singular values are related via

$A v_k = \sigma_k u_k \quad\text{and}\quad A^* u_k = \sigma_k v_k.$

We assume that all $u_k$'s are in $Y_1$. For an arbitrary $e \in X$ we follow the approach of Dietz [4] and define

(3.11)  $\pi_M := \sum_{k=0}^{M-1} \sigma_k^{-1} \langle e, v_k \rangle_X\, u_k,$

which is an element of $Y_1$. Dietz [4] implemented (3.11) to solve the cone beam reconstruction problem in 3D utilizing the formula of Grangeat. Obviously,

(3.12)  $\|e - A^* \pi_M\|_X^2 = \sum_{k=M}^{\infty} \big| \langle e, v_k \rangle_X \big|^2 \;\to\; 0 \quad\text{as } M \to \infty.$
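A discrete analogue of (3.11)-(3.12) is easily sketched with a matrix SVD. In the snippet below the operator and the 'mollifier' are random placeholders; only the structure of the construction is the point.

```python
import numpy as np

# Sketch of (3.11): build pi_M from the SVD of a discretized compact operator
# A and observe the monotone decay (3.12) of ||e - A^* pi_M||.
rng = np.random.default_rng(1)
N = 120
A = rng.standard_normal((N, N))         # placeholder operator
U, s, Vt = np.linalg.svd(A)             # A = U diag(s) Vt; rows of Vt are v_k

e = rng.standard_normal(N)              # placeholder 'mollifier' vector

def pi_M(M):
    # pi_M = sum_{k < M} sigma_k^{-1} <e, v_k> u_k
    coeff = (Vt[:M] @ e) / s[:M]
    return U[:, :M] @ coeff

# A^T pi_M is the orthogonal projection of e onto span{v_0, ..., v_{M-1}},
# so the approximation error is monotonically decreasing in M.
errs = [np.linalg.norm(e - A.T @ pi_M(M)) for M in (10, 60, N)]
print(errs[0] >= errs[1] >= errs[2], errs[2] < 1e-6)   # True True
```

Note how the factors $\sigma_k^{-1}$ enter: the smaller the singular values kept, the larger $\pi_M$ becomes, which is exactly the trade-off quantified by Lemmas 3.10 and 3.11 below.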
Incorporating an abstract smoothness assumption on $e$, we are able to give convergence rates for $\|e - A^* \pi_M\|_X$ as $M \to \infty$.

Lemma 3.10. Suppose that $e \in R\big( (A^* A)^{\alpha} \big) = D\big( (A^* A)^{-\alpha} \big)$ for an $\alpha \ge 0$. Then,

$\lim_{M \to \infty} \sigma_M^{-2\alpha}\, \|e - A^* \pi_M\|_X = 0.$

Moreover, the following error estimate holds:

$\|e - A^* \pi_M\|_X \le \sigma_M^{\alpha} \sqrt{\|e\|_X\, \big\| (A^* A)^{-\alpha} e \big\|_X}.$

Proof. We have

$\|e - A^* \pi_M\|_X^2 = \sum_{k=M}^{\infty} \sigma_k^{-2\alpha} \big| \langle e, v_k \rangle_X \big| \; \sigma_k^{2\alpha} \big| \langle e, v_k \rangle_X \big| \le \sigma_M^{2\alpha} \Big( \sum_{k=M}^{\infty} \sigma_k^{-4\alpha} \big| \langle e, v_k \rangle_X \big|^2 \Big)^{1/2} \Big( \sum_{k=M}^{\infty} \big| \langle e, v_k \rangle_X \big|^2 \Big)^{1/2}$

and both assertions follow readily.

In view of (3.10) we realize that controlling the $\varepsilon_i$'s tells only half of the story. To learn the whole story we look at $\|\pi_M\|_{Y_1}$.
Lemma 3.11. Suppose that $e \in R\big( (A^* A)^{\alpha} \big) = D\big( (A^* A)^{-\alpha} \big)$ for an $\alpha \ge 0$. Further, let there exist a $\beta \ge 0$ such that $\|u_k\|_{Y_1} \lesssim \sigma_k^{-\beta}$ for all $k$. Then,

$\|\pi_M\|_{Y_1} \lesssim \Big( \sum_{k=0}^{M-1} \sigma_k^{4\alpha - 2(1+\beta)} \Big)^{1/2} \big\| (A^* A)^{-\alpha} e \big\|_X.$

Proof. The straightforward estimates

$\|\pi_M\|_{Y_1} \lesssim \sum_{k=0}^{M-1} \sigma_k^{-2\alpha} \big| \langle e, v_k \rangle_X \big| \; \sigma_k^{2\alpha - (1+\beta)} \le \Big( \sum_{k=0}^{M-1} \sigma_k^{-4\alpha} \big| \langle e, v_k \rangle_X \big|^2 \Big)^{1/2} \Big( \sum_{k=0}^{M-1} \sigma_k^{4\alpha - 2(1+\beta)} \Big)^{1/2}$

verify the claim.

Theorem 3.12. Let $A : X \to Y$ be compact with singular system $\{v_k, u_k, \sigma_k \mid k \in \mathbb{N}_0\}$. Assume that $\sigma_k \asymp (k+1)^{-\mu}$ for a $\mu > 0$ as $k \to \infty$ ($a \asymp b$ abbreviates $a \lesssim b \lesssim a$) and that $\|u_k\|_{Y_1} \lesssim \sigma_k^{-\beta}$ for a $\beta \ge 0$.

Assume the hypotheses of Theorem 3.9; in particular, let the operators $E$ and $\Theta_n$ be defined as in (3.3) and (3.7), respectively, where $e^i \in D\big( (A^* A)^{-\alpha} \big)$ and $\pi^i_{M_i}$ are related by (3.11). If $\alpha > (1+\beta)/2 + 1/(4\mu)$ and $M_i = M_i(n) \asymp \gamma_n^{-1/(\alpha\mu)}$ as $n \to \infty$ ($\gamma_n$ from (2.5)) then

(3.13)  $\|\Theta_n A_n f - E f\|_\infty \lesssim \gamma_n\, \|f\|_X \max_{1 \le i \le m} \big\| (A^* A)^{-\alpha} e^i \big\|_X \quad\text{as } n \to \infty.$

Proof. Since $\|e^i\|_X \lesssim \|(A^* A)^{-\alpha} e^i\|_X$ we have

$\varepsilon_i = \big\| e^i - A^* \pi^i_{M_i} \big\|_X \lesssim \sigma_{M_i}^{\alpha}\, \big\| (A^* A)^{-\alpha} e^i \big\|_X \asymp (M_i + 1)^{-\alpha\mu}\, \big\| (A^* A)^{-\alpha} e^i \big\|_X \lesssim \gamma_n\, \big\| (A^* A)^{-\alpha} e^i \big\|_X$

by Lemma 3.10 and our assumption on $M_i = M_i(n)$ as $n \to \infty$. Further, by Lemma 3.11,

$\big\| \pi^i_{M_i} \big\|_{Y_1} \lesssim \big\| (A^* A)^{-\alpha} e^i \big\|_X \Big( \sum_{k=0}^{\infty} (k+1)^{-\mu(4\alpha - 2(1+\beta))} \Big)^{1/2}$

where the series converges due to $\mu\big( 4\alpha - 2(1+\beta) \big) > 1$. Recalling Theorem 3.9 we are finished with the proof of (3.13).
4. Unbounded semi-discrete operators $A_n$. Here we consider (2.1) where $X_1$ is a proper subspace of $X$ with a stronger topology. As we will see in the next section, it may happen that $A_n : X_1 \subset X \to \mathbb{C}^n$ is unbounded. In the most extreme case we even have to deal with $D(A_n^*) = \{0\}$, that is, the approximate inverse with respect to the topology in $X$ is not defined for (1.1).

Basically, this leaves us with the situation already investigated in §3.2. Indeed, if $(e^i, \pi^i) \in X \times Y_1$, $i = 1, \dots, m$, are mollifier/reconstruction kernel pairs satisfying (3.9), then $E$ (3.3) as well as $\Theta_n$ (3.7) are well defined. Even for unbounded operators $A_n$ both Theorems 3.9 and 3.12 remain valid with a slight modification: we have to assume that $f \in X_1$, and in (3.10) as well as in (3.13) we have to replace $\|f\|_X$ by $\|f\|_{X_1}$.
5. Application to the reconstruction problem in 2D-computerized tomography. We apply our abstract results of the former sections to the reconstruction problem in 2D-computerized tomography, that is, the reconstruction of a function from its line integrals. For further applications of our results in vector and local tomography we refer to [24] and [22], respectively.

The underlying operator is the Radon transform $R$ mapping a function $f \in L^2(\Omega)$ to its line integrals. Here, $\Omega$ is the unit ball in $\mathbb{R}^2$ centered at the origin. More precisely,

(5.1)  $R f(s, \vartheta) := \int_{L(s,\vartheta) \cap \Omega} f(x)\, d\sigma(x).$

The lines are parameterized by $L(s, \vartheta) = \{ t\, \omega^\perp(\vartheta) + s\, \omega(\vartheta) \mid t \in \mathbb{R} \}$ where $s \in\, ]-1,1[$, $\omega(\vartheta) = (\cos\vartheta, \sin\vartheta)^t$ and $\omega^\perp(\vartheta) = (-\sin\vartheta, \cos\vartheta)^t$ for $\vartheta \in\, ]0, \pi[$. By this parameterization of lines we are dealing with the parallel scanning geometry. The Radon transform maps $X = L^2(\Omega)$ continuously to $Y = L^2(Z)$ where $Z := ]-1,1[ \times ]0,\pi[$, see, e.g., Natterer [20, Chap. II.1]. In the appendix we will verify the following mapping property, see Theorem A.2 below:

$R : H_0^{\alpha}(\Omega) \to H^{\alpha + 1/2}(Z)$ is continuous for any $\alpha \ge 0$.

The involved Sobolev spaces are defined as follows. By $H_0^{\alpha}(\Omega)$ we denote the closure of $C_0^\infty(\Omega)$, the space of infinitely differentiable functions with compact support in $\Omega$, with respect to the norm $\|f\|_\alpha^2 = \int \big( 1 + \|\xi\|^2 \big)^{\alpha} |\hat f(\xi)|^2\, d\xi$. Here, $\hat f$ is the Fourier transform of $f$. The space $H^\alpha(Z) = W_2^\alpha(Z)$ is an $L^2$-Sobolev space defined on the rectangular domain $Z$, see, e.g., Wloka [25]. Since point evaluations are continuous linear functionals on $H^\alpha(Z)$ for $\alpha > 1$ we set $X_1 = H_0^{1/2+\lambda}(\Omega)$ and $Y_1 = H^{1+\lambda}(Z)$ for a $\lambda > 0$, cf. (2.1).

For $q, p \in \mathbb{N}$ let $h_s = 1/q$ and $h_\vartheta = \pi/p$ be the discretization step sizes and set $s_i = i\, h_s$, $i = -q, \dots, q$, and $\vartheta_j = j\, h_\vartheta$, $j = 0, \dots, p$. Let $\ell \in \{1, 2\}$. With this index $\ell$ we will be able to distinguish between two different settings using a compact notation. To the pairs $(s_i, \vartheta_j)$ we associate the Dirac distributions $\delta_{i,j}^{(\ell)}$ given by

$\big\langle \delta_{i,j}^{(\ell)}, g \big\rangle_{Y_1' \times Y_1} := \varsigma_{i,j}^{(\ell)}\, g(s_i, \vartheta_j), \quad i = -q, \dots, q_\ell,\; j = 0, \dots, p_\ell,$

for any $g \in H^{1+\lambda}(Z)$ where $q_1 = q - 1$, $q_2 = q$, $p_1 = p - 1$ and $p_2 = p$. The $\varsigma_{i,j}^{(\ell)}$'s are normalization factors to be defined below in (5.3). We define the mapping $\Psi_{q,p}^{(\ell)} : H^{1+\lambda}(Z) \to \mathbb{R}^{n_\ell}$ according to (2.2) using the $\delta_{i,j}^{(\ell)}$'s. The respective dimensions are $n_1 = 2qp$ and $n_2 = (2q+1)(p+1)$.

Theorem 5.1. The operator $R_{q,p}^{(\ell)} := \Psi_{q,p}^{(\ell)} R : H_0^{1/2+\lambda}(\Omega) \subset L^2(\Omega) \to \mathbb{R}^{n_\ell}$ is unbounded for any $\lambda > 0$. Moreover, $D\big( (R_{q,p}^{(\ell)})^* \big) = \{0\}$.

Proof. We construct a sequence $\{f_r\}_{r \in \mathbb{N}} \subset H_0^{1/2+\lambda}(\Omega)$ with $\|f_r\|_{L^2(\Omega)} \le 1$ and $\|R_{q,p}^{(\ell)} f_r\|_{\mathbb{R}^{n_\ell}} \to \infty$ as $r \to \infty$.
We will define $f_r$ as the tensor product of two univariate functions $\phi_r$ and $\eta_r$. Let $\{\alpha_r\}$ and $\{\beta_r\}$ be monotonically decreasing zero sequences with $0 < \alpha_r < 1$, $0 < \beta_r < 1/2$, and $\alpha_r^2 + (1 - \beta_r)^2 < 1$.

Let $\phi_r \in C_0^\infty(-\alpha_r, \alpha_r)$ with values in $[0, 1/\sqrt{2\alpha_r}]$ such that $\phi_r(t) = 1/\sqrt{2\alpha_r}$ for $|t| \le \alpha_r/2$. Similarly, let $\eta_r \in C_0^\infty(-1 + \beta_r, 1 - \beta_r)$ with values in $[0, 1/\sqrt{2(1-\beta_r)}]$ such that $\eta_r(t) = 1/\sqrt{2(1-\beta_r)}$ for $|t| \le 1 - 2\beta_r$. Both functions can be constructed explicitly using a partition of unity, see, e.g., Wloka [25, Chap. 1.2]. For $f_r(x) := \phi_r(x_1)\, \eta_r(x_2)$ we have $0 < \|f_r\|_{L^2(\Omega)} = \|\phi_r\|_{L^2(\mathbb{R})} \|\eta_r\|_{L^2(\mathbb{R})} \le 1$ and $\operatorname{supp} f_r \subset [-\alpha_r, \alpha_r] \times [-1 + \beta_r, 1 - \beta_r] \subset \Omega$ because $\alpha_r^2 + (1 - \beta_r)^2 < 1$. Thus, $f_r \in H_0^{1/2+\lambda}(\Omega)$ for any $\lambda > 0$. Now consider $R f_r$ at $s_0 = 0$ and $\vartheta_0 = 0$:

$\big| R f_r(s_0, \vartheta_0) \big| = \int_{\mathbb{R}} f_r(0, t)\, dt = \phi_r(0) \int_{\mathbb{R}} \eta_r(t)\, dt \ge \phi_r(0) \int_{-1+2\beta_r}^{1-2\beta_r} \eta_r(t)\, dt = \frac{1 - 2\beta_r}{\sqrt{\alpha_r (1 - \beta_r)}} \;\xrightarrow{\,r \to \infty\,}\; \infty.$
Hence, $\|R_{q,p}^{(\ell)} f_r\|_{\mathbb{R}^{n_\ell}} \to \infty$ as $r \to \infty$.

We are now going to verify the second statement of Theorem 5.1. To this end observe that $\lim_{r \to \infty} R f_r(s_i, \vartheta_j) = 0$ if $(i,j) \ne (0,0)$. This limit holds since $\operatorname{supp} f_r$ `converges' to the line segment $L(0,0) \cap \Omega$. The construction principle from above can be repeated for any pair $(s_i, \vartheta_j)$, $|s_i| < 1$, leading to a sequence of functions $\{f_r^{i,j}\}_{r \in \mathbb{N}} \subset H_0^{1/2+\lambda}(\Omega)$ with $\|f_r^{i,j}\|_{L^2(\Omega)} \le 1$ and

$\lim_{r \to \infty} R f_r^{i,j}(s_k, \vartheta_l) = \begin{cases} \infty & : (k,l) = (i,j), \\ 0 & : \text{otherwise}. \end{cases}$

Assume that $0 \ne w \in D\big( (R_{q,p}^{(\ell)})^* \big)$. Then the linear functional $f \mapsto \big\langle R_{q,p}^{(\ell)} f, w \big\rangle_{\mathbb{R}^{n_\ell}}$ is continuous on $D(R_{q,p}^{(\ell)})$ with respect to the $L^2(\Omega)$-topology. With $w_{i,j} \ne 0$ we obtain

$\big\langle R_{q,p}^{(\ell)} f_r^{i,j}, w \big\rangle_{\mathbb{R}^{n_\ell}} = w_{i,j}\, \varsigma_{i,j}^{(\ell)}\, R f_r^{i,j}(s_i, \vartheta_j) + \sum_{(k,l) \ne (i,j)} w_{k,l}\, \varsigma_{k,l}^{(\ell)}\, R f_r^{i,j}(s_k, \vartheta_l),$

which implies $\lim_{r \to \infty} \big\langle R_{q,p}^{(\ell)} f_r^{i,j}, w \big\rangle_{\mathbb{R}^{n_\ell}} = \operatorname{sgn}(w_{i,j}) \cdot \infty$. However, this unboundedness contradicts $w \in D\big( (R_{q,p}^{(\ell)})^* \big)$.
Due to Theorem 5.1 the approximate inverse cannot be applied to the 2D-reconstruction problem: given $g_{q,p} \in \mathbb{R}^{n_\ell}$ find $f \in L^2(\Omega)$ such that

$R_{q,p}^{(\ell)} f = g_{q,p}.$

Here we are facing the situation from §4, that is, we have to replace the `non-existing' $S_{q,p}$ by $\Theta_{q,p}$, compare (1.3), (1.4) and (3.7), respectively.

Canonical candidates for approximation spaces related to $\Psi_{q,p}^{(\ell)}$ are the tensor product spline spaces $V_{q,p}^{(\ell)} = S_s^{(\ell)} \otimes S_\vartheta^{(\ell)}$, $\ell = 1, 2$. Here, $S_s^{(\ell)}$ and $S_\vartheta^{(\ell)}$ are either the piecewise constant ($\ell = 1$) or piecewise linear ($\ell = 2$) spline spaces w.r.t. the knot sequences $\{s_i\}$ and $\{\vartheta_j\}$, respectively. As basis in $V_{q,p}^{(\ell)}$ we choose the tensor product B-spline basis

(5.2)  $\big\{ B_{q,i}^{(\ell)} B_{p,j}^{(\ell)} / \varsigma_{i,j}^{(\ell)} \;\big|\; -q \le i \le q_\ell,\; 0 \le j \le p_\ell \big\}.$

The B-splines $B_{q,i}^{(\ell)} \in S_s^{(\ell)}$ and $B_{p,j}^{(\ell)} \in S_\vartheta^{(\ell)}$ are uniquely determined by ($\chi_D$ is the indicator function of the set $D$)

$B_{q,i}^{(1)} = \chi_{[s_i, s_{i+1}[}, \qquad B_{p,j}^{(1)} = \chi_{[\vartheta_j, \vartheta_{j+1}[},$

and

$B_{q,i}^{(2)}(s_k) = \begin{cases} 1 & : i = k \\ 0 & : \text{otherwise} \end{cases}, \qquad B_{p,j}^{(2)}(\vartheta_l) = \begin{cases} 1 & : j = l \\ 0 & : \text{otherwise} \end{cases},$

respectively. The normalization factors $\varsigma_{i,j}^{(\ell)}$ are just the $L^2$-norms of the B-splines:

(5.3)  $\varsigma_{i,j}^{(\ell)} := \big\| B_{q,i}^{(\ell)} B_{p,j}^{(\ell)} \big\|_{L^2(Z)}, \quad i = -q, \dots, q_\ell,\; j = 0, \dots, p_\ell.$

Thus, the normalized tensor product B-spline basis (5.2) is an $L^2(Z)$-Riesz system where the constants in the corresponding norm equivalence do not depend on $h_s$ or $h_\vartheta$, compare (2.3). We next define the interpolation operator $\Pi_{q,p}^{(\ell)} : H^{1+\lambda}(Z) \to V_{q,p}^{(\ell)}$ which links $V_{q,p}^{(\ell)}$ to $\Psi_{q,p}^{(\ell)}$:
$\Pi_{q,p}^{(\ell)} v := \sum_{i=-q}^{q_\ell} \sum_{j=0}^{p_\ell} \big( \Psi_{q,p}^{(\ell)} v \big)_{i,j}\, B_{q,i}^{(\ell)} B_{p,j}^{(\ell)} / \varsigma_{i,j}^{(\ell)} = \sum_{i=-q}^{q_\ell} \sum_{j=0}^{p_\ell} v(s_i, \vartheta_j)\, B_{q,i}^{(\ell)} B_{p,j}^{(\ell)}.$

Let $h = \max\{h_s, h_\vartheta\}$. Then the uniform boundedness

$\big\| \Pi_{q,p}^{(\ell)} v \big\|_{L^2(Z)} \lesssim \|v\|_{H^\alpha(Z)}, \quad \alpha > 1,$

as well as the approximation property

$\big\| v - \Pi_{q,p}^{(\ell)} v \big\|_{L^2(Z)} \lesssim h^{\min\{\alpha, \ell\}}\, \|v\|_{H^\alpha(Z)}, \quad \alpha > 1,$

hold true whenever the right-hand sides are finite. Both estimates are standard results from spline approximation theory, see, e.g., Schumaker [23, Chap. 12].

In the following we apply our results of Section 3.2 to the 2D-reconstruction problem. In a first step we therefore construct reconstruction kernels from mollifiers using an SVD of the Radon transform. Unfortunately, an SVD of $R : L^2(\Omega) \to L^2(Z)$ is not known explicitly. However, it can be shown that the Radon transform maps $L^2(\Omega)$ compactly into $L^2(\widetilde Z, w^{-1})$ where $\widetilde Z = ]-1,1[ \times ]0, 2\pi[$, see, e.g., Natterer [20, Chap. IV.3]. The weight function is given by $w(s) := \sqrt{1 - s^2}$ and acts on the first variable only. Let

$\big\{ v_{m,l},\, u_{m,l},\, \sigma_m \;\big|\; m \in \mathbb{N}_0,\; l \in \mathbb{Z},\; |l| \le m,\; m + l \in 2\mathbb{Z} \big\}$

be the singular system of $R : L^2(\Omega) \to L^2(\widetilde Z, w^{-1})$, that is,

(5.4)  $R f = \sum_{m=0}^{\infty} \;{\sum_{l=-m}^{m}}{}' \; \sigma_m \langle f, v_{m,l} \rangle_{L^2(\Omega)}\, u_{m,l}$

where $\sum'$ restricts the summation to those $l$'s with $m + l \in 2\mathbb{Z}$. Later on we will need explicit representations of the $\sigma_m$'s and the $u_{m,l}$'s only. We therefore give the analytic expressions:

(5.5)  $\sigma_m = 2 \sqrt{\frac{\pi}{m+1}} \quad\text{and}\quad u_{m,l}(s, \varphi) = \frac{1}{\pi}\, w(s)\, U_m(s)\, e^{\imath l \varphi}$

where $U_m(s) = \sin\big( (m+1) \arccos s \big) / \sin(\arccos s)$, $m \in \mathbb{N}_0$, are the Chebyshev polynomials of the second kind. For the $v_{m,l}$'s see Louis [11] or Natterer [20]. Denoting by $R^*$ and $R^\#$ the adjoints of $R : L^2(\Omega) \to L^2(Z)$ and $R : L^2(\Omega) \to L^2(\widetilde Z, w^{-1})$, respectively, we have

$2\, R^* w^{-1} u_{m,l} = R^\# u_{m,l} = \sigma_m v_{m,l}.$
The first equality can be checked by straightforward calculations. Given a mollifier $e \in L^2(\Omega)$ normalized by $\int_\Omega e(x)\, dx = 1$ and centered about the origin, we define

$\psi_M := 2 \sum_{m=0}^{M-1} {\sum_{l=-m}^{m}}{}' \; \sigma_m^{-1} \langle e, v_{m,l} \rangle_{L^2(\Omega)}\, w^{-1} u_{m,l},$

which then gives

(5.6)  $\big\| R^* \psi_M - e \big\|_{L^2(\Omega)}^2 = \sum_{m=M}^{\infty} {\sum_{l=-m}^{m}}{}' \; \big| \langle e, v_{m,l} \rangle_{L^2(\Omega)} \big|^2,$

compare (3.11) and (3.12). Let us assume from now on that the mollifier $e$ is a radial function, that is, $e(x) = e(\|x\|_{\mathbb{R}^2})$. Since $\langle e, v_{m,l} \rangle_{L^2(\Omega)} = 0$ for $l \ne 0$, the representation of $\psi_M$ simplifies to

(5.7)  $\psi_M = 2 \sum_{k=0}^{(M-1)/2} \sigma_{2k}^{-1} \langle e, v_{2k,0} \rangle_{L^2(\Omega)}\, w^{-1} u_{2k,0}.$

Hence, the reconstruction kernel does not depend on the angle $\vartheta$. Moreover, $\psi_M$ is an even function in $s$ since so are the Chebyshev polynomials of even degree. See Figure 5.1 for an example.
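The radial kernel (5.7) can be evaluated numerically from the analytic expressions (5.5). The expansion coefficients $c_k = \langle e, v_{2k,0} \rangle_{L^2(\Omega)}$ depend on the chosen mollifier and enter as placeholder inputs in this sketch:

```python
import numpy as np

# Evaluating the radial kernel (5.7): sigma_m = 2*sqrt(pi/(m+1)) and
# w(s)^{-1} u_{2k,0}(s) = U_{2k}(s)/pi, so only Chebyshev-U values are needed.
# The coefficients c_k = <e, v_{2k,0}> are mollifier-dependent placeholders.
def cheb_U(m, s):
    """Chebyshev polynomial of the second kind via the three-term recurrence."""
    U0, U1 = np.ones_like(s), 2.0 * s
    if m == 0:
        return U0
    for _ in range(m - 1):
        U0, U1 = U1, 2.0 * s * U1 - U0
    return U1

def psi_M(s, coeffs):
    # psi_M(s) = 2 * sum_k sigma_{2k}^{-1} c_k U_{2k}(s) / pi
    total = np.zeros_like(s)
    for k, c in enumerate(coeffs):
        sigma = 2.0 * np.sqrt(np.pi / (2 * k + 1))
        total += c / sigma * cheb_U(2 * k, s)
    return 2.0 * total / np.pi

s = np.linspace(-0.99, 0.99, 5)
vals = psi_M(s, coeffs=[1.0, 0.5, 0.25])        # placeholder coefficients
print(np.allclose(vals, vals[::-1]))            # True: psi_M is even in s
```

Only even-degree polynomials appear, which is why the kernel is even in $s$, as noted above.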
Figure 5.1: Reconstruction kernels (left) and radial parts of the related mollifiers (right). Solid curves: reconstruction kernel (5.7) corresponding to $e(x) = c_\delta\, p\big( \|x\|_{\mathbb{R}^2} / \delta \big)$ with normalizing constant $c_\delta$ and $\delta = 0.05$, where $p(t) = (1 - t^2)^4$ for $t \le 1$ and $p(t) = 0$ otherwise; from a numerical point of view we have $R^* \psi_{501} = e$. Dashed curves: reconstruction kernel (5.14) corresponding to the Gaussian $e(x) = (2\pi\delta^2)^{-1} \exp\big( -\|x\|_{\mathbb{R}^2}^2 / (2\delta^2) \big)$ with $\delta = 0.013$. Please note that both kernels are negative in $[0.2, 1]$ and monotonically increasing there.
Let $x^i \in \Omega$, $i = 1, \dots, m$, be the points in which we would like to reconstruct the moments $\langle f, e^i \rangle_{L^2(\Omega)}$ from the data $g_{q,p}$. The mollifiers $e^i$ are derived from $e$ by translation and dilation:

$e^i(\cdot) = T_1^{x^i} e(\cdot) := \frac{1}{4}\, e\Big( \frac{\cdot - x^i}{2} \Big).$

At the present time the choice of the dilation factor $2$ seems artificial; however, it will become clear in the proof of Lemma 5.3 below. The invariance property

(5.8)  $R^* T_2^{x^i} = T_1^{x^i} R^* \quad\text{where } T_2^{x^i} \psi(s, \vartheta) := \frac{1}{4}\, \psi\Big( \frac{s - (x^i)^t \omega(\vartheta)}{2},\; \vartheta \Big)$
suggests to define the reconstruction kernel $\nu_M^i$ associated to $e_i$ by

$\nu_M^i(s,\vartheta) := T^2_{x_i} \nu_M(s,\vartheta) = \tfrac14\, \nu_M\big( (s - x_i^{\rm t}\omega(\vartheta))/2 \big), \qquad i = 1,\dots,m.$

Thus,

$R^\ast \nu_M^i = T^1_{x_i} R^\ast \nu_M \longrightarrow T^1_{x_i} e = e_i \quad\text{as } M \to \infty.$

Remark 5.2. Thanks to the invariance property (5.8) only the kernel $\nu_M$ has to be computed and stored. The kernels for the reconstruction points $x_i$ are simply found by the action of $T^2_{x_i}$ on $\nu_M$.
Lemma 5.3. Let $v \in H^r(Z)$, $r \ge 0$. Then,

(5.9)  $\| T^2_x v \|_{H^r(Z)} \lesssim \| v \|_{H^r(Z)}$ uniformly in $x \in \Omega$.

Proof. The transformation $\Phi(s,\vartheta) := \big( (s - x^{\rm t}\omega(\vartheta))/2,\ \vartheta \big)$ maps $Z$ bijectively to $Z'$ where

$Z' = \Big\{ (\sigma,\varphi) \ \Big|\ \sigma \in \Big[ -\tfrac{1 + x^{\rm t}\omega(\varphi)}{2},\ \tfrac{1 - x^{\rm t}\omega(\varphi)}{2} \Big] \Big\}.$

Moreover, $\Phi$ is a $C^\infty$-diffeomorphism with $\det J_\Phi(s,\vartheta) = 1/2$ where $J_\Phi$ is the Jacobian of $\Phi$. Since $T^2_x v = v \circ \Phi / 4$ the assertion follows from transformation results for Sobolev norms, see, e.g., Wloka [25].
Define $E : L^2(\Omega) \to \mathbb{R}^m$ by $(Ef)_i := \langle f, e_i \rangle_{L^2(\Omega)}$, $i = 1,\dots,m$, see (3.3), and $\Psi^{(\ell)}_{q,p} : \mathbb{R}^{n_\ell} \to \mathbb{R}^m$ by

$\big( \Psi^{(\ell)}_{q,p}\, b \big)_i := \big\langle b,\ G^{(\ell)}_{q,p}\, \Lambda^{(\ell)}_{q,p}\, \nu_M^i \big\rangle_{\mathbb{R}^{n_\ell}}, \qquad \ell = 1, 2,$

where $\Lambda^{(\ell)}_{q,p}$ maps a function to its coefficient vector w.r.t. the B-spline basis in $V^{(\ell)}_{q,p}$. Here, $G^{(\ell)}_{q,p}$ is the Gramian matrix w.r.t. that basis, see (3.7). In particular, $G^{(1)}_{q,p}$ is the identity matrix. The process of evaluating $\Psi^{(\ell)}_{q,p}\, g_{q,p}$ coincides with (and may be implemented exactly as) the filtered backprojection algorithm in computerized tomography with filter function $\nu_M$, see, e.g., Natterer [20, Chap. V.1]. Indeed, for $\ell = 1$,

$\big( \Psi^{(1)}_{q,p}\, g_{q,p} \big)_i = \frac{\pi}{4\,q\,p} \sum_{l=-q}^{q-1} \sum_{j=0}^{p-1} g(s_l, \vartheta_j)\ \nu_M\big( (s_l - x_i^{\rm t}\omega(\vartheta_j))/2 \big).$
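The discrete sum above can be sketched in a few lines. In the sketch below, `g` and `nu` are placeholder callables for the sinogram and the filter $\nu_M$, and the parallel-geometry sampling $s_l = l/q$, $\vartheta_j = \pi j/p$ is an assumption of this illustration:

```python
import math

def fbp_value(g, nu, x, q, p):
    # evaluate (Psi^(1) g)_i at the reconstruction point x = (x1, x2):
    #   pi/(4 q p) * sum_{l=-q}^{q-1} sum_{j=0}^{p-1}
    #       g(s_l, theta_j) * nu((s_l - x . omega(theta_j)) / 2)
    total = 0.0
    for l in range(-q, q):
        s_l = l / q
        for j in range(p):
            th = math.pi * j / p
            omega = (math.cos(th), math.sin(th))
            total += g(s_l, th) * nu((s_l - (x[0] * omega[0] + x[1] * omega[1])) / 2.0)
    return math.pi / (4.0 * q * p) * total
```

With constant inputs $g \equiv 1$ and $\nu \equiv 1$ the double sum has $2qp$ terms, so the value is $\pi/2$ independently of $q$, $p$, and $x$ — a quick consistency check of the weighting.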
A reformulation of Theorem 3.12 in the present context therefore results in a novel error analysis of the filtered backprojection algorithm (Theorem 5.4 below). Compared to already known error estimates, see, e.g., Natterer [20, Th. V.1.1], we allow mild smoothness assumptions on the density distribution $f$. Further, the kernel needs to be known only approximately. The error bound clearly reflects the influence of the smoothness of $f$ and $e$ on the convergence rate.
Theorem 5.4. Let $f \in H_0^{1/2+\alpha}(\Omega)$ for $0 < \alpha \le 1$. Assume that the radial mollifier $e$ is in $H_0^{\beta}(\Omega)$ for $\beta > 4 + 2\alpha$. Let $\alpha_\ell = \min\{1+\alpha,\ \ell\}$ for $\ell = 1, 2$. If $M = M(h) \sim h^{-2\alpha_\ell/\beta}$ then

(5.10)  $\big\| \Psi^{(\ell)}_{q,p}\, \Lambda^{(\ell)}_{q,p}\, R f - E f \big\|_{\infty} \lesssim h^{\alpha_\ell}\, \| f \|_{1/2+\alpha}\, \| e \|_{\beta} \quad\text{as } h \to 0.$
Proof. We follow the line of proof of Theorem 3.9 to obtain

$\big| \big( \Psi^{(\ell)}_{q,p}\, \Lambda^{(\ell)}_{q,p}\, Rf \big)_i - \langle f, e_i \rangle_{L^2(\Omega)} \big|$
$\le \big| \big( \Psi^{(\ell)}_{q,p}\, \Lambda^{(\ell)}_{q,p}\, Rf \big)_i - \langle f, R^\ast \nu_M^i \rangle_{L^2(\Omega)} \big| + \big| \langle f, R^\ast \nu_M^i - e_i \rangle_{L^2(\Omega)} \big|$
$= \big| \big( \Psi^{(\ell)}_{q,p}\, \Lambda^{(\ell)}_{q,p}\, Rf \big)_i - \langle Rf, T^2_{x_i} \nu_M \rangle_{L^2(Z)} \big| + \big| \langle f, T^1_{x_i} ( R^\ast \nu_M - e ) \rangle_{L^2(\Omega)} \big|$
$\lesssim \| f \|_{1/2+\alpha}\, \Big( h^{\alpha_\ell}\, \| \nu_M \|_{H^{1+\alpha}(Z)} + \| R^\ast \nu_M - e \|_{L^2(\Omega)} \Big)$

where we used the invariance property (5.8) as well as (5.9). Since

$\| (R^\# R)^{-\beta/2}\, e \|_{L^2(\Omega)} \lesssim \| e \|_{\beta/2},$
see Lemma A.3 below, we immediately infer from (5.6) and from the proof of Lemma 3.10 that

$\| R^\ast \nu_M - e \|_{L^2(\Omega)} \lesssim \sigma_M^{\beta}\, \| e \|_{\beta/2} \lesssim (M+1)^{-\beta/2}\, \| e \|_{\beta} \lesssim h^{\alpha_\ell}\, \| e \|_{\beta}.$

It remains to bound $\| \nu_M \|_{H^{1+\alpha}(Z)}$, see (5.7). We will be guided by the proof of Lemma 3.11. Using the interpolation inequality for Sobolev norms, see, e.g., Lions and Magenes [7, Chap. 2.5], we may estimate as follows
$\| w^{-1} u_{2k,0} \|_{H^{1+\alpha}(Z)} \lesssim \| w^{-1} u_{2k,0} \|^{1-\alpha}_{H^{1}(Z)}\, \| w^{-1} u_{2k,0} \|^{\alpha}_{H^{2}(Z)} \eqsim \| U_{2k} \|^{1-\alpha}_{H^{1}(-1,1)}\, \| U_{2k} \|^{\alpha}_{H^{2}(-1,1)}.$
A bound on the Sobolev norms of the Chebyshev polynomials may be obtained from Markov's inequality (5.11), see, e.g., Lorentz [8, Chap. 3.3]: if $P_r$ is a polynomial of degree $r$ then

(5.11)  $|P_r'(s)| \le r^2 \max_{-1 \le t \le 1} |P_r(t)|, \qquad |s| \le 1.$

With $\max_{-1 \le t \le 1} |U_r(t)| = r+1$ we easily find that

$\| U_r \|_{H^1(-1,1)} \lesssim (r+1)^3 \eqsim \sigma_r^{-6} \quad\text{and}\quad \| U_r \|_{H^2(-1,1)} \lesssim (r+1)^5 \eqsim \sigma_r^{-10}$

which result in

$\| w^{-1} u_{2k,0} \|_{H^{1+\alpha}(Z)} \lesssim \sigma_{2k}^{-2(3+2\alpha)}.$
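Both ingredients used here, $\max_{[-1,1]}|U_r| = r+1$ and the Markov bound $|U_r'(s)| \le r^2(r+1)$, lend themselves to a numerical spot check (a sketch on a sample grid, of course not a proof). The derivative is evaluated exactly by differentiating the three-term recurrence:

```python
def U_and_dU(m, s):
    # value and derivative of the Chebyshev polynomial of the second kind,
    # via U_{n+1} = 2 s U_n - U_{n-1} and U'_{n+1} = 2 U_n + 2 s U'_n - U'_{n-1}
    u_prev, u = 1.0, 2.0 * s
    du_prev, du = 0.0, 2.0
    if m == 0:
        return u_prev, du_prev
    for _ in range(m - 1):
        u_prev, u = u, 2.0 * s * u - u_prev
        du_prev, du = du, 2.0 * u_prev + 2.0 * s * du - du_prev
    return u, du
```

On a fine grid one observes $|U_r| \le r+1$ (with equality at $s = \pm 1$) and $|U_r'| \le r^2(r+1)$, in line with (5.11).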
Recalling the representation (5.7) of $\nu_M$ we get

$\| \nu_M \|_{H^{1+\alpha}(Z)} \lesssim \sum_{k=0}^{(M-1)/2} \sigma_{2k}^{-7-4\alpha}\, |\langle e, v_{2k,0} \rangle_{L^2(\Omega)}|$
$\le \Big( \sum_{k=0}^{(M-1)/2} \sigma_{2k}^{-4\beta}\, |\langle e, v_{2k,0} \rangle_{L^2(\Omega)}|^2 \Big)^{1/2} \Big( \sum_{k=0}^{(M-1)/2} (2k+1)^{-2\beta+7+4\alpha} \Big)^{1/2}$
$\lesssim \| (R^\# R)^{-\beta}\, e \|_{L^2(\Omega)} \lesssim \| e \|_{\beta}.$

We used the fact that the second sum is bounded in $M$ since $2\beta - 7 - 4\alpha > 1$. The proof of Theorem 5.4 is now complete.

Because $h = \max\{h_s, h_\vartheta\}$ it is most efficient, in view of (5.10), to work with discretization step sizes $h_s$ and $h_\vartheta$ which coincide: $h_s = h_\vartheta$, that is, $p = q$. So we
recovered the optimal sampling relation for the parallel scanning geometry, see, e.g., Natterer [20, Chap. III].

In the remainder of this section we comment briefly on another way to design reconstruction kernels for the Radon transform, see (5.13) below. This approach is based on the inversion formula (5.12) of the Radon transform,

(5.12)  $e = (4\pi)^{-1}\, R^\ast\, I^{-1}\, R e$ for $e \in H_0^{\alpha}(\Omega)$, $\alpha \ge 1/2$,

see, e.g., Natterer [20, Chap. II.2]. The operator $I^{-1} : H_0^1(-1,1) \to L^2(\mathbb{R})$ is the Riesz potential: $(I^{-1} f)^{\wedge}(\sigma) = |\sigma|\, \widehat f(\sigma)$. In (5.12), the Riesz potential acts on the first variable of $Re$. Motivated by (5.12) we make the ansatz $\nu := I^{-1}\, R e\, / (4\pi)$. Assuming radial symmetry of $e$ the latter formula may be expressed as

(5.13)  $\nu(s) = (2\pi)^{-1} \int_0^{\infty} \sigma\, \widehat e\big( \sigma\, \omega(0) \big)\, \cos(s\sigma)\, d\sigma,$

compare Natterer [20, (1.5) on p. 103].

For instance, let $e$ be the Gaussian $e(x) = (2\pi)^{-1}\, \delta^{-2} \exp\big( -\delta^{-2} \|x\|^2_{\mathbb{R}^2}/2 \big)$, $\delta > 0$. Clearly, these mollifiers are not supported in $\Omega$. However, for $\delta$ small, they decay fast enough to consider them elements of $H_0^{1/2}(\Omega)$. Thus,

$\nu(s) = (2\pi)^{-2} \int_0^{\infty} \sigma\, \exp\big( -\delta^2 \sigma^2/2 \big)\, \cos(s\sigma)\, d\sigma = -(2\pi)^{-2}\, \delta^{-2} \int_0^{\infty} \frac{d}{d\sigma} \exp\big( -\delta^2 \sigma^2/2 \big)\, \cos(s\sigma)\, d\sigma.$

Now applying integration by parts and using formulae (7.4.7) and (7.1.3) from [1] yields

(5.14)  $\nu(s) = \frac{1}{4\pi^2 \delta^2} \Big( 1 + \sqrt{\pi/2}\ \frac{s}{\delta}\, \exp\big( -s^2/(2\delta^2) \big)\ i\, \mathrm{erf}\big( i\, s/(\sqrt2\, \delta) \big) \Big)$

where $\mathrm{erf}(t) = (2/\sqrt\pi) \int_0^t \exp(-z^2)\, dz$ is the error function. Figure 5.1 displays (5.14) for $\delta = 0.013$ (dashed curves).
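The closed form (5.14) can be sanity-checked against a direct quadrature of the Gaussian case of (5.13). In the sketch below, $\mathrm{erfi}(t) = -i\,\mathrm{erf}(it)$ is computed by elementary Simpson quadrature (no special-function library assumed), and the normalization $(2\pi)^{-2}$ is taken from the displayed intermediate integral:

```python
import math

def erfi(t, n=2000):
    # erfi(t) = -i*erf(i*t) = (2/sqrt(pi)) * int_0^t exp(z^2) dz,
    # by composite Simpson quadrature (n must be even)
    if t == 0.0:
        return 0.0
    h = t / n
    acc = 1.0 + math.exp(t * t)
    for k in range(1, n):
        acc += (4.0 if k % 2 else 2.0) * math.exp((k * h) ** 2)
    return (2.0 / math.sqrt(math.pi)) * acc * h / 3.0

def nu_closed(s, delta):
    # closed form (5.14) as reconstructed above; note i*erf(i*x) = -erfi(x)
    q = s / delta
    return (1.0 / (4.0 * math.pi ** 2 * delta ** 2)) * (
        1.0 - math.sqrt(math.pi / 2.0) * q * math.exp(-0.5 * q * q)
        * erfi(q / math.sqrt(2.0)))

def nu_quad(s, delta, n=200000):
    # direct quadrature: nu(s) = (2 pi)^{-2} int_0^inf sigma e^{-delta^2 sigma^2 / 2} cos(s sigma) dsigma
    sigma_max = 8.0 / delta          # integrand is negligible beyond this
    h = sigma_max / n
    total = 0.0
    for k in range(1, n):            # trapezoidal rule; endpoint values ~ 0
        sig = k * h
        total += sig * math.exp(-0.5 * (delta * sig) ** 2) * math.cos(s * sig)
    return total * h / (2.0 * math.pi) ** 2
```

Both evaluations agree closely for the parameter $\delta = 0.013$ used in Figure 5.1.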
Remark 5.5. We recommend the filter design methods from above whenever one wants to impose certain conditions on the mollifier, e.g., non-negativity and compact support, see Figure 5.1. The widely used Shepp-Logan filter and its non-compactly supported mollifier have frequent sign changes. To avoid artifacts in the reconstructions these oscillations require a certain fine-tuning: the dilation parameter $\delta$, compare (5.14), needs to be selected carefully. In contrast, the reconstructions based on the filters from Figure 5.1 are more robust with respect to the support width of the mollifier.

A. Appendix: a Sobolev space estimate of the 2D-Radon transform. In this appendix we show that the Radon transform (5.1) maps $H_0^{\alpha}(\Omega)$ boundedly to $H_p^{\alpha+1/2}(\widetilde Z)$, $\alpha \ge 0$, where $\widetilde Z = \,]-1,1[\, \times\, ]0,2\pi[$. The space $H_p^{\alpha}(\widetilde Z)$ is a Sobolev space of periodic functions. Let $g \in L^2(\widetilde Z)$ be expressed in its Fourier series, that is,

$g(s,\varphi) = \sum_{k \in \mathbb{Z}} \sum_{n \in \mathbb{Z}} \widehat g_{k,n}\, e^{i ( \pi k s + n \varphi )}, \qquad \widehat g_{k,n} = \frac{1}{4\pi} \int_{\widetilde Z} g(s,\varphi)\, e^{-i ( \pi k s + n \varphi )}\, d\varphi\, ds.$

Then, $g \in H_p^{\alpha}(\widetilde Z)$, $\alpha \ge 0$, iff the norm

$\| g \|^2_{p,\alpha} = \sum_{k \in \mathbb{Z}} \sum_{n \in \mathbb{Z}} \big( 1 + k^2 + n^2 \big)^{\alpha}\, |\widehat g_{k,n}|^2$
is finite.

Remark A.1. Interpreting periodic functions in $L^2(\widetilde Z)$ as functions defined on the torus $T \subset \mathbb{R}^3$ we may identify $H_p^{\alpha}(\widetilde Z)$ with the Sobolev space $H^{\alpha}(T)$ defined on the smooth compact manifold $T$ by means of local coordinates, see, e.g., Wloka [25].
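With the coefficient convention above, Parseval's identity reads $\|g\|^2_{L^2(\widetilde Z)} = 4\pi \sum_{k,n} |\widehat g_{k,n}|^2$. A small numerical sketch of this normalization, using the toy function $g(s,\varphi) = \cos(\pi s)\sin\varphi$ (not from the paper), whose squared $L^2$ norm is $\pi$ and whose only nonzero coefficients sit at $k, n \in \{\pm 1\}$ with modulus $1/4$:

```python
import math, cmath

def fourier_coeff(g, k, n, N=200):
    # hat g_{k,n} = (1/(4 pi)) * int_{-1}^{1} int_0^{2 pi}
    #                   g(s, phi) e^{-i (pi k s + n phi)} dphi ds,
    # computed by the midpoint rule in both variables
    hs, hp = 2.0 / N, 2.0 * math.pi / N
    total = 0.0 + 0.0j
    for a in range(N):
        s = -1.0 + (a + 0.5) * hs
        for b in range(N):
            phi = (b + 0.5) * hp
            total += g(s, phi) * cmath.exp(-1j * (math.pi * k * s + n * phi))
    return total * hs * hp / (4.0 * math.pi)

g = lambda s, phi: math.cos(math.pi * s) * math.sin(phi)
```

The midpoint rule is effectively exact here because the integrand is a trigonometric polynomial in both variables.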
In proving our main result in Theorem A.2 below we will benefit from a known Sobolev space estimate (A.1) for the Radon transform due to Louis and Natterer [14], see also [20, Chap. II.5]. Let $H^{(\alpha,0)}(\widetilde Z)$, $\alpha \ge 0$, be the tensor product $H_0^{\alpha}(-1,1)\ \widehat\otimes\ L^2(0,2\pi)$ (for the tensor product of Sobolev spaces see, e.g., Aubin [2]). Then, for $\alpha \ge 0$,

(A.1)  $\| R f \|_{H^{(\alpha+1/2,0)}(\widetilde Z)} \lesssim \| f \|_{\alpha}$ for all $f \in H_0^{\alpha}(\Omega)$.
In view of Remark A.1 our estimate (A.2) below is intrinsically different from a result of Natterer which looks similar at first glance, see [20, Chap. II, Th. 5.3].

Theorem A.2. The Radon transform maps $H_0^{\alpha}(\Omega)$ continuously to $H_p^{\alpha+1/2}(\widetilde Z)$, $\alpha \ge 0$, that is,

(A.2)  $\| R f \|_{p,\alpha+1/2} \lesssim \| f \|_{\alpha}$ for all $f \in H_0^{\alpha}(\Omega)$.
Proof. Let $g = Rf$. Since $(1 + k^2 + n^2)^{\alpha} \lesssim (1 + k^2)^{\alpha} + (1 + n^2)^{\alpha}$ for $\alpha \ge 0$ and for all $k, n \in \mathbb{Z}$ we have that

$\| g \|_{p,\alpha+1/2} \lesssim A(g) + B(g)$

with

$A(g)^2 = \sum_{k,n \in \mathbb{Z}} (1 + k^2)^{\alpha+1/2}\, |\widehat g_{k,n}|^2 \quad\text{and}\quad B(g)^2 = \sum_{k,n \in \mathbb{Z}} (1 + n^2)^{\alpha+1/2}\, |\widehat g_{k,n}|^2.$
We will bound $A(g)$ as well as $B(g)$ by a multiple of $\| f \|_{\alpha}$. Both relations

$\sum_{n \in \mathbb{Z}} |\widehat g_{k,n}|^2 = \frac{1}{8\pi} \int_0^{2\pi} \Big| \int_{-1}^{1} g(s,\varphi)\, e^{-i \pi k s}\, ds \Big|^2\, d\varphi$

and

$\sum_{k \in \mathbb{Z}} |\widehat g_{k,n}|^2 = \frac{1}{8\pi^2} \int_{-1}^{1} \Big| \int_0^{2\pi} g(s,\varphi)\, e^{-i n \varphi}\, d\varphi \Big|^2\, ds$

follow from Parseval's identity. Hence,

$A(g)^2 = \frac{1}{8\pi} \int_0^{2\pi} \sum_{k \in \mathbb{Z}} (1 + k^2)^{\alpha+1/2} \Big| \int_{-1}^{1} g(s,\varphi)\, e^{-i \pi k s}\, ds \Big|^2\, d\varphi \lesssim \| g \|^2_{H^{(\alpha+1/2,0)}(\widetilde Z)} \lesssim \| f \|^2_{\alpha}$

where the first inequality follows from a Sobolev norm equivalence given by Natterer in [20, Chap. VII, Lem. 4.4]. The second inequality comes from (A.1).
Estimating $B(g)$ is a little more involved. From the singular value expansion (5.4) of $g = Rf$ we deduce that

$\frac{1}{2\pi} \int_0^{2\pi} g(s,\varphi)\, e^{-i n \varphi}\, d\varphi = \sum_{m=0}^{\infty}\ \sideset{}{^\ast}\sum_{\ell=-m}^{m}\ g_{m,\ell}\, \pi^{-1} w(s)\, U_m(s)\ \underbrace{\frac{1}{2\pi} \int_0^{2\pi} e^{-i (n-\ell) \varphi}\, d\varphi}_{=\ \delta_{n,\ell}} = \pi^{-1} \sum_{\kappa=0}^{\infty} g_{|n|+2\kappa,\,n}\, w(s)\, U_{|n|+2\kappa}(s)$

with $g_{m,\ell} = \sigma_m\, \langle f, v_{m,\ell} \rangle_{L^2(\Omega)}$. Thus,

$B(g)^2 = \frac{1}{2\pi^2} \sum_{n \in \mathbb{Z}} (1 + n^2)^{\alpha+1/2} \int_{-1}^{1} \Big| \sum_{\kappa=0}^{\infty} g_{|n|+2\kappa,\,n}\, w(s)\, U_{|n|+2\kappa}(s) \Big|^2\, ds$
$\le \frac{1}{2\pi^2} \sum_{n \in \mathbb{Z}} (1 + n^2)^{\alpha+1/2} \int_{-1}^{1} \Big| \sum_{\kappa=0}^{\infty} g_{|n|+2\kappa,\,n}\, w(s)\, U_{|n|+2\kappa}(s) \Big|^2\, w^{-1}(s)\, ds$
$= \frac{1}{4\pi} \sum_{n \in \mathbb{Z}} (1 + n^2)^{\alpha+1/2} \sum_{\kappa=0}^{\infty} |g_{|n|+2\kappa,\,n}|^2$

because $\big\{ \sqrt{2/\pi}\ w(\cdot)\, U_m(\cdot)\ \big|\ m \in \mathbb{N}_0 \big\}$ is an orthonormal basis in $L^2(]-1,1[,\ w^{-1})$.
Further,

$B(g)^2 \lesssim \sum_{n \in \mathbb{Z}} (1 + |n|)^{2\alpha+1} \sum_{\kappa=0}^{\infty} \sigma^2_{|n|+2\kappa}\, |\langle f, v_{|n|+2\kappa,\,n} \rangle_{L^2(\Omega)}|^2$
$\lesssim \sum_{n \in \mathbb{Z}} \sum_{\kappa=0}^{\infty} \sigma^{-4\alpha}_{|n|+2\kappa}\, |\langle f, v_{|n|+2\kappa,\,n} \rangle_{L^2(\Omega)}|^2 \le \sum_{m=0}^{\infty}\ \sideset{}{^\ast}\sum_{\ell=-m}^{m}\ \sigma_m^{-4\alpha}\, |\langle f, v_{m,\ell} \rangle_{L^2(\Omega)}|^2.$

In Lemma A.3 below we bound the latter double sum by a multiple of $\| f \|^2_{\alpha}$ which finally proves (A.2).

Lemma A.3. The operator $(R^\# R)^{-\alpha} : H_0^{\alpha}(\Omega) \to L^2(\Omega)$, $\alpha \ge 0$, is continuous, that is,

(A.3)  $\| (R^\# R)^{-\alpha} f \|_0^2 = \sum_{m=0}^{\infty}\ \sideset{}{^\ast}\sum_{\ell=-m}^{m}\ \sigma_m^{-4\alpha}\, |\langle f, v_{m,\ell} \rangle_{L^2(\Omega)}|^2 \lesssim \| f \|^2_{\alpha}$
for all $f \in H_0^{\alpha}(\Omega)$.

Proof. Since $C_0^{\infty}(\Omega)$ is dense in $H_0^{\alpha}(\Omega)$ it suffices to consider $f \in C_0^{\infty}(\Omega)$. Let us start with $\alpha = 2r + 1/2$, $r \in \mathbb{N}_0$. We have
(A.4)  $\langle f, v_{m,\ell} \rangle_{L^2(\Omega)} = \sigma_m^{-1}\, \langle f, R^\# u_{m,\ell} \rangle_{L^2(\Omega)} = \sigma_m^{-1}\, \langle Rf, u_{m,\ell} \rangle_{L^2(\widetilde Z,\, w^{-1})}.$

Further, $Rf(\cdot,\varphi) \in C_0^{\infty}(-1,1)$ for any $\varphi \in [0,2\pi]$, see, e.g., Natterer [20]. We estimate the inner product on the right-hand side of (A.4). Let $g_\varphi(s) = Rf(s,\varphi)$. Integration
by parts yields

$\int_{-1}^{1} g_\varphi(s)\, U_m(s)\, ds = \int_0^{\pi} g_\varphi(\cos\vartheta)\, \sin\big( (m+1)\vartheta \big)\, d\vartheta = -\frac{1}{m+1} \int_0^{\pi} g_\varphi'(\cos\vartheta)\, \sin\vartheta\, \cos\big( (m+1)\vartheta \big)\, d\vartheta$

where we used that $g_\varphi(-1) = g_\varphi(1) = 0$. Repeating the integration by parts $2r+1$ times we obtain

$\int_{-1}^{1} g_\varphi(s)\, U_m(s)\, ds = (m+1)^{-2r-1} \int_0^{\pi} \Phi_r(\vartheta,\varphi)\, \cos\big( (m+1)\vartheta \big)\, d\vartheta$

with $\Phi_r(\vartheta,\varphi) = \sin\vartheta\, \sum_{i=1}^{2r+1} \frac{\partial^i}{\partial s^i} Rf(\cos\vartheta,\varphi)\ p_{i-1}(\vartheta)$ where $p_{i-1}$ is a real trigonometric polynomial of degree $i-1$ at most. Thus,

$\langle g, u_{m,\ell} \rangle_{L^2(\widetilde Z,\, w^{-1})} = \pi^{-1} (m+1)^{-2r-1}\ \underbrace{\int_0^{2\pi} \int_0^{\pi} \Phi_r(\vartheta,\varphi)\, \cos\big( (m+1)\vartheta \big)\, d\vartheta\ e^{-i \ell \varphi}\, d\varphi}_{=:\ c_{m,\ell}(Rf)}$

implying

$|\langle f, v_{m,\ell} \rangle_{L^2(\Omega)}|^2 \lesssim \sigma_m^{8r+2}\, |c_{m,\ell}(Rf)|^2 = \sigma_m^{4\alpha}\, |c_{m,\ell}(Rf)|^2$

by (5.5) and (A.4). Summing up results in
(A.5)  $\sum_{m=0}^{\infty}\ \sideset{}{^\ast}\sum_{\ell=-m}^{m}\ \sigma_m^{-4\alpha}\, |\langle f, v_{m,\ell} \rangle_{L^2(\Omega)}|^2 \lesssim \sum_{m=0}^{\infty}\ \sideset{}{^\ast}\sum_{\ell=-m}^{m}\ |c_{m,\ell}(Rf)|^2.$

Since $\big\{ \pi^{-1} \cos(n\vartheta)\, e^{i \ell \varphi}\ \big|\ n \in \mathbb{N},\ \ell \in \mathbb{Z} \big\}$ is an orthonormal system in $L^2([0,\pi] \times [0,2\pi])$ we get from the Bessel inequality that

$\sum_{m=0}^{\infty}\ \sideset{}{^\ast}\sum_{\ell=-m}^{m}\ |c_{m,\ell}(Rf)|^2 \le \pi^2 \int_0^{2\pi} \int_0^{\pi} |\Phi_r(\vartheta,\varphi)|^2\, d\vartheta\, d\varphi$
$\lesssim \int_0^{2\pi} \int_0^{\pi} \sin^2\vartheta\ \Big( \sum_{i=1}^{2r+1} \Big| \frac{\partial^i}{\partial s^i} Rf(\cos\vartheta,\varphi)\, p_{i-1}(\vartheta) \Big| \Big)^{\!2}\, d\vartheta\, d\varphi$
$\lesssim \int_0^{2\pi} \int_{-1}^{1} \Big( \sum_{i=1}^{2r+1} \Big| \frac{\partial^i}{\partial s^i} Rf(s,\varphi) \Big| \Big)^{\!2}\, ds\, d\varphi \lesssim \| Rf \|^2_{H^{(\alpha+1/2,0)}(\widetilde Z)} \lesssim \| f \|^2_{\alpha}.$
In the last step we used (A.1). The latter estimate together with (A.5) verifies (A.3) for $\alpha = 2r + 1/2$. For arbitrary $\alpha \ge 0$ one can use arguments from interpolation theory of Sobolev spaces, see, e.g., Lions and Magenes [7, Chap. 5.1].

Acknowledgment. We thank Rainer Dietz for many fruitful discussions. We are further indebted to Alfred K. Louis for his suggestions which helped to improve the presentation of our results. Our paper benefits from improvements suggested by one of the referees.
REFERENCES

[1] M. Abramowitz and I. Stegun, eds., Handbook of Mathematical Functions, Dover, New York, 1972.
[2] J.-P. Aubin, Applied Functional Analysis, Pure & Applied Mathematics, John Wiley & Sons, Chichester, 1979.
[3] M. E. Davison and F. A. Grünbaum, Tomographic reconstructions with arbitrary directions, Comm. Pure Appl. Math., 34 (1981), pp. 77-120.
[4] R. Dietz, Die Approximative Inverse als Rekonstruktionsmethode in der Röntgen-Computertomographie (The approximate inverse as a reconstruction method in X-ray computerized tomography), PhD thesis, Universität des Saarlandes, Fachbereich Mathematik, 66041 Saarbrücken, Germany, 1999.
[5] F. A. Grünbaum, Reconstruction with arbitrary directions: dimensions two and three, in Mathematical Aspects of Computerized Tomography, G. T. Herman and F. Natterer, eds., Springer-Verlag, Heidelberg, 1980, pp. 112-126.
[6] M. Hegland and R. S. Anderssen, A mollification framework for improperly posed problems, Numer. Math., 78 (1998), pp. 549-575.
[7] J. L. Lions and E. Magenes, Non-Homogeneous Boundary Value Problems and Applications, Vol. 1, Springer-Verlag, New York, 1972.
[8] G. G. Lorentz, Approximation of Functions, Holt, Rinehart and Winston, New York, 1966.
[9] A. K. Louis, Orthogonal function series expansion and the null space of the Radon transform, SIAM J. Math. Anal., 15 (1984), pp. 429-440.
[10] A. K. Louis, Incomplete data problems in X-ray computerized tomography I: singular value decomposition of the limited angle transform, Numer. Math., 48 (1986), pp. 251-262.
[11] A. K. Louis, Inverse und schlecht gestellte Probleme, Studienbücher Mathematik, B. G. Teubner, Stuttgart, Germany, 1989. English translation in preparation.
[12] A. K. Louis, Approximate inverse for linear and some nonlinear problems, Inverse Problems, 12 (1996), pp. 175-190.
[13] A. K. Louis and P. Maaß, A mollifier method for linear operator equations of the first kind, Inverse Problems, 6 (1990), pp. 427-440.
[14] A. K. Louis and F. Natterer, Mathematical problems in computerized tomography, Proceedings IEEE, 71 (1983), pp. 379-389.
[15] A. K. Louis and A. Rieder, Incomplete data problems in X-ray computerized tomography II: truncated projections and region-of-interest tomography, Numer. Math., 56 (1989), pp. 371-383.
[16] A. K. Louis and T. Schuster, A novel filter design technique in 2D computerized tomography, Inverse Problems, 12 (1996), pp. 685-696.
[17] P. Maaß, The X-ray transform: singular value decomposition and resolution, Inverse Problems, 3 (1987), pp. 729-741.
[18] P. Maaß, The interior Radon transform, SIAM J. Appl. Math., 52 (1992), pp. 710-724.
[19] D. A. Murio, The Mollification Method and the Numerical Solution of Ill-Posed Problems, Wiley, New York, 1993.
[20] F. Natterer, The Mathematics of Computerized Tomography, Wiley, Chichester, 1986.
[21] E. T. Quinto, Singular value decomposition and inversion methods for the exterior Radon transform and a spherical transform, J. Math. Anal. Appl., 95 (1985), pp. 437-448.
[22] A. Rieder, R. Dietz, and T. Schuster, Approximate inverse meets local tomography. Zipped postscript file available under URL: www.num.uni-sb.de/paper/rieder/artikel.html, April 1999.
[23] L. L. Schumaker, Spline Functions: Basic Theory, Pure & Applied Mathematics, John Wiley & Sons, New York, 1981.
[24] T. Schuster, Schnelle Rekonstruktion von Geschwindigkeitsfeldern und Theorie der Approximativen Inversen (Fast reconstruction of velocity fields and theory of the approximate inverse). Work in progress, 1999.
[25] J. Wloka, Partial Differential Equations, Cambridge University Press, Cambridge, U.K., 1987.