IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 3, pp. 253-259, March 1997.

Acquiring 3D Models of Non-Rigid Moving Objects from Time and Viewpoint Varying Image Sequences: A Step Toward Left Ventricle Recovery

Yoshinobu Sato†, Masamitsu Moriyama†, Masayuki Hanayama‡, Hiroaki Naito†, Shinichi Tamura†

† Division of Functional Diagnostic Imaging, Biomedical Research Center, Osaka University Medical School, Suita, Osaka, 565, Japan
Tel: +81-6-879-3564  Fax: +81-6-879-3569  E-mail: [email protected]

‡ Department of Radiology, Osaka University Hospital

Abstract

This paper describes a method for the accurate recovery of time-varying 3D shapes with a known cycle from images taken at different viewpoints as well as different times, aiming at the recovery of left ventricular shapes. Our recovery method is based on the integration of apparent contours from different viewpoints. We perform direct fitting to a 4D closed surface model based on B-splines so as to deal with fragmented contours such as those extracted from x-ray cineangiocardiograms. The method is quantitatively evaluated using synthesized and real image sequences.

Index terms: 3D shape recovery, non-rigid object, left ventricle, angiographic image analysis, multiple view integration, time-viewpoint space.

1 Introduction

Acquisition of 3D shape models is one of the most important topics in the computer vision field because of increasing demands for graphic displays in virtual spaces as well as for quantitative shape analysis in medical diagnosis and industrial inspection. For the purpose of acquiring 3D shape models directly from images, the use of occluding contours has received considerable attention. There have been two main approaches to 3D model acquisition using occluding contours: one approach integrates apparent contours from continuously varying viewpoints [1, 2, 3], and the other uses deformable models in which model constraints such as symmetries and other regularities are incorporated [4, 5, 6]. The former has the advantage that accurate shape recovery is possible. Although several results have been reported for acquiring 3D models of rigid objects, it seems difficult to recover 3D models from fragmented contours. On the other hand, the latter has the advantage that 3D models can be recovered even from contours taken from one viewpoint or from fragmented contours. However, the recovered models were forced to have rotational or mirror symmetries, and so they were regarded not as accurate but only as plausible.

In this paper, we propose a method for acquiring accurate 3D shape models of non-rigid moving objects directly from images. The proposed method is based on the combination of the above two approaches. In particular, we aim at the recovery of left ventricular (LV) shapes, one of the most important types of non-rigid objects [7]. We use x-ray cineangiocardiograms, that is, 2D projections of the LV, as the image data source. If we use ultra-fast CT or gated MRI synchronized with an electrocardiogram, we can directly obtain 3D cross-sectional information to recover 3D models [8, 9]. However, x-ray cineangiocardiography still has advantages over ultra-fast CT and gated MRI in temporal resolution and in spatial resolution along the axial direction of tomography. Furthermore, LV imaging by biplane or single-plane cineangiocardiography is a procedure commonly performed in cardiac catheterization, a routine examination regarded by physicians as the most reliable and accurate method for cardiac diagnosis. From the clinical viewpoint, there is a strong need for a more accurate LV recovery method that improves conventionally used cineangiography without introducing any special examination. One such effort is to use the density profiles [10] as well as the apparent contours of the LV. The problem with this approach is the difficulty of keeping the density of the contrast medium uniform.

Our recovery method integrates apparent contours from various viewpoints in order to acquire accurate 3D models for quantitative shape analysis in cardiac diagnosis. While LV images are taken from one or two fixed viewing directions in conventional cineangiocardiography, we vary the viewing directions continuously while the LV images are taken, which can easily be realized using conventional devices. Although such an image acquisition method was proposed previously [11], the recovery method was quite insufficient with regard to the use of the spatiotemporal smoothness of cardiac motion and shape, the evaluation of matches between the extracted contours and the projections of the recovered shapes, and the recovery from fragmented contours. In order to overcome those problems, we use a time-varying closed surface represented by B-spline functions of three variables (two for the surface and one for time), which is fitted directly and simultaneously to all contour data extracted from the time and viewpoint varying images.

2 Obtaining Time and Viewpoint Varying Images

We introduce a time-viewpoint space to clarify the advantages of time and viewpoint varying images. A viewpoint can be represented as a point on a spherical surface, which can be parameterized using latitude θ and longitude φ. In the case of LV recovery using a biplane x-ray system, we plan to vary the LAO and RAO (left- and right-anterior-oblique view) angles from 0° to 90° and from 90° to 0°, respectively. When the patient body is aligned with the polar direction of the spherical coordinate system, the variations of the LAO and RAO angles correspond to the variation of longitude φ. When only the variation of longitude φ is considered, a time-viewpoint space can be defined whose axes are time t and viewpoint φ.

We assume that an object is observed by two cameras (a biplane x-ray system) whose viewing directions are orthogonal and given by (φ1(t), φ2(t)), where |φ1(t) − φ2(t)| = π/2. The sample points for image acquisition are taken along φ1(t) and φ2(t) in t-φ space. If fixed viewpoints are assumed, the sample points in t-φ space are distributed as shown in Fig. 1(a). If time-varying viewpoints are assumed, more uniform sampling can be realized in t-φ space, as shown in Fig. 1(b). If both shape and motion can be assumed to be smooth, the recovery of more accurate 3D shapes can be expected from the combination of uniform sampling in t-φ space and an appropriate spatiotemporal interpolation method, as compared with the combination of images obtained by dense sampling along either t or φ and strong constraints on object shape or motion, except in the case of rigid objects (that is, no motion) or rotationally symmetric shapes.

In general, cardiac motion can be approximated as periodic motion. When object motion can be assumed to be periodic, time-varying viewpoints are even more advantageous: if the observable time is long compared with one cycle, we can obtain dense and uniform image sampling in t-φ space. In LV imaging by cineangiocardiography, the observable time is two or three seconds (that is, three to five cardiac cycles) during one injection of contrast medium. The sampling pattern shown in Fig. 1(c) is realized using the viewpoint variations (φ1(t), φ2(t)) = (π/(8T0)·t, π/(8T0)·t + π/2) (where T0 is one cycle of the periodic motion) and image acquisition with sampling interval T0/9. Different sampling patterns can be realized by changing the viewpoint variations (φ1(t), φ2(t)) and the sampling interval (Fig. 1(d)).
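As an illustration only (it is not part of the original paper), the following Python sketch generates biplane sample points in the t-φ space of Fig. 1; the function name and the NumPy-based implementation are assumptions made for this example.

```python
import numpy as np

def biplane_samples(T0, n_cycles, dt, omega):
    """Sample points (t mod T0, phi) for two cameras with orthogonal views.

    omega is the angular velocity of the viewpoint variation: omega = 0
    gives a viewpoint-fixed pattern like Fig. 1(a), while
    omega = pi / (8 * T0) with dt = T0 / 9 gives a pattern like Fig. 1(c).
    """
    t = np.arange(0.0, n_cycles * T0, dt)
    phi1 = omega * t                   # first camera, phi_1(t)
    phi2 = omega * t + np.pi / 2.0     # second camera, phi_2(t) = phi_1(t) + pi/2
    phase = np.mod(t, T0)              # for periodic motion only the phase matters
    return np.concatenate([np.stack([phase, phi1], axis=1),
                           np.stack([phase, phi2], axis=1)])

T0 = 1.0
fixed_pattern   = biplane_samples(T0, 4, T0 / 9.0, 0.0)                 # cf. Fig. 1(a)
varying_pattern = biplane_samples(T0, 4, T0 / 9.0, np.pi / (8.0 * T0))  # cf. Fig. 1(c)
print(fixed_pattern.shape, varying_pattern.shape)
```

Plotting the two arrays makes the difference visible: the viewpoint-fixed samples collapse onto two horizontal lines in t-φ space, while the viewpoint-varying samples spread roughly uniformly over one motion cycle.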

3 Recovery of Time-Varying 3D Shape Models

3.1 Representation of Time-Varying 3D Shape Models

We represent time-varying 3D shapes using uniform B-spline functions. In order to parameterize a closed surface, we use spherical coordinates. We specify a 3D position by latitude u, longitude v, and the distance r from the origin along the direction specified by latitude u and longitude v. This means that the recovered closed surfaces are limited to star-shaped surfaces with respect to the origin of the spherical coordinate system. Nevertheless, this class of surface is useful in many domains, especially in LV shape representation. Also, we assume that motion is periodic and its cycle is T0. Therefore, time-varying 3D shapes are represented by

    r(u, v, t) = \sum_{i=-3}^{i_0-1} \sum_{j=0}^{j_0-1} \sum_{k=0}^{k_0-1} R_{ijk} U_i(u) V_j(v) T_k(t),    (1)

where u ∈ [0, π], v ∈ [0, 2π], t ∈ [0, T0], R_{ijk} is a coefficient, U_i(u) is the basis function of the uniform cubic B-spline for non-periodic functions, and V_j(v) and T_k(t) are the basis functions for periodic functions.
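The following sketch, again not from the paper, evaluates a tensor-product surface of the form of Eq. (1) with uniform cubic B-spline bases, periodic in v and t and non-periodic in u; the coefficient indexing and knot layout are simplified assumptions rather than the paper's exact convention.

```python
import numpy as np

def cubic_bspline(x):
    """Centered uniform cubic B-spline kernel (support |x| < 2)."""
    ax = np.abs(x)
    return np.where(ax < 1.0, 2.0 / 3.0 - ax ** 2 + ax ** 3 / 2.0,
           np.where(ax < 2.0, (2.0 - ax) ** 3 / 6.0, 0.0))

def basis_matrix(s, n, periodic):
    """Values of n uniform cubic B-spline basis functions at parameters s in [0, 1].

    Periodic bases (used for v and t) wrap around the unit interval; the
    non-periodic case (used for u) is a simplified layout that loses the
    partition-of-unity property near the ends of the interval.
    """
    d = np.atleast_1d(s)[:, None] * n - np.arange(n)[None, :]
    if periodic:
        d = (d + n / 2.0) % n - n / 2.0       # shortest wrapped distance
    return cubic_bspline(d)

def eval_surface(R, u, v, t, T0):
    """r(u, v, t) = sum_ijk R_ijk U_i(u) V_j(v) T_k(t), cf. Eq. (1)."""
    U = basis_matrix(np.atleast_1d(u) / np.pi, R.shape[0], periodic=False)
    V = basis_matrix(np.atleast_1d(v) / (2.0 * np.pi), R.shape[1], periodic=True)
    T = basis_matrix(np.atleast_1d(t) / T0, R.shape[2], periodic=True)
    return np.einsum('pi,pj,pk,ijk->p', U, V, T, R)

# With all coefficients equal, the surface is (away from the u boundaries)
# a sphere of that radius at every time phase.
R = np.full((12, 12, 12), 1.0)
print(eval_surface(R, [0.5], [1.0], [0.2], T0=1.0))   # approx. [1.0]
```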

3.2 Constraints for 3D Recovery from Contours

Given an image with known viewpoint (θ, φ) and time t, we want to derive the constraints which relate the 2D coordinates of contour points in an image to the 3D position and normal on the time-varying surface r(u, v, t) (see Fig. 2(a)). For simplicity, we assume without loss of generality that the spherical coordinate system for representing a viewpoint is coincident with the one for representing a closed surface. Also, we assume orthography as the image projection model. A viewing direction can be given by v = (cos θ cos φ, cos θ sin φ, sin θ). We define the two orthogonal directions of the image axes as i = (−sin φ, cos φ, 0) and j = (−sin θ cos φ, −sin θ sin φ, cos θ). The optical ray corresponding to image coordinates I = (x, y) is given by P + λv, where P = xi + yj and λ is a scalar value. Here, we suppose that the optical ray passing through the origin of the spherical coordinate system is defined as λv.

Now, we derive the constraints on a surface represented by r(u, v, t), given an image contour point I = (x, y) at viewpoint v and time t. The 3D position X(u, v, t) on the time-varying surface is obtained by the transformation from spherical to Cartesian coordinates. Because r(u, v, t) is the distance from the origin along the 3D direction (cos v cos u, cos v sin u, sin v) at time t, the 3D position X(u, v, t) is given by X(u, v, t) = r(u, v, t)·(cos v cos u, cos v sin u, sin v). The surface normal n(u, v, t) at X(u, v, t) is given by n(u, v, t) = N[∂X(u, v, t)/∂u × ∂X(u, v, t)/∂v], where N[x] = x/|x|. If there is an image contour point I = (x, y) with viewpoint v and time t which is a projection of an occluding contour of the surface, the constraints given by

    X(u, v, t) = P + \lambda v,    (2)

and

    n(u, v, t) \cdot v = 0    (3)

must be satisfied at the corresponding surface coordinates (u, v), where λ = X(u, v, t)·v and P = xi + yj.
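As a numerical illustration of Eqs. (2) and (3) (not taken from the paper), the sketch below computes X(u, v, t) and the surface normal for a given radial function r and evaluates both constraint residuals for an image contour point; the function names and the finite-difference normal are assumptions made for this example.

```python
import numpy as np

def surface_point(r_func, u, v, t):
    """X(u, v, t) = r(u, v, t) * (cos v cos u, cos v sin u, sin v)."""
    direction = np.array([np.cos(v) * np.cos(u), np.cos(v) * np.sin(u), np.sin(v)])
    return r_func(u, v, t) * direction

def surface_normal(r_func, u, v, t, eps=1e-5):
    """n = N[dX/du x dX/dv], with the partial derivatives taken numerically."""
    dXu = (surface_point(r_func, u + eps, v, t) - surface_point(r_func, u - eps, v, t)) / (2 * eps)
    dXv = (surface_point(r_func, u, v + eps, t) - surface_point(r_func, u, v - eps, t)) / (2 * eps)
    n = np.cross(dXu, dXv)
    return n / np.linalg.norm(n)

def contour_residuals(r_func, u, v, t, x, y, theta, phi):
    """Residuals of Eq. (2) and Eq. (3) for image contour point (x, y)."""
    view = np.array([np.cos(theta) * np.cos(phi), np.cos(theta) * np.sin(phi), np.sin(theta)])
    i_ax = np.array([-np.sin(phi), np.cos(phi), 0.0])
    j_ax = np.array([-np.sin(theta) * np.cos(phi), -np.sin(theta) * np.sin(phi), np.cos(theta)])
    P = x * i_ax + y * j_ax
    X = surface_point(r_func, u, v, t)
    lam = np.dot(X, view)                                  # lambda = X . v
    eq2 = X - (P + lam * view)                             # should vanish on the optical ray
    eq3 = np.dot(surface_normal(r_func, u, v, t), view)    # should vanish on the occluding contour
    return eq2, eq3

# Check with a unit sphere viewed along the x-axis (theta = phi = 0): both residuals are ~0.
sphere = lambda u, v, t: 1.0
print(contour_residuals(sphere, u=np.pi / 2, v=0.3, t=0.0,
                        x=np.cos(0.3), y=np.sin(0.3), theta=0.0, phi=0.0))
```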

3.3 Iterative Method for Time-Varying 3D Recovery

Based on the constraints given by Eqs. (2) and (3), we formulate a method for estimating r(u, v, t). Eqs. (2) and (3) are the basic constraints for 3D recovery from occluding contours, which are also described in [3]. In our problem, however, it is difficult to obtain r(u, v, t) satisfying these constraints directly, because it is unknown which surface coordinates (u_ℓ, v_ℓ) correspond to the optical ray P_ℓ + λv_ℓ determined by given image coordinates (x_ℓ, y_ℓ). In order to obtain an approximate solution, we decompose the problem into two stages: first, we use Eq. (3) to find the correspondence between surface coordinates (u_ℓ, v_ℓ) and each optical ray determined by the image coordinates (x_ℓ, y_ℓ) of a given contour point; second, we estimate r(u, v, t) by solving a linear equation system obtained from Eq. (2). We iterate these two stages to finally obtain a solution satisfying both Eqs. (2) and (3).

We start the recovery algorithm by setting an initial shape r^{(0)}(u, v, t) and computing X^{(0)}(u, v, t) and n^{(0)}(u, v, t). (In the experiments, we used a sphere as the initial shape r^{(0)}(u, v, t).) Let m be the iteration count; we set m = 0 initially. During the first stage, we find a tentative correspondence between surface coordinates (u_ℓ, v_ℓ, t_ℓ) and the optical ray determined by image coordinates (x_ℓ, y_ℓ) at time t_ℓ (see Fig. 2(b)). We find this correspondence for every image contour point at every viewpoint and time. Given X^{(m)}(u, v, t), n^{(m)}(u, v, t), and the optical ray P_ℓ + λv_ℓ which corresponds to image contour point (x_ℓ, y_ℓ), we find u_ℓ, v_ℓ, and λ_ℓ satisfying the constraints

    n^{(m)}(u_\ell, v_\ell, t_\ell) \cdot v_\ell = 0,    (4)

and

    P_\ell + \lambda_\ell v_\ell = \alpha_\ell (\cos v_\ell \cos u_\ell, \cos v_\ell \sin u_\ell, \sin v_\ell),    (5)

where α_ℓ is a scalar coefficient. If we suppose that λ_ℓ is given, the 3D position of P_ℓ + λ_ℓ v_ℓ is determined. By representing this 3D position in spherical coordinates, u_ℓ, v_ℓ, and α_ℓ are uniquely determined using Eq. (5). We can then check whether Eq. (4) is satisfied for the determined u_ℓ and v_ℓ. In order to find u_ℓ, v_ℓ, and λ_ℓ satisfying the constraints, we continuously vary λ_ℓ and search for the value of λ_ℓ which satisfies Eq. (4). If there are multiple values of λ_ℓ satisfying Eq. (4), we select the one for which |α_ℓ − r^{(m)}(u_ℓ, v_ℓ, t_ℓ)| is minimum. During the second stage, based on the u_ℓ, v_ℓ, and α_ℓ found in the first stage, we estimate r^{(m+1)}(u, v, t) by solving a set of linear equations derived from

    r^{(m+1)}(u_\ell, v_\ell, t_\ell) = \alpha_\ell.    (6)

More precisely, combining this with the smoothness constraint, we find R^{(m+1)}_{ijk} minimizing

    E^{(m+1)} = \frac{1}{\ell_0} \sum_{\ell=1}^{\ell_0} \{ r^{(m+1)}(u_\ell, v_\ell, t_\ell) - \alpha_\ell \}^2
              + w_s \cdot \frac{1}{2\pi^2 T_0} \iiint \left[ \left( \pi \frac{\partial r^{(m+1)}(u, v, t)}{\partial u} \right)^2 + \left( 2\pi \cdot \frac{1}{\cos u} \cdot \frac{\partial r^{(m+1)}(u, v, t)}{\partial v} \right)^2 + \left( T_0 \frac{\partial r^{(m+1)}(u, v, t)}{\partial t} \right)^2 \right] du \, dv \, dt,    (7)

where

    r^{(m+1)}(u, v, t) = \sum_{i=-3}^{i_0-1} \sum_{j=0}^{j_0-1} \sum_{k=0}^{k_0-1} R^{(m+1)}_{ijk} U_i(u) V_j(v) T_k(t),    (8)

and w_s is a weight parameter for the smoothness constraint. The factors 1/ℓ_0 and 1/(2π²T_0) convert the summation and the integral into average values, which normalizes the smoothness constraint and the data constraint based on Eq. (6). The factors π, 2π, and T_0 (by which the partial derivatives are multiplied) normalize each partial derivative. The partial derivative with respect to v is multiplied by 1/cos u because of the reduction of length along v when approaching the poles. (In the experiments, the normalized partial derivatives were estimated using discrete approximations such as (R_{ijk} − R_{i+1,jk})/(1/i_0).) If the error E^{(m+1)} for the newly estimated r^{(m+1)}(u, v, t) is almost the same as the previous error E^{(m)}, we stop the algorithm; otherwise we set m = m + 1 and go back to the first stage. (Empirically, four iterations were sufficient for an appropriate weight value of the smoothness constraint.)
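The following Python sketch, which is not the authors' implementation, illustrates the first stage of one iteration: scanning λ along an optical ray, converting each candidate point to spherical coordinates as in Eq. (5), and keeping the root of Eq. (4) whose radius α is closest to the current estimate r^{(m)}. The helper names, the sampling-based root search, and the branch choice in the spherical conversion are assumptions made for this sketch.

```python
import numpy as np

def to_spherical(p):
    """One branch of inverting p = alpha*(cos v cos u, cos v sin u, sin v); cf. Eq. (5)."""
    alpha = np.linalg.norm(p)
    v = np.arcsin(np.clip(p[2] / alpha, -1.0, 1.0))
    u = np.arctan2(p[1], p[0])
    return u, v, alpha

def stage1_correspondence(r_est, n_est, P, view, t, lam_lo, lam_hi, num=400):
    """Search lambda along the ray P + lambda*view for roots of Eq. (4),
    n^(m)(u, v, t) . view = 0, keeping the root whose alpha is closest to
    the current radius estimate r^(m)(u, v, t)."""
    lams = np.linspace(lam_lo, lam_hi, num)
    best, best_gap = None, np.inf
    prev_f = prev_lam = None
    for lam in lams:
        u, v, alpha = to_spherical(P + lam * view)
        f = np.dot(n_est(u, v, t), view)
        if prev_f is not None and prev_f * f < 0.0:   # sign change brackets a root of Eq. (4)
            lam_root = 0.5 * (prev_lam + lam)         # crude midpoint; a bisection would refine it
            u, v, alpha = to_spherical(P + lam_root * view)
            gap = abs(alpha - r_est(u, v, t))
            if gap < best_gap:                        # keep alpha closest to r^(m)
                best, best_gap = (u, v, alpha), gap
        prev_f, prev_lam = f, lam
    return best   # (u_l, v_l, alpha_l), feeding the stage-2 equation r^(m+1)(u_l, v_l, t_l) = alpha_l

# Tiny check with a unit sphere viewed along the x-axis:
r_est = lambda u, v, t: 1.0
n_est = lambda u, v, t: np.array([np.cos(v) * np.cos(u), np.cos(v) * np.sin(u), np.sin(v)])
P = np.array([0.0, np.cos(0.3), np.sin(0.3)])         # a rim point of the unit sphere
print(stage1_correspondence(r_est, n_est, P, np.array([1.0, 0.0, 0.0]), 0.0, -1.0, 1.0))
```

The second stage then stacks one linear equation r^{(m+1)}(u_ℓ, v_ℓ, t_ℓ) = α_ℓ per contour point, adds the discretized smoothness terms of Eq. (7), and solves the resulting linear least-squares problem for the coefficients R^{(m+1)}_{ijk} (for example with numpy.linalg.lstsq).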

4 Experimental Results

We evaluated the method using synthesized and real image sequences. The method was implemented on a SPARCstation 20. The synthesized image sequences were generated using non-rotationally-symmetric time-varying shapes to simulate a diseased LV whose contraction is not uniform over the heart wall. The images were generated under orthographic projection. The real image sequences were obtained by taking x-ray images of a balloon filled with contrast medium using a biplane x-ray system (Siemens BICOR). We deformed the balloon shape by covering it with carton frames to make it asymmetric. We controlled its volume using a pump so that its time variation was periodic and similar to the LV. Also, we took CT images of the balloon at several time phases and used the 3D CT models as the gold standard. In this case, perspective projection is more appropriate than orthography; however, we applied the recovery method assuming orthography.¹

We took the zero-crossing points of the ∇²G with large gradient magnitude as image edges. Our method currently cannot discriminate "spurious" edges which should be regarded as outliers, so we removed these "spurious" edges manually. We randomly selected 25% of all the extracted edges in each image and used them for the 3D recovery. The spherical coordinate system for surface representation was selected using the following method. The axis of elongation of the projected shape was manually specified in two images taken from orthogonal viewpoints at the systolic phase. We use the 3D line segment determined from the two axes as the z-axis of the coordinate system. The origin was set at the center of the 3D line segment. The directions of the x-axis and y-axis were set to the two orthogonal directions from which the two images had been taken.

¹ The source to image intensifier distance (SID) is usually about 1 meter (= 100 cm), and the radius of the image intensifier is 5 inches, within which most of the left ventricle is imaged inside a 4 inch (≈ 10 cm) radius. The maximum angle between the ray direction and the direction orthogonal to the image intensifier plane is therefore expected to be arctan(10/100) ≈ 6°. Therefore, orthography can be a reasonable approximation of the x-ray projection process.
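A rough stand-in for the edge detection step (zero-crossings of ∇²G kept only where the gradient magnitude is large) can be written with SciPy as below; this is a generic Marr-Hildreth-style detector, not the authors' implementation, and the smoothing scale and threshold fraction are illustrative values.

```python
import numpy as np
from scipy import ndimage

def log_edges(image, sigma=2.0, grad_frac=0.2):
    """Zero-crossings of the Laplacian of Gaussian, gated by gradient magnitude."""
    img = image.astype(float)
    log = ndimage.gaussian_laplace(img, sigma)
    zc = np.zeros(log.shape, dtype=bool)
    zc[:-1, :] |= (log[:-1, :] * log[1:, :]) < 0    # sign change between vertical neighbours
    zc[:, :-1] |= (log[:, :-1] * log[:, 1:]) < 0    # sign change between horizontal neighbours
    grad = np.hypot(ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0))
    return zc & (grad > grad_frac * grad.max())

# Example on a synthetic bright disc (a crude stand-in for a contrast-filled ventricle):
yy, xx = np.mgrid[:220, :220]
disc = ((xx - 110) ** 2 + (yy - 110) ** 2 < 60 ** 2).astype(float)
print(log_edges(disc).sum(), "edge pixels")
```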

4.1 Synthesized Image Sequence

Three image sequences were generated using sampling patterns in the time-viewpoint space as shown in Fig. 1. One of these sampling patterns is shown in Fig. 1(c). The other two sequences were generated using fixed viewpoints (0, π/2) and (π/4, 3π/4) with time interval T0/36. Fig. 3(a) shows a part of the viewpoint-varying images. The size of each image was 220 × 220 pixels. Fig. 3(b) shows shaded displays of the time-varying 3D shapes recovered from the viewpoint-varying sequence. We used i0 = 12, j0 = 12, and k0 = 12 as the numbers of knots of the B-spline functions in Eq. (8); that is, the grid was 12 × 12 × 12.

Fig. 4(a) shows the convergence property for different values of the smoothness constraint weight w_s. Fig. 4(b) shows the time variations of the error in the recovered shapes when w_s = 0.5 × 10⁻³. We used the volume of the differences between the true shape and the recovered shape as a measure of error, and further divided this volume of differences by the true volume at each time phase to normalize the error. It should be noted that the volume of differences is not the difference between the true volume and the estimated volume; that is, we did not use the difference of volumes, but the difference of shapes. In Fig. 4(b), the normalized volume of differences in the results recovered from the viewpoint-fixed images depended strongly on the selection of the two viewpoints. We synthesized the time-varying shapes by deforming a roughly rotationally symmetric shape using time-varying scale functions along three orthogonal directions. Because the two orthogonal viewpoints happened to be close to two of these three orthogonal directions when (φ1(t), φ2(t)) = (0, π/2), the recovered shapes were relatively accurate. Nevertheless, the accuracy of the result recovered from the viewpoint-varying images was considerably higher throughout one cycle.

For a more detailed and comprehensive analysis, we evaluated how the resulting shape was affected by variations in the sampling pattern of the time-viewpoint space, the number of knots of the B-splines, and the smoothness constraint weight. Figure 5 shows the results of these evaluations. In Fig. 5, we used the normalized total volume of differences, given by ∫₀^{T0} e(t) dt / ∫₀^{T0} v(t) dt, as a measure of error, where e(t) is the volume of differences at time t, and v(t) is the volume of the true shape at time t. In Fig. 5(a), the velocity of viewpoint variation and the time interval of image acquisition were varied, by which different sampling patterns can be realized, while the smoothness constraint weight and the number of knots were fixed to 0.5 × 10⁻³ and 12 × 12 × 12, respectively. The velocity of viewpoint variation is inversely proportional to the number of sampling points along the φ-axis in t-φ space, which can also be regarded as the number of observable cardiac cycles during the viewpoint variation of 90 degrees. The time interval of image acquisition determines the number of sampling points along the t-axis. The number of sampling points in t-φ space is related to the amount of x-ray dose, and the number of observable cardiac cycles is related to the amount of contrast medium. Thus, Fig. 5(a) can be regarded as showing the trade-off between the accuracy of 3D recovery and the invasiveness for the patient. Based on the results shown in Fig. 5(a), the time interval 2T0/18 and the velocity of viewpoint variation π/(8T0) (with which four cardiac cycles are observable during 90 degrees of rotation) can be regarded as a good compromise.

In Fig. 5(b), the number of knots of the B-splines and the smoothness constraint weight were varied, while the velocity of viewpoint variation and the time interval of image acquisition were fixed to π/(8T0) and 2T0/18, respectively. This evaluation can be regarded as showing the trade-off between the accuracy and the stability of the method. The accuracy of 3D recovery gradually improved as the smoothness constraint weight became smaller, while the method rapidly became unstable for weights smaller than critical values depending on the number of knots, due to its non-linear and underconstrained nature. The convergence became unstable for the small weight, as shown in Fig. 4(a). However, the method was shown to be well-behaved over a wide range of smoothness constraint weight values.
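The error measure used above (the volume of the shape differences, not the difference of volumes) can be computed from voxelized shapes as in the following sketch, which is an illustrative reading of the text rather than the authors' evaluation code.

```python
import numpy as np

def volume_of_differences(true_vox, est_vox, voxel_volume=1.0):
    """Volume of the symmetric difference between two voxelized shapes."""
    return np.count_nonzero(true_vox ^ est_vox) * voxel_volume

def normalized_total_error(true_seq, est_seq, voxel_volume=1.0):
    """Integral of e(t) divided by the integral of v(t), approximated by sums
    over the sampled time phases."""
    e = sum(volume_of_differences(a, b, voxel_volume) for a, b in zip(true_seq, est_seq))
    v = sum(np.count_nonzero(a) * voxel_volume for a in true_seq)
    return e / v

# Example: two concentric spheres of slightly different radius on a voxel grid.
zz, yy, xx = np.mgrid[:64, :64, :64]
d2 = (xx - 32.0) ** 2 + (yy - 32.0) ** 2 + (zz - 32.0) ** 2
true_shape, est_shape = d2 < 20 ** 2, d2 < 18 ** 2
print(f"{100 * normalized_total_error([true_shape], [est_shape]):.1f}%")
```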

4.2 Real Image Sequence Using Balloon Phantom

One viewpoint-varying and two viewpoint-fixed sequences of real images were obtained using a biplane x-ray system. The viewpoint-varying sequence was generated using the viewpoint variations (φ1(t), φ2(t)) = (π/(5T0)·t, π/(5T0)·t + π/2), with sampling interval T0/9. The viewpoint-fixed sequences were generated using viewpoints (0, π/2) and (π/4, 3π/4) with time interval T0/36. The grid of B-splines was 12 × 12 × 12, and the smoothness constraint weight was w_s = 0.5 × 10⁻³. Fig. 6(a) shows a part of the viewpoint-varying images. Fig. 6(b) and Fig. 6(c) show shaded displays of the time-varying 3D shapes recovered from the viewpoint-varying sequence and the CT models at the same time phases. Table 1 shows the normalized volume of differences in the 3D shapes recovered from the viewpoint-varying and viewpoint-fixed sequences. We regarded the 3D CT models as the true shapes. The normalized volume of differences was around 10% at each time phase for the viewpoint-varying sequence, but between 20% and 30% for the viewpoint-fixed sequences. In this experiment, the balloon was deformed so that accurate recovery was difficult from any two orthogonal views, which is usually the case for a diseased LV. Thus, the advantage of the viewpoint-varying images is clearer in Table 1. Although we used orthography as the image projection model, the error evaluation results were regarded as acceptable for the viewpoint-varying sequence.

5 Discussion

For the representation of LV shape, spherical harmonics have been considered suitable [12, 13]. However, recent work has shown that fitting high-order harmonics is unstable in blank regions where no data is present, because the linear system is nearly underdetermined [13]. In our experiments as well, 3D recovery was shown to be unstable for small weights of the smoothness constraint, as shown in Fig. 5(b). As an alternative representation which stabilizes the recovery results, basis functions with small support regions, such as B-splines, can be used in combination with a smoothness constraint. Our representation belongs to this class.

Zhao and Mohr used the epipolar parameterization and B-splines for rigid 3D surface recovery [3]. The epipolar parameterization [14] is uniquely determined based on the viewpoints and the arc length along the apparent contours. Its straightforward extension to time-varying surface recovery seems inappropriate, but a similar method can be made applicable with the modifications described below. If we parameterize the time-varying surface using time t, viewpoint φ, and the polar angle v of the polar coordinates of contour points in each image, the correspondence between each contour point and (φ, v, t) is uniquely determined when all the images have star-shaped silhouettes. However, the resulting equations for 3D recovery are nonlinear [3]. While our method needs iterations to determine the correspondence between each contour point and (u, v, t), each iteration only requires solving a linear equation system. A further potential advantage of our method is that the variation of viewpoints need not depend on a single parameter such as φ. Thus, our method can integrate images from irregular viewpoints, viewpoints rotating around multiple axes, and so on.

6 Conclusion

We have developed a method for the accurate recovery of time-varying 3D shapes with a known cycle. The method is based on directly fitting a time-varying closed surface, represented using B-spline functions, to occluding contours obtained at different viewpoints as well as different times. We have shown considerable improvements in the accuracy of the shapes recovered from time and viewpoint varying images as compared with time-varying but viewpoint-fixed images. Our final goal is the recovery of LV shapes in a clinical setting. We are now developing a semi-automated method for edge detection from digital subtraction images with minimal user interaction [15]. Also, the system should be extended to deal with perspective projection based on the value of the SID and the radius of the image intensifier plane.

Acknowledgment

The authors would like to thank Hiroki Kaihori and Takeshi Sakamoto for their assistance in the experiments.


References

[1] R. Vaillant and O. D. Faugeras: "Using extremal boundaries for 3-D object modeling", IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 14, No. 2, pp. 157-173 (1992).
[2] J. Y. Zheng: "Acquiring 3-D models from sequences of contours", IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 16, No. 2, pp. 163-178 (1994).
[3] C. Zhao and R. Mohr: "Relative 3D regularized B-spline surface reconstruction through image sequences", Proc. European Conference on Computer Vision '94, pp. 417-426 (1994).
[4] D. Terzopoulos, A. Witkin, and M. Kass: "Constraints on deformable models: recovering 3D shape and nonrigid motion", Artificial Intelligence, Vol. 36, No. 1, pp. 91-123 (1988).
[5] A. Pentland, B. Horowitz, and S. Sclaroff: "Non-rigid motion and structure from contour", Proc. IEEE Workshop on Visual Motion, pp. 288-293 (1991).
[6] X. Shen and D. Hogg: "Shape models from image sequences", Proc. European Conference on Computer Vision '94, pp. 225-230 (1994).
[7] T. S. Huang: "Modeling, analysis, and visualization of nonrigid object motion", Proc. International Conference on Pattern Recognition '90, pp. 361-364 (1990).
[8] A. M. Taratorin and S. Sideman: "Constrained detection of left ventricular boundaries from cine CT images of human hearts", IEEE Trans. on Medical Imaging, Vol. 12, No. 3, pp. 521-533 (1995).
[9] A. Goshtasby and D. A. Turner: "Segmentation of cardiac cine MR images for extraction of right and left ventricular chambers", IEEE Trans. on Medical Imaging, Vol. 14, No. 1, pp. 56-64 (1995).
[10] G. P. M. Prause and D. G. W. Onnasch: "3-D reconstruction of the ventricular dynamic shape from the density profiles of biplane angiocardiographic image sequences", Proc. Computers in Cardiology, pp. 193-196 (1994).
[11] S. Eiho, M. Kuwahara, K. Shimura, M. Wada, M. Ohta, and T. Kozuka: "Reconstruction of the left ventricle from x-ray cineangiocardiograms with a rotating arm", Proc. Computers in Cardiology, pp. 63-67 (1984).
[12] R. B. Schudy and D. Ballard: "Towards an anatomical model of heart motion as seen in 4-D cardiac ultrasound data", Proc. 6th Conference on Computer Applications in Radiology and Computer Aided Analysis of Radiological Images (1979).
[13] A. Matheny and D. B. Goldgof: "The use of three- and four-dimensional surface harmonics for rigid and nonrigid shape recovery and representation", IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 17, No. 10, pp. 967-981 (1995).
[14] R. Cipolla and A. Blake: "Surface shape from the deformation of apparent contours", International Journal of Computer Vision, Vol. 9, No. 2, pp. 83-112 (1992).
[15] H. Kaihori, M. Moriyama, Y. Sato, M. Hanayama, H. Naito, and S. Tamura: "Left ventricle recovery from viewpoint-varying x-ray cineangiocardiograms: Development of user interface for clinical application", Record of the 1995 Kansai-Section Joint Convention of Institutes of Electrical Engineering, Japan, Kyoto, G352 (1995). (In Japanese)


Table 1: Normalized volume of differences of the recovered shapes of the balloon phantom.

Viewpoints (φ1(t), φ2(t))         | T0/18  | 5T0/18 | 9T0/18 | 13T0/18 | 15T0/18
(π/(10T0)·t, π/(10T0)·t + π/2)    | 10.6%  | 11.7%  |  9.4%  | 10.7%   |  9.8%
(0, π/2)                          | 21.3%  | 22.4%  | 22.5%  | 23.2%   | 22.6%
(π/4, 3π/4)                       | 25.2%  | 25.0%  | 25.0%  | 24.5%   | 26.7%


Figure 1: Sampling patterns in time-viewpoint space (t-φ space) assuming the use of two cameras with orthogonal viewpoints. (a) Sampling pattern for viewpoint-fixed images. (b) Sampling pattern for viewpoint-varying images of shapes with non-periodic motion. The velocity of viewpoint variation should be fast enough to obtain relatively uniform sampling. (c) Sampling pattern for periodic motion with cycle T0, realized by viewpoint variations (φ1(t), φ2(t)) = (π/(8T0)·t, π/(8T0)·t + π/2) and sampling interval T0/9. (d) Sampling pattern realized by (φ1(t), φ2(t)) = (π/(4T0)·t, π/(4T0)·t + π/2) and sampling interval 2T0/9.



Figure 2: (a) Constraints relating 2D coordinates I = (x, y) to 3D position X(u, v, t) and normal n(u, v, t). (b) Tentative correspondence between surface coordinates and the optical ray.

Figure 3: (a) Time and viewpoint varying images of the synthesized image sequence. (b) Shaded displays of the recovered 3D shapes.

[Figure 4 plots (a) error (pixels) versus iteration for smoothness weights 1.0 × 10⁻³, 0.5 × 10⁻³, 0.1 × 10⁻³, and 0.05 × 10⁻³, and (b) volume of differences (%) versus time for the viewpoint-varying and the two viewpoint-fixed sequences.]

Figure 4: (a) Convergence property. The square root of (1/ℓ_0) Σ_{ℓ=1}^{ℓ_0} {r^{(m)}(u_ℓ, v_ℓ, t_ℓ) − α_ℓ}² at each iteration is shown. (b) Time variations of the normalized volume of differences of the recovered shapes.

[Figure 5 plots the normalized total volume of differences (%) against (a) the sampling interval and (b) the smoothness constraint weight.]

Figure 5: Evaluation of the method under variations of several conditions. (a) Variations in the velocity of viewpoint variation (π/(4T0), π/(6T0), π/(8T0), π/(10T0)) and the time interval of image acquisition (T0/18, 2T0/18, 4T0/18, 8T0/18). The observable cardiac cycles during 90 degrees of rotation are two, three, four, and five using velocities of viewpoint variation of π/(4T0), π/(6T0), π/(8T0), and π/(10T0), respectively. (b) Variations in the number of knots of the B-splines (6 × 6 × 6, 8 × 8 × 8, 12 × 12 × 12, 16 × 16 × 16) and the weight of the smoothness constraint (for w_s ≤ 1.0 × 10⁻³).


Figure 6: (a) Real angiographic image sequence. (b) Shaded displays of the 3D shapes recovered from the real image sequence. (c) 3D CT models.
