Multiview diffeomorphic registration: application to motion and strain estimation from 3D echocardiography
Gemma Piella^{a,b}, Mathieu De Craene^{a,b}, Constantine Butakoff^{a,b}, Vicente Grau^{c}, Cheng Yao^{d}, Shahrum Nedjati-Gilani^{d}, Graeme P. Penney^{d}, Alejandro F. Frangi^{a,b,e}
a Center for Computational Imaging & Simulation Technologies in Biomedicine; Information & Communication Technologies Department, Universitat Pompeu Fabra, Barcelona, Spain.
b Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain.
c Department of Engineering Science, University of Oxford, United Kingdom.
d Division of Image Sciences & Biomedical Engineering, King's College London, London, United Kingdom.
e Department of Mechanical Engineering, University of Sheffield, Sheffield, United Kingdom.
Abstract
This paper presents a new registration framework for quantifying myocardial motion and strain from the combination of multiple 3D ultrasound (US) sequences. The originality of our approach lies in estimating the transformation directly from the multiple input views rather than from a single view or a reconstructed compounded sequence. This allows us to exploit all the spatiotemporal information available in the input views, avoiding occlusions and image fusion errors that could introduce inconsistencies in the motion quantification. We propose a multiview diffeomorphic registration strategy that enforces smoothness and consistency in the spatiotemporal domain by modeling the 4D velocity field continuously in space and time. This 4D continuous representation considers 3D US sequences as a whole, and therefore copes robustly with variations in heart rate that result in different numbers of images acquired per cardiac cycle for different views. This contributes to the robustness gained by solving for a single transformation from all input sequences. The similarity metric takes into account the physics of US images and uses a weighting scheme to balance the contribution of the different views. It includes a comparison both between consecutive images and between a reference and each of the following images. The strain tensor is computed locally using the spatial derivatives of the reconstructed displacement fields. Registration and strain accuracy were evaluated on synthetic 3D US sequences with known ground truth. Experiments were also conducted on multiview 3D datasets of 8 volunteers and 1 patient treated by cardiac resynchronization therapy. Strain curves obtained from our multiview approach were compared to the single-view case, as well as to other multiview approaches. For healthy cases, the inclusion of several views improved the consistency of the strain curves and reduced the number of segments in which a non-physiological strain pattern was observed. For the patient, the improvement (pacing ON vs. OFF) in synchrony of regional strain correlated with blinded clinical assessment and could be seen more clearly when using the multiview approach.
Keywords: Spatiotemporal registration, strain, 3D ultrasound, multiview, fusion, compounding.
1. Introduction
Recent advances in the design of 3D ultrasound (US) probes have led to improved temporal resolution of this modality and the possibility of extending motion and deformation analysis from 2D to 3D. Despite significant algorithmic advances, such quantitative analysis remains challenging due to multiple image quality artifacts (including image speckle and signal dropouts) and a limited field of view. A common approach to address these problems is the combination of multiple US images obtained from different angles of incidence into a single compounded image (Rohling et al. (1997); Grau and Noble (2005); Wachinger et al. (2008); Szmigielski et al. (2010); Yao et al. (2011)). The rationale is to exploit the redundant and complementary
Figure 1: Image fusion example. Images (a) and (b) show long and short axis slices from the apical and parasternal views, respectively, of a left ventricle, while image (c) shows an example of a corresponding fused image. Green arrows on the input images indicate missing information or artifacts. Red arrows on the fused image indicate areas that were well defined in only one of the input views, and therefore benefit from multiview image fusion.
information contained in the multiple input images to yield a single output image with improved quality and a larger field of view. This combination process is commonly known as multiview fusion or spatial compounding, because the US beams are swept spatially to achieve different viewpoints. The different views to be combined can come from a single acoustic window (e.g., either apical or parasternal) or from multiple ones (e.g., both apical and parasternal). As an alternative to spatial compounding, fusion can also be performed from images acquired with different acoustic frequencies (i.e., frequency compounding) or under different strain conditions (i.e., strain compounding); here, strain refers to externally applied forces such as the ones used in sonoelastography and should not be confused with intrinsic regional myocardial strain. Multiview fusion relies on uncorrelated image artifacts and on the consistency of (desired) reflectivity patterns across the input images. Under these conditions, image artifacts and noise are reduced while structures and tissue boundaries are reinforced. Fig. 1 shows two input cardiac views from different acoustic windows (Fig. 1(a)-(b)) affected by artifacts and missing anatomical information, and how the fusion of these images (Fig. 1(c)) can alleviate these problems. An overview of multiview fusion methods and their main features is given in Table 1. We distinguish the existing approaches by the basic components of a fusion algorithm: the type of compounding, the registration method and the combination or fusion rule. Besides these components, the table also reports the purpose of the fusion, the quantification domain (if any), whether a single or multiple acoustic windows are used, and the validation procedure of the fusion algorithm. The type of compounding refers to the imaging conditions under which the input images are acquired. We differentiate between methods using spatial compounding, in which imaging is performed from different spatial positions, and methods using a frequency or strain approach, in which images are acquired in different frequency or strain ranges. Most existing methods use spatial compounding, which is especially interesting for 3D US, since it can be performed with a standard free-hand system without the need for any specialised hardware. Moreover, spatial compounding can handle occlusion and missing anatomical information, as well as allowing for an increased field of view. The registration method is the process by which the inputs are aligned to a common coordinate system. Generally, this is achieved by finding a rigid transformation which maximizes some similarity metric (see column 'Metric', under 'Registration', in Table 1). When fusing sequences, both temporal and spatial registration are needed since the heart rate may vary over consecutive cardiac cycles. These variations tend to be nonlinear, yet temporal alignment is most often disregarded or performed using linear interpolation. Finally, the fusion rule deals with the actual merging of the (aligned) inputs. The combination algorithm usually operates at the voxel level, in the spatial domain, at specific points in time and without considering temporal information. Notably, the majority of the fusion methods are targeted towards improving contrast and compensating for dropouts (see column 'Purpose' in Table 1). There have been many reports on how to improve US image quality by image fusion. Grau and Noble (2005) presented a method for combining 3D US images from apical and parasternal views using multiscale information about the structural content of the images and their orientation. They extracted this information from the images' phases and combined them using different fusion rules. The images were manually registered. Grau et al. (2007) addressed the problem of how to automatically register the views using a similarity metric based on local orientation and phase differences. Yao et al. (2011) proposed a fusion rule that weighted image information based on local feature consistency between the input images. The method was designed to enhance contrast at boundaries while reducing noise in homogeneous regions. Rajpoot et al. (2011b) presented an acquisition protocol and a fusion method for apical 3D US images using a wavelet-based fusion rule. Recently, some publications have addressed image fusion from the perspective of automatic quantitative analysis. Because the performance of such algorithms is highly dependent on image quality and tissue-volume definition, multiview fusion may be a valuable pre-processing step for improving their accuracy. Ye et al. (2002) first combined the input images and then used the resulting fused image to compute the ejection fraction and volume-time curves of the left ventricle (LV). Rajpoot et al. (2011a) also performed LV segmentation and motion estimation from a fused image sequence. They showed that multiview fusion resulted in improved segmentation and tracking performance. In all these cases, however, the inputs are regarded as relatively independent images that are combined to produce another image with improved quality and higher information content. It is upon this fused image that further quantitative analysis is performed (see column 'Quant. Domain' in Table 1). Nonetheless, conventional fusion algorithms have the side effect that speckle gets blurred by the combination procedure. This blurring effect becomes especially critical when estimating tissue deformation. Moreover, other artifacts introduced by the fusion process, such as ghosting and diverging ring-down artifacts
(see Heng and Widmer (2010) for details) can effectively hamper motion and deformation estimation. Also, reflection artifacts from the pericardium may get amplified. Another major disadvantage of these conventional fusion approaches is that images from different inputs are independently combined at every cardiac phase without considering the entire sequence, and hence without exploiting the temporal information embedded in the 3D US sequences. These problems (loss of texture detail, fusion artifacts and lack of spatiotemporal consistency) could be overcome by estimating motion directly from the input images while considering the spatiotemporal consistency within the whole sequence. Grau et al. (2008) proposed a motion estimation algorithm that used images from apical and parasternal views. The algorithm was based on optical flow, using a variational formulation and a similarity metric adapted to the multiplicative noise of US images as proposed in Cohen and Dinstein (2002). The results were validated on three volunteers and one simulated dataset. They did not, however, address the temporal consistency problem, and they used the registration method of Grau et al. (2007) for the alignment of the input sequences. In this paper, we propose a multiview fusion framework to recover a more accurate in vivo quantification of 3D cardiac deformation over time from 3D US sequences acquired from multiple views of possibly different acoustic windows. A major feature of our approach is that we compute deformations through multiview registration and hence directly from the input views, using all available spatiotemporal information. This allows the exploitation of the inter-view and temporal coherence while taking advantage of the heteroscedastic nature of the speckle noise. By using the original input images, speckle information (which is an important feature for motion estimation and could be blurred out in the fusion process) remains consistent between temporal image frames. Another key characteristic is the use of a spatiotemporal diffeomorphic registration algorithm that models the velocities continuously in time and space (De Craene et al. (2012)). This allows, on the one hand, capturing the spatiotemporal variability of the underlying scene while maintaining consistency (i.e., preservation of spatial and temporal topology) and, on the other hand, accounting for irregular temporal sampling. In this way, our approach does not require the inputs to have the same number of cardiac phases or to be scanned at the same instants within the cardiac cycle. A preliminary version of this work was presented in Piella et al. (2011). Results were shown for two volunteers and one simulated dataset. The current paper extends that work by presenting a more extensive description of the fusion framework, introducing a combined local and global (in a temporal sense) similarity metric based on the physics of US acquisition, using a feature-based weighting scheme, validating the results in a larger image study, and including a performance study of the proposed multiview registration algorithm. The remainder of this paper is organized as follows. Section 2 describes the algorithm, with emphasis on the transformation model, the weighting scheme and the similarity metric. Section 3 presents the application of our algorithm to strain quantification from 3D US sequences. We have applied the proposed methodology to synthetic 3D US sequences with known ground truth and to in vivo multiview 3D datasets of 8
volunteers and 1 patient treated with cardiac resynchronization therapy (CRT). Strain curves obtained from our multiview approach are compared with the single-view case, as well as with other multiview approaches (namely, other classical image fusion schemes and the algorithm of Grau et al. (2008)). Section 4 discusses the advantages and limitations of our approach in comparison with existing strategies and points out future directions. Finally, Section 5 gives some conclusions.
2. Methods
Each of the input sequences gives a view of the imaged scene in a limited portion of space and with anisotropic image quality. The purpose of our multiview registration is to integrate this information in order to estimate, from the multiple views, the trajectory of any material point in the imaged scene. In our setting, this amounts to finding the spatiotemporal transformation that transports any point in a common coordinate system at the initial time to any subsequent continuous time in the cardiac cycle. Henceforth, we refer to the common coordinate system as the fusion space, and take the initial time to be t = 0. The motion in each view sequence is related to the motion in the fusion space: homologous points and trajectories in the different inputs should map to the same points and trajectories in the fusion space (Section 2.2 and Section 2.4). This is illustrated in Fig. 2. Continuous spatiotemporal trajectories are computed from the 4D velocity field parameterized by a 4D grid of control points with B-spline kernels (Section 2.3). We then formulate the multiview registration as the optimization of a US physics-based similarity metric matching intensities of the input views warped back from the fusion space. The matching is weighted across the views to account for the different image quality across the field of view of each input sequence (Section 2.5). The multiview registration pipeline is summarized in Table 2. Finally, strain is computed as described in Section 2.6.
2.1. Notation
By convention, superscript indices refer to the different views whereas subscript indices refer to time. We consider L single-view input sequences, each one representing a different 3D+t view of the same dynamic cardiac scene. Each input sequence is composed of N^l images, I^l_0, ..., I^l_{N^l−1}, l = 1, ..., L, every image I^l_n being defined on a spatial domain Ω^l ⊂ ℝ³ and associated with a normalized time instant t^l_n ∈ [0, 1], n = 0, ..., N^l − 1. Different sequences may have different numbers of images N^l and may not be aligned in time; thus t^l_n and t^{l′}_n for l ≠ l′ may not correspond to the same position within the cardiac cycle. Spatial coordinates in the fusion space are denoted by x ∈ Ω ⊂ ℝ³ and spatial coordinates in the space of each view by x^l ∈ Ω^l. Consistent with these notations, I^l_n(x^l) represents the intensity level of view l at point x^l ∈ Ω^l and time t^l_n, and I_t(x) is the intensity level of the real scene at x ∈ Ω and time t. We denote by C^l the inter-sequence registration transformation mapping the fusion space to the space of each
Table 1: Overview of existing ultrasound fusion algorithms

Reference | Purpose | Quant. Domain | Type | Registration Metric | Temporal alignment | Fusion rule | Multi-window | L | Validation Data | Validation Metric
Grau and Noble (2005) | V | NA | Spatial | NA | LI | PB | Yes | 2 | 2D synthetic; 2 volunteers | CNR; Visual IQ
Grau et al. (2007) | R | NA | Spatial | PB | LI | NA | Yes | 2 | 9 patients | Visual IQ
Grau et al. (2008) | T | OI | Spatial | UB | NA | EW | Yes | 2 | 2D synthetic; 3 volunteers | MSE; Dice
Szmigielski et al. (2010) | V | NA | Spatial | CC | LI | EW | No | 2–6 | Phantom; 16 volunteers + 16 patients | CNR; Visual IQ, volumetric-based
Yao et al. (2011) | V | NA | Spatial | PB | NA | FB | Yes | 5–10 | Phantom; 10 volunteers + 2 patients | SNR, contrast; Visual IQ
Rajpoot et al. (2011a) | S&T | FI | Spatial | CC | LI | WB | No | 3–8 | 34 subjects | Visual classification, volumetric-based, Dice, MSD
Rajpoot et al. (2011b) | V | NA | Spatial | CC | None | WB | No | 3–6 | 36 volunteers | SNR, CNR, edge-based, FOV
Proposed approach | T&D | OI | Spatial | PB | TDFFD | FB | Yes | 2–20 | 3D synthetic; 8 volunteers + 1 patient | MSE, dispersion; dispersion
Others:
Cincotti et al. (2001) | V | NA | Frequency | NA | NA | EW | No | 3 | Phantom | CNR
Li and Chen (2002) | V | NA | Strain | NA | NA | EW | No | 4 | Phantom | CNR
Yang et al. (2009) | V | NA | Strain | NA | NA | DB | No | 2–16 | Phantom; 3 patients | edge-based, CNR; Visual IQ

NA: not applicable or not specified.
Purpose: fusion application (V: visualization, T: tracking, R: registration, S: segmentation, D: deformation estimation).
Quant. Domain: quantification domain (OI: original images (input views), FI: fused image). For algorithms whose purpose is visualization, this column does not apply; however, any further quantification would be expected to be performed on the fused image.
Type: type of compounding.
Registration Metric: similarity metric used for the spatial inter-view alignment (UB: US physics-based, PB: phase-based, CC: cross-correlation).
Temporal alignment: temporal inter-view alignment (LI: linear interpolation, TDFFD: temporal diffeomorphic free-form deformation (De Craene et al. (2012))).
Fusion rule: combination method for compounding the inputs (PB: phase-based, EW: equal weighting, WB: wavelet-based, FB: feature-based, DB: diffusion-based).
Multi-window: whether the images to fuse come from multiple acoustic windows.
L: number of input views to fuse.
Validation Data: data used in the validation.
Validation Metric: performance metrics used for the fusion algorithm evaluation (CNR: contrast-to-noise ratio, IQ: image quality, MSE: mean square error, SNR: signal-to-noise ratio, FOV: field of view, MSD: mean surface distance).
Figure 2: Multiview registration scheme. A material point in the fusion space can be related to homologous points in the input views. Hence, the evolution in time of homologous points describes the same trajectory in the fusion space.
input view, and refer to it as the calibration transformation so as not to confuse it with the intra-sequence registration transformations. The (intra-sequence) transformation in the fusion space is denoted ϕ and maps a point x ∈ Ω at t = 0 to its position ϕ(x, t) at time t ∈ [0, 1]. Similarly, ϕ^l(x^l, n) denotes the transport of a point x^l at time t^l_0 = 0 to time t^l_n ∈ [0, 1]. We also use the short-hand notation ϕ_{t→t′} (and ϕ^l_{n→n′}) to denote the transport of a point at time t (respectively t^l_n) to time t′ (respectively t^l_{n′}). Note that although the transformations ϕ and ϕ^l are defined for any time t ∈ [0, 1], the time indices in the notation ϕ^l_{n→n′} refer to the time points t^l_n at which the images of view l have been acquired. The main notations used in this paper are summarized in Appendix C.
2.2. Calibration and ECG synchronization
We use piecewise linear ECG synchronization as in Duchateau et al. (2011) to represent the input sequences in a common reference time scale. The calibration transformations C^l that relate the fusion space with each input view space are obtained as in Yao et al. (2011) using a combination of optical tracking and groupwise registration (Wachinger et al. (2008)) with a phase-based similarity measure (Grau et al. (2007)). During each acquisition, subjects were asked to hold their breath at exhale and remain as still as possible
Table 2: Multiview registration pipeline
Algorithm steps:
1. Choose the fusion space.
2. Compute the calibration transforms C^l and the ECG-based synchronization of the views (cf. Section 2.2).
3. Compute the weights w^l for balancing the contribution of the views.
4. Using the transformation model (3) and the links between fusion and input view spaces (4):
   (a) Initialize ϕ by optimizing the weighted global metric M_G (cf. Section 2.5).
   (b) Compute ϕ by optimizing the weighted combined metric M_G + M_L (cf. Section 2.5).
to minimize respiration artifacts. Since the US probe is held stationary over the entire acquisition of each input sequence, the calibration transformations C^l are assumed to be constant over the cardiac cycle.
2.3. Transformation model
We use a spatiotemporal diffeomorphic transformation model in which the velocity field is represented as a continuous and differentiable 4D vector field using B-splines (De Craene et al. (2012)). The diffeomorphic mapping ϕ : Ω × [0, 1] → Ω, Ω ⊂ ℝ³, is related to the time-varying velocity field v : Ω × [0, 1] → ℝ³ by

ϕ(x, t) = x + ∫_0^t v(ϕ(x, τ), τ) dτ .    (1)
Hence, each point x ∈ Ω evolves along the trajectory ϕ(x, t), t ∈ [0, 1], its velocity at time τ being by definition v(ϕ(x, τ), τ). In our model, the transformation parameters are the control point values in the B-spline representation of the velocity field. The B-spline velocity coefficients assigned to all control points are concatenated in a vector of parameters p, the velocity being thus expressed as

v(x, t; p) = Σ_{i,j,k,l} β((x − x_i)/Δ_x) β((y − y_j)/Δ_y) β((z − z_k)/Δ_z) β((t − t_l)/Δ_t) p_{i,j,k,l} ,    (2)
where x = (x, y, z), β(·) is a 1D cubic B-spline kernel, {x_i, y_j, z_k, t_l} define a regular grid of 4D control points, and Δ_x, Δ_y, Δ_z, Δ_t are the spacings between control points in each dimension. To numerically compute ϕ in (1), the continuous time interval is sampled at intermediary time points t_k and the integral is replaced by a summation:

ϕ(x, t_m; p) = x + Σ_{k=0}^{m−1} v(ϕ(x, t_k; p), t_k) Δt_k ,    (3)

where Δt_k = t_{k+1} − t_k. To ensure invertibility, the time increment Δt_k is adjusted, at each k, to guarantee that the spatial Jacobian of ϕ is positive definite everywhere. We refer to our previous work in De Craene et al. (2012) for a more detailed description of the velocity field computation and how it generates trajectories in the 4D image domain.
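As a rough illustration of Eqs. (2)–(3), the following sketch (Python/NumPy, not the paper's C++/ITK implementation; the grid layout and helper names are hypothetical) evaluates a cubic-B-spline-parameterized velocity field and integrates one trajectory by the forward summation of Eq. (3):

```python
import numpy as np

def bspline3(u):
    # 1D cubic B-spline kernel beta(u), with support |u| < 2.
    u = abs(u)
    if u < 1.0:
        return 2.0 / 3.0 - u**2 + 0.5 * u**3
    if u < 2.0:
        return (2.0 - u)**3 / 6.0
    return 0.0

def velocity(x, t, p, grid, spacing):
    # Eq. (2): separable cubic B-spline kernels weighted by the control-point
    # velocity coefficients p[i, j, k, l] (each a 3-vector).
    xs, ys, zs, ts = grid          # control-point coordinates along each axis
    dx, dy, dz, dt = spacing       # control-point spacings
    v = np.zeros(3)
    for i, xi in enumerate(xs):
        bx = bspline3((x[0] - xi) / dx)
        if bx == 0.0:
            continue
        for j, yj in enumerate(ys):
            by = bspline3((x[1] - yj) / dy)
            if by == 0.0:
                continue
            for k, zk in enumerate(zs):
                bz = bspline3((x[2] - zk) / dz)
                if bz == 0.0:
                    continue
                for l, tl in enumerate(ts):
                    bt = bspline3((t - tl) / dt)
                    if bt != 0.0:
                        v += bx * by * bz * bt * p[i, j, k, l]
    return v

def trajectory(x0, time_points, p, grid, spacing):
    # Eq. (3): forward (Euler) summation of the velocity field, returning
    # phi(x0, t_m) at every sampled time point t_m.
    xs = [np.asarray(x0, dtype=float)]
    for t0, t1 in zip(time_points[:-1], time_points[1:]):
        xs.append(xs[-1] + velocity(xs[-1], t0, p, grid, spacing) * (t1 - t0))
    return np.array(xs)
```

In the actual algorithm the step Δt_k is additionally adapted so that the spatial Jacobian of ϕ remains positive definite, which this sketch omits.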
2.4. Transporting trajectories from the input views to the fusion space
Since all input views describe the same cardiac scene, homologous points correspond to the same material point in the fusion space and map to the same trajectories. Consider a point x^l in the space of view l whose trajectory is described by ϕ^l_{0→n}(x^l; p). This trajectory can also be tracked in the fusion space by first warping the point x^l to the fusion space (by the inverse of C^l), then transporting it to t^l_n (by ϕ_{0→t^l_n}), and finally bringing this transformed point back to the space of view l (by C^l). This is illustrated in Fig. 3. Thus, each ϕ^l_{0→n} is related to ϕ_{0→t^l_n} through

ϕ^l_{0→n} = C^l ∘ ϕ_{0→t^l_n} ∘ (C^l)^{−1} ,    (4)

where C^l is the calibration transformation obtained as described in Section 2.2 and ϕ_{0→t^l_n} is the transformation in the fusion space modeled as in (3).
[Commutative diagram: I^l_0 → I^l_n via ϕ^l_{0→n} in the input view space Ω^l, and I_0 → I_{t^l_n} via ϕ_{0→t^l_n} in the fusion space Ω, the two spaces being linked by the calibration transformation C^l.]
Figure 3: Link between transformations in each input view space Ω^l and transformations in the fusion space Ω. Note that ϕ^l_{0→n} ∘ C^l = C^l ∘ ϕ_{0→t^l_n}.
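A minimal sketch of Eq. (4), under the assumption that the calibration C^l is available as a 4×4 homogeneous matrix and that the fusion-space transport ϕ_{0→t} is given as a callable (for instance, the trajectory integrator sketched above); all names are illustrative:

```python
import numpy as np

def apply_affine(A, x):
    # Apply a 4x4 homogeneous calibration matrix to a 3D point.
    return (A @ np.append(x, 1.0))[:3]

def track_view_point(x_l, C_l, t_n, phi_fusion):
    """Eq. (4): phi^l_{0->n}(x^l) = C^l( phi_{0->t^l_n}( (C^l)^{-1}(x^l) ) ).

    C_l        : 4x4 homogeneous matrix of the calibration transform C^l.
    phi_fusion : callable (x, t) -> transported point in the fusion space.
    """
    x_fusion = apply_affine(np.linalg.inv(C_l), x_l)   # warp into the fusion space
    x_moved = phi_fusion(x_fusion, t_n)                # transport to time t^l_n
    return apply_affine(C_l, x_moved)                  # map back to the view space
```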
2.5. Similarity metric
The transformation ϕ in the fusion space is obtained through the optimization of a similarity metric matching intensities of the input views warped back from the fusion space. Specific similarity metrics need to be designed to capture the similarity of the images across and within the input views. For that purpose, we incorporate statistical features of US images into the metric while accounting for the different degrees of confidence associated with the different inputs. Hence, for each view l and location x in the fusion space at t = 0, we assign a value w^l(x), which weighs the contribution of the view to the similarity metric. Coherent US speckle can be a very useful feature for motion tracking (Cohen and Dinstein (2002); Yue et al. (2009)). However, over longer time scales incoherent speckle patterns introduce random texture variations which can adversely affect similarity measures. We therefore propose using a combination of two similarity measures: the first operates on a local temporal level and only compares images acquired at adjacent time points; the second operates on a global temporal level and compares images over the entire cardiac cycle. The local metric is able to make use of speckle tracking information, whereas the global metric, as all the images can be compared, ensures that small registration errors do not accumulate into significant errors over the entire cardiac cycle. Our proposed similarity metric for capturing the optimal set of B-spline velocity coefficients p in (2) is therefore

M(p) = M_L(p) + λ M_G(p) ,    (5)

where M_L and M_G are, respectively, the local and global similarity metrics, and λ is a constant that maps M_G into a range comparable to that of M_L. Further description of these metrics is given in the following subsections. As illustrated in Fig. 4, the registration based on the combined similarity metrics M_L and M_G is initialized by a first estimate ϕ̂ of the transformation obtained when considering only the global metric M_G. For the metric computation, we consider a randomly drawn set of samples {x_{j,0} ∈ Ω_0, j = 1, ..., J}, where Ω_0 is the subdomain of Ω (the fusion space) at time t = 0 enclosing the region of interest (e.g., the LV domain). These samples are propagated in time using ϕ̂ and can then be mapped to every input view space by the calibration transformation C^l:

y^l_{j,n} = C^l ∘ ϕ̂_{0→t^l_n}(x_{j,0}) ,    (6)

for all j = 1, ..., J, l = 1, ..., L and n = 0, ..., N^l − 1.
2.5.1. Local similarity metric M_L
The local similarity metric M_L is intended to capture the local intensity variation due to speckle noise. To this end, we use the Cohen and Dinstein (2002) similarity metric, which takes into account that speckle patterns can be represented by multiplicative Rayleigh-distributed noise, and that B-mode imaging is log-compressed. A brief overview of this metric is given in Appendix A. Adapting it to our multiview framework, the proposed local similarity metric takes the form
M_L(p) = Σ_{j=1}^{J} Σ_{l=1}^{L} Σ_{n=1}^{N^l−1} w^l(x_{j,0}) [ ln( exp( 2 Δ^l_n(y^l_{j,n−1}; p) ) + 1 ) − Δ^l_n(y^l_{j,n−1}; p) ] ,    (7)

where w^l(x_{j,0}) is the weight for view l assigned to x_{j,0} ∈ Ω_0, y^l_{j,n} = C^l ∘ ϕ̂_{0→t^l_n}(x_{j,0}), and Δ^l_n(y^l_{j,n−1}; p) is the intensity difference between homologous points in view l at consecutive time points t^l_{n−1} and t^l_n, i.e.,

Δ^l_n(y^l_{j,n−1}; p) = I^l_{n−1}(y^l_{j,n−1}) − I^l_n( ϕ^l_{n−1→n}(y^l_{j,n−1}; p) ) .    (8)
Thus, such an US-specific metric considers the correlated speckle noise between consecutive images and has inherent robustness to speckle decorrelation, which makes it a suitable metric for fully-developed speckle noise.
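As an illustration of Eqs. (7)–(8), the sketch below computes the contribution of one view and one pair of consecutive frames; intensity sampling and the warping by ϕ^l_{n−1→n} are abstracted behind hypothetical interpolator callables, so this is only a sketch of the metric term, not the full registration:

```python
import numpy as np

def local_metric_term(I_prev, I_curr, pts_prev, pts_curr, w):
    """Cohen-Dinstein-style term of Eq. (7) for one view and one frame pair.

    I_prev, I_curr : callables returning the (log-compressed) intensity at a
                     3D point, e.g. trilinear interpolators of I^l_{n-1}, I^l_n.
    pts_prev       : sample positions y^l_{j,n-1} in frame n-1.
    pts_curr       : the same samples transported by phi^l_{n-1 -> n}.
    w              : per-sample weights w^l(x_{j,0}).
    """
    delta = np.array([I_prev(a) - I_curr(b) for a, b in zip(pts_prev, pts_curr)])
    # ln(exp(2*delta) + 1) - delta, written with logaddexp for numerical safety.
    return np.sum(w * (np.logaddexp(0.0, 2.0 * delta) - delta))
```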
Figure 4: Metric computation and transformation initialization. First, an initial registration based on the global metric M_G is performed. The resulting transformation ϕ̂ is then used to initialize the registration based on the combined similarity metrics M_G and M_L. The global metric M_G compares a reference image with each of the following images, while the local metric M_L compares consecutive images.
2.5.2. Global similarity metric M_G
On the other hand, the global similarity metric M_G takes into account the similarities between the global intensity distributions of the homogeneous regions (i.e., that tissue and blood-pool intensities are globally preserved over the cardiac cycle). For simplicity, we use a weighted mean square error and, for all views, we compare each image in the sequence to the first one:

M_G(p) = Σ_{j=1}^{J} Σ_{l=1}^{L} Σ_{n=1}^{N^l−1} w^l(x_{j,0}) [ Δ^l_{0,n}(y^l_{j,0}; p) ]² ,    (9)

where y^l_{j,0} = C^l ∘ x_{j,0} and Δ^l_{0,n}(y^l_{j,0}; p) is the intensity difference between homologous points in view l at t = 0 and t^l_n, i.e.,

Δ^l_{0,n}(y^l_{j,0}; p) = I^l_0(y^l_{j,0}) − I^l_n( ϕ^l_{0→n}(y^l_{j,0}; p) ) .    (10)

Such a frame-to-reference similarity metric has robustness against temporal drift (De Craene et al. (2011)).
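The global term of Eqs. (9)–(10) and the combination in Eq. (5) can be sketched in the same illustrative style (helper names are again hypothetical, and the per-view contributions are assumed to have been collected with the previous sketch):

```python
import numpy as np

def global_metric_term(I_ref, I_curr, pts_ref, pts_curr, w):
    # Weighted squared frame-to-reference differences, Eqs. (9)-(10).
    delta = np.array([I_ref(a) - I_curr(b) for a, b in zip(pts_ref, pts_curr)])
    return np.sum(w * delta**2)

def combined_metric(ml_terms, mg_terms, lam):
    # Eq. (5): M(p) = M_L(p) + lambda * M_G(p), where the two lists collect the
    # per-view, per-frame contributions computed with the helpers above.
    return sum(ml_terms) + lam * sum(mg_terms)
```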
2.5.3. Weighting and optimization
The intensity levels are only compared within views, avoiding potential problems with intensity normalization while allowing the speckle to be tracked. We consider the weighting mappings w^l to be the same for both M_L and M_G. In this paper, we used three different weighting schemes to balance the contributions of the different views to the image similarity metric: averaging, the maximum selection rule, and the feature-consistency-based approach described in Yao et al. (2011). The weighting is across all views and is defined in the fusion space at the initial time t = 0, i.e., w^l : Ω_0 → ℝ. The domain Ω_0 is obtained by segmenting the LV at t = 0 in the fusion space as described in Section 2.6. As optimizer, we used the limited-memory Broyden-Fletcher-Goldfarb-Shanno minimization with simple bounds (L-BFGS-B) (Byrd et al. (1995)), which is particularly suited to large-scale optimization problems. The computation details of the total derivative of the metric M in (5) are given in Appendix B.
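For illustration only, the bounded quasi-Newton optimization could be set up as below with SciPy's L-BFGS-B (the paper's implementation uses a C++/ITK optimizer); `metric_and_gradient` is a hypothetical callable returning M(p) and its total derivative:

```python
import numpy as np
from scipy.optimize import minimize

def optimize_velocity_parameters(metric_and_gradient, p0, v_max=50.0):
    # p0: initial B-spline velocity coefficients (flattened). Simple bounds keep
    # the velocity coefficients within a plausible range (illustrative value).
    bounds = [(-v_max, v_max)] * p0.size
    result = minimize(metric_and_gradient, p0, jac=True,
                      method="L-BFGS-B", bounds=bounds,
                      options={"maxiter": 200})
    return result.x  # optimized parameter vector p
```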
2.6. Myocardial strain estimation
The strain is estimated from the spatial derivative of the resulting spatiotemporal transformation ϕ. If ∇ϕ(x, t) is the spatial gradient of ϕ(x, t) (i.e., the deformation gradient tensor), the strain tensor is obtained as

ε(x, t) = (1/2) ( ∇ϕ(x, t)^T ∇ϕ(x, t) − I ) ,    (11)

where superscript T denotes transposition and I is the identity matrix.
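Eq. (11) translates directly into code. The sketch below uses a finite-difference deformation gradient as a stand-in for the analytical gradient of the B-spline transform, and adds the projection onto a local cardiac direction (radial, circumferential or longitudinal) discussed below; `phi` is a hypothetical callable (x, t) → ϕ(x, t):

```python
import numpy as np

def deformation_gradient(phi, x, t, h=1e-3):
    # Spatial gradient of phi(x, t) by central finite differences (3x3 matrix).
    F = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3)
        e[j] = h
        F[:, j] = (phi(x + e, t) - phi(x - e, t)) / (2.0 * h)
    return F

def green_lagrange_strain(F):
    # Eq. (11): E = 1/2 (F^T F - I).
    return 0.5 * (F.T @ F - np.eye(3))

def directional_strain(E, d):
    # Strain along a unit direction d (radial, circumferential or longitudinal).
    d = d / np.linalg.norm(d)
    return d @ E @ d
```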
The discrete gradient ∇ϕ(x, t_m) is computed from (3) as explained in De Craene et al. (2012). The strain tensor can further be projected onto a local cardiac coordinate system to compute the deformation in the radial, circumferential and longitudinal directions. These directions are defined on a mesh obtained by segmenting the LV at t = 0 in the fusion space. The segmentation algorithm is based on active shape models (Butakoff et al. (2007)), and the left ventricular image in the fusion space is obtained by combining the input images at t = 0 using the same weighting scheme as for the similarity metric. This segmentation is also used to define the region of interest around the LV (the Ω_0 domain in Section 2.5). As this region of interest is larger than the segmented LV, the impact of an incorrect segmentation on tracking is expected to be small. Moreover, the segmentation is done on the fused image to minimize segmentation errors. The strain data were averaged over 17 regions in accordance with the standard division of the LV proposed by the American Heart Association (AHA) (Cerqueira et al. (2002)). Since strain is computed in a Lagrangian space of coordinates, both the local coordinate system and the AHA segments only need to be defined at t = 0 in the fusion space, i.e., on Ω_0. Note that the fused image is only computed at the initial time point t = 0, since it is only needed to segment the LV for defining the region of interest Ω_0 and the local coordinate system.
2.7. Implementation details
We applied a multigrid approach with two grid refinements to capture large displacements as well as fine details. The algorithm was implemented in C++ using the open-source libraries ITK and VTK. All experiments were run on a Linux server with two quad-core Intel Xeon processors (2.66 GHz, 16 GB RAM). For two input views of 150 × 150 × 150 voxels and a temporal resolution of 20 frames per cardiac cycle, the total computation time is about 3 hours. However, the algorithm allows easy parallelization (each view and each sample in the fusion space can be processed independently when computing its contribution to the metric and its derivative). Therefore, an implementation on a GPU architecture or using multithreading is expected to significantly reduce computation time.
3. Results
The proposed multiview registration algorithm was applied to synthetic 3D US sequences with known ground truth to evaluate its accuracy, both in terms of displacement and strain, and was then applied to
in vivo multiview 3D US sequences of 8 volunteers and 1 CRT candidate. Strain curves obtained from our multiview approach were compared to the ones obtained from a single view (one of the inputs and a standard fused sequence) and to the multiview approach of Grau et al. (2008).
3.1. Strain accuracy in simulated US data
We first generated a synthetic 3D US sequence of a full-view normal LV, from which we determine the optimal settings of the registration algorithm (Section 3.1.1) and then construct the multiview synthetic dataset (Section 3.1.2). Similarly to Elen et al. (2008), the LV geometry was represented by a thick-walled ellipsoid with end-diastolic dimensions within normal limits, and LV deformation was modeled by a simplified kinematic model with a 60% ejection fraction and 15° torsion over a cardiac cycle. This model was used to generate ground truth values for both motion (displacement) and deformation (strain). The simulated sequence consisted of 300 × 300 × 300 isotropic voxels with a voxel size of 0.36 mm and a temporal resolution of 20 frames for a cardiac cycle. The initial frame corresponded to end diastole. The LV was represented by 0.7 point scatterers per voxel (within the myocardium) during 20 frames, generating a whole cardiac cycle. The echogenicity of the scatterers in the first frame was sampled from a normal distribution but kept constant over the following frames to provide correlated noise between the frames. Additionally, uncorrelated noise was added to every frame independently. It consisted of uniformly distributed scatterers (with a density of 0.7 scatterers per voxel within the field of view) with echogenicity sampled from the normal distribution. Its standard deviation was five times smaller than that of the myocardial echogenicity, resulting in a signal-to-noise ratio (SNR) of 16 dB.
Figure 5: Long-axis slices from the end-diastolic (a) and end-systolic (b) frames of the full-view simulated sequence.
3.1.1. Choice of registration parameters
We used this (single) full-view synthetic sequence to optimize the registration parameters, which will then be used in the experiments with multiview datasets, and to test the suitability of using the combined similarity metric M over M_L and M_G alone. The influence of several parameters of the transformation model (Section 2.3) on the displacement and strain accuracy was already studied in detail in De Craene et al. (2012). In that previous work, the tracking algorithm was also optimized on synthetic images, and it was shown that applying it to single-view sequences acquired from healthy subjects resulted in strain curves that were in accordance with the physiological ranges and patterns reported in the literature. For our synthetic sequence, the optimal B-spline grid resolution was determined by minimizing the error between the true and the recovered displacements, which occurred when using one control point per frame in the temporal direction, 5 control points in the short-axis direction and 3 control points in the long-axis direction. The factor λ in (5) is computed as the ratio of the M_L to M_G metric values at the first iteration of the optimization process. The benefits of using the combined similarity metric M instead of just M_L or M_G are illustrated in Fig. 6. The curves show the median magnitude of the difference between the ground truth displacement field and the ones obtained when using the metrics M, M_L and M_G. The error (in millimeters) was computed over the entire myocardium. Vertical bars indicate the dispersion of the error (as measured by the interquartile range). The horizontal axis is the normalized cardiac time (from 0 to 1, going from the beginning of systole until late diastole). For all three metrics, the error increases when estimating larger displacements. The results indicate that the estimation error for the proposed metric M is a good compromise between low error values over the contraction period (systole) and low temporal drift in the last phases.
3.1.2. Experiments with a multiview synthetic dataset
We present displacement and strain quantification results for a multiview dataset (Fig. 7) consisting of two views generated by reducing the field of view (from 70° to 50°) so that it does not cover the entire LV (as can happen when imaging dilated hearts), and by rotation and translation of the original points (simulating the motion of the probe from the apex towards the lateral wall and towards the interventricular septum). To evaluate the ability of our algorithm to deal with missing information in the input views, the displacements and deformations of the ground truth were compared with the displacements and deformations found after the multiview registration of the multiview dataset. For these experiments, the maximum selection rule was chosen as weighting scheme (i.e., for each voxel, only the view with maximum intensity contributes to the metric). For comparison purposes, we also included the results obtained when using the single full view (Fig. 5) and one of the single partial views (Fig. 7(a)). Thus, we present motion and deformation results from the algorithm when using a full view, one single partial view and two partial views. Fig. 8 plots the median magnitude of the difference between the ground truth displacement field and the ones given by two algorithms: the algorithm in Grau et al. (2008) (Fig. 8(a)) and our multiview registration
Figure 6: Displacement error as measured by different similarity metrics. The plot shows median value of the error magnitude on the displacement field for the full-view sequence when using: the proposed combined metric M (black), the local metric ML (blue dashed) and the global metric MG (red dashed). The error is measured in millimeters over the entire myocardium. Vertical bars indicate dispersion of the error values as measured by the interquartile range. The horizontal axis is the normalized cardiac time (from 0 to 1).
algorithm (Fig. 8(b)). The latter shows that there is no substantial difference between the multiview result (red dashed curve) and the full view (black curve). In contrast, the error obtained with a single partial view (blue dashed curve) is slightly larger (maximal median error of 2.50 mm for the partial view, 2.01 mm for the multiview set, and 1.98 mm for the full view). The top row of Fig. 9 shows the ground truth strain curves. Vertical bars indicate the dispersion over the whole myocardium (as measured by the interquartile range). The rows below show the strain curves as recovered by the algorithm in Grau et al. (2008) and by our multiview approach, respectively. The second and third rows correspond to the strain obtained from the full view. The fourth and fifth rows show the strain obtained from the partial view, and the last two rows the strain obtained from the multiview dataset. One can see that with our algorithm, in all cases, the global strain patterns resemble the ground truth but with increased dispersion. However, while the dispersions obtained from the full view and the multiview dataset are similar, the one obtained from the partial view is considerably larger in the longitudinal and circumferential directions. To further illustrate the greater strain dispersion when using a single view, Fig. 10 shows the color maps of longitudinal strain along time corresponding to the full view, the partial view and the multiview sequence, when using our multiview algorithm. A zero deformation corresponds to dark red while the maximum peak value (end-systole) corresponds to dark blue. As a global trend, the strain increases during systole (shortening) and reduces during diastole (lengthening). However, one can see how in the partial
Figure 7: Multiview synthetic dataset. Long-axis slices from the end-diastolic frames of two views simulating different view angles (variation of ±15◦ in probe angle).
view (middle row) there are areas in which the strain is less uniform and differs considerably from the full view. This was to be expected since the lack of information (here in regions of the apical and middle levels) makes motion tracking, and thereby strain quantification, less accurate.
3.2. Strain quantification in clinical data
3D+t US sequences with a number of views varying from 3 to 20 were acquired from 8 healthy volunteers and from a CRT candidate using an iE33 US system (Philips, Best, The Netherlands) with a 3D X3-1 matrix array transducer. For the volunteer datasets, the transducer was optically tracked using a Northern Digital Optotrak (NDI, Ontario, Canada) and calibrated using the method described in Ma et al. (2008). For the patient dataset, initial image registrations were obtained by manually picking corresponding anatomical landmarks. This information was used to provide an initial estimate for the calibration transformation described in Section 2.2. Triggered by ECG gating, wide-sector acquisitions were taken from parasternal and apical windows with the subject positioned in the left lateral decubitus position. Subjects were instructed to hold their breath at exhale and lie as still as possible during each acquisition. The angle and position of the probe were slightly changed for each window to vary the content of each dataset. The different configurations in terms of number of inputs and type of views are given in Table 3. Fig. 1(a)-(b) shows example slices for 2 views from one of the volunteer datasets. All sequences consisted of 224 × 208 × 201 voxels with an average voxel size of 0.8 × 0.8 × 0.7 mm³, and 11–20 frames per cardiac cycle depending on the heart rate. Different views from the same subject did not always have the same number of frames. Variations in the
Figure 8: Displacement error as measured by (a) Grau’s algorithm and (b) our algorithm. The plots show median value of the error magnitude on the displacement field for the full view (black), one single partial view (blue dashed) and the two partial views (red dashed). The error is measured in millimeters over the entire myocardium. Vertical bars indicate dispersion of the error values as measured by the interquartile range. The horizontal axis is the normalized cardiac time (from 0 to 1).
heart rate within the same subject were up to 14% (4% on average). The patient underwent CRT following current clinical guidelines, and data were acquired before (OFF) and after (ON) CRT device activation. A blinded clinician was asked to look at the data sequences and correctly distinguished the OFF and ON sequences from one another. The coordinate space of one of the sequences acquired near the apical position was used as the fusion space. This sequence, hereafter referred to as the reference view, was chosen for its better quality in terms of completeness of anatomical information compared to the other single-view sequences. The registration parameters were set according to the optimal values found for the synthetic dataset in Section 3.1.1. All clinical images were acquired at St. Thomas' Hospital, London. Image quality was representative of the challenges inherent to conventional 3D US scans as acquired in clinical routine in terms of spatiotemporal resolution, SNR and field of view.
3.2.1. Strain quantification in healthy volunteers
There are no standard quantitative validation measures for assessing fusion algorithms (see column 'Validation' in Table 1). Rajpoot et al. (2011a), when evaluating the tracking performance obtained from the fused sequence, used one of the apical single-view sequences as a reference. They considered its end-systolic endocardial surface obtained by manual segmentation as ground truth, and compared the tracking when using the fused sequence and when using the (reference) single-view one. Quantitative measures such as Dice coefficients, mean surface distance and end-systolic volume were employed to assess the agreement
Figure 9: Median strain computed from ground truth displacements (top row), from Grau's algorithm and from our algorithm using the full view (2nd and 3rd rows), one of the partial views in Fig. 7 (4th and 5th rows) and the two partial views (6th and 7th rows). Strain curves are shown for the radial (left), longitudinal (central) and circumferential (right) directions. The black dashed curves show the ground truth. Vertical bars indicate the dispersion of the myocardial strain values as measured by the interquartile range. The horizontal axis is the normalized cardiac time (from 0 to 1).
between tracked and reference surfaces. Similarly, Grau et al. (2008) computed Dice coefficients and mean surface distance to measure the agreement between tracked and ground truth (chosen here as the end-
Figure 10: Evolution of longitudinal strain values over the cardiac cycle (at normalized t = 0.15, 0.35, 0.45 and t = 0.75, from left to right) for the full view (top), the partial view (middle), and the multiview dataset (bottom) sequences.
diastolic endocardial surface of one of the apical sequences). They compared the tracking performance when using both views and when using single views. Strong arguments can be made that a fusion algorithm should be evaluated according to its target application. In our case, we target accurate strain quantification, for which there is no standard validation measure except for direct comparison, if available, with strain from tagged magnetic resonance imaging as a reference. Yet, in healthy volunteers, it is expected that all regions contract synchronously with a similar amplitude (Edvardsen et al. (2002)). Hence, despite the lack of a standard strain validation measure, a smaller dispersion in strain values across the AHA segments can be regarded as an improvement in strain accuracy, since regional variations in strain are small in normal myocardium. We measured this dispersion using the interquartile range across all AHA segments (except the apex). The median value and dispersion of the resulting distribution are considered descriptors of the homogeneity over all studied segments.
Table 3: Clinical datasets description in terms of number and type of views

Subject | L | apical | parasternal | non-standard
Volunteer 1 | 5 | 3 | 2 | -
Volunteer 2 | 12 | 5 | 5 | 2
Volunteer 3 | 6 | 3 | 3 | -
Volunteer 4 | 12 | 6 | 3 | 3
Volunteer 5 | 20 | 10 | 10 | -
Volunteer 6 | 13 | 13 | - | -
Volunteer 7 | 9 | 9 | - | -
Volunteer 8 | 12 | 8 | - | -
CRT Candidate | 3 | 3 | - | -

L: total number of input views (L = #apical + #parasternal + #non-standard)
For each volunteer, the dispersion was measured for the radial, longitudinal and circumferential strain obtained from the reference view sequence, the fused sequence and the corresponding multiview dataset (Table 3). For the fused sequence and the multiview datasets, strain values were quantified for three different choices of weighting scheme: averaging, the maximum selection rule, and the feature-consistency-based approach described in Yao et al. (2011). As in previous works on multiview fusion, to construct the fused sequence, images were combined at each cardiac phase n and, in the cases where the number of images differed, the minimum number of images N^l, l = 1, ..., L, was kept. Note that for the reference view and the fused sequence we have a single view from which to compute the strain (particularizing our algorithm to L = 1, hence not taking advantage of the multiview registration), whereas for the multiview sequence we have several views on which we apply our multiview algorithm to exploit the information from all the views. Fig. 11 shows, for every study group, boxplots of the radial, longitudinal and circumferential strain dispersion (considering all the evaluated segments from all volunteers). In each box, the central mark is the median, the edges of the box are the first and third quartiles, and the whiskers extend to the most extreme data values. These distributions did not satisfy the conditions of normality and homoscedasticity. Hence, comparisons between the reference dispersion distribution and each of the other group distributions were done using Welch's test (i.e., the unequal-variance t-test), which is quite robust against non-normality. We did not find significant differences (p < 0.05) between the median values of the reference and the fused groups, while we did find that the median values of the reference and the multiview groups were significantly different in all cases except for the longitudinal strain dispersion obtained when using the maximum selection rule as weighting scheme. Under the assumption of homogeneous strain patterns in normal myocardium, these results suggest that strain quantification from several views increases strain accuracy. Interestingly, strain dispersion from the fused sequences was in general smaller than, but not significantly different from, the reference one.
This smaller improvement compared to the multiview approach is likely due to the errors introduced by the fusion process and to the disregard of temporal information (Section 1).
[Figure 11 panels: (a) Radial strain dispersion, (b) Longitudinal strain dispersion, (c) Circumferential strain dispersion.]
Figure 11: Boxplots of the strain dispersion distributions from all volunteers as measured from the reference view sequence (‘R’), the fused sequences (‘Fa ’, ‘Fm ’ and ‘Ff ’) and the multiview dataset (‘Ma ’,‘Mm ’ and ‘Mf ’). For the fused and multiview datasets, three weighting schemes were considered: the average, the maximum selection rule and Yao’s feature-based one, respectively corresponding to subindexes ‘a’, ‘m’ and ‘f ’.
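A hedged sketch of the dispersion descriptor and the group comparison described above; the segment-wise strain arrays and group labels are assumptions, and SciPy's unequal-variance t-test is used as an implementation of Welch's test:

```python
import numpy as np
from scipy import stats

def strain_dispersion(segment_strain):
    # Interquartile range of strain across AHA segments (apex excluded),
    # used as the per-subject dispersion descriptor.
    q75, q25 = np.percentile(segment_strain, [75, 25])
    return q75 - q25

def compare_groups(dispersion_ref, dispersion_other):
    # Welch's test between the reference-view and, e.g., multiview dispersion
    # distributions pooled over all volunteers (equal_var=False -> Welch).
    t_stat, p_value = stats.ttest_ind(dispersion_ref, dispersion_other,
                                      equal_var=False)
    return p_value
```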
Sample results of the recovered strain curves are shown in Fig. 12. The curves correspond to volunteer #2. For this example, Yao's feature-based weighting scheme was used for obtaining the fused sequence and for weighting the metric (equations (5), (7) and (9)) in the multiview approach. For easier visualization, only the mid and basal segments are shown. The left column shows the strain obtained from the reference view. One can observe non-physiological strain patterns in some of the AHA segments, especially in the radial and circumferential directions. This goes along with the fact that the reference is an apical view. Since the SNR is better along the US beam, longitudinal strain is more accurately estimated from apical views, while radial and circumferential strain are usually better assessed from parasternal views. As can be seen from Fig. 12(d), strain patterns in the longitudinal direction look normal except for the basal septal and basal anterolateral regions, which show almost no deformation. Strain curves recovered from the fused sequence are depicted in the middle column of Fig. 12. Notably, the radial strain is very inhomogeneous, exhibiting instances of wall thinning and thickening occurring simultaneously in different AHA segments, as also occurred for the reference view. Interestingly, both the longitudinal and circumferential strains show a more uniform pattern than in the reference, having corrected some of the abnormal strain patterns seen in Fig. 12(d) and Fig. 12(g). Finally, strain curves obtained from the multiview dataset are shown in the right column of Fig. 12. One can see how non-physiological strain patterns have been normalized in most cases, resulting in a smaller variability of strain and fewer abnormal strain patterns. Strain results can also be visualized as colored surface maps. Fig. 13 shows the color maps of the radial strain along time corresponding to the reference view, the fused view (using Yao's fusion approach) and our multiview algorithm, applied to the same volunteer as in Fig. 12.
[Figure 12 panels: (a) Radial strain from reference, (b) Radial strain from fused, (c) Radial strain from multiview; (d) Longitudinal strain from reference, (e) Longitudinal strain from fused, (f) Longitudinal strain from multiview; (g) Circumferential strain from reference, (h) Circumferential strain from fused, (i) Circumferential strain from multiview.]
Figure 12: Myocardial strain quantified for volunteer #2 when using the reference view (left), the fused sequence (middle) and the multiview dataset (right). Yao’s feature-based weighting scheme was used for the construction of the fused sequence and for balancing the views in the multiview dataset. The AHA segments are labelled according to the legend given in Appendix D. Thick blue curve corresponds to the mean strain. The horizontal axis is the normalized cardiac time (from 0 to 1).
3.2.2. Patient strain quantification
We present here an example of a potential clinical application where multiview registration could help in the quantitative assessment of mechanical dyssynchrony. For CRT responders, improvements both in strain magnitude and in synchronization among the AHA segments are expected after the implantation of the pacemaker (Bertola et al. (2009); Klimusina et al. (2011)). Fig. 14 shows boxplots of the radial, longitudinal and circumferential strain dispersion of the patient before
Figure 13: Evolution of radial strain values over the cardiac cycle (at normalized t = 0.15, 0.35, 0.45 and t = 0.75, from left to right) for the reference view (top), the fused view (middle), and the multiview dataset (bottom) sequences.
(OFF) and after (ON) the implantation. Dispersion was computed in the same way as for the volunteers. As before, these distributions did not follow a normal distribution nor had they equal variances and, hence, within each group (reference, fused and multiview), the null hypothesis of equal medians between OFF and ON was tested using Welch's test. A significant change (p < 0.05) in median was found in all cases except for the circumferential strain dispersion in the fused sequence. A lower p-value reflects stronger evidence of differences in dispersion between OFF and ON. Fig. 15 shows the recovered circumferential strain curves for the patient at OFF and ON. Results are shown for the reference view (left), the fused sequence (middle) and the multiview dataset (right). Yao's feature-based weighting scheme was used for obtaining the fused sequence and for weighting the metric in the multiview approach. In all three cases, negative and positive strain values coexist at both OFF and
[Figure 14 (truncated): boxplots of strain dispersion at OFF and ON, annotated with p-values p=0.034, p=0.007, p=0.014, p=0.006, p=0.07.]