Optical Flow Processing for Sub-pixel Registration of Speckle Image Sequences

Corneliu Cofaru(a), Wilfried Philips(a) and Wim Van Paepegem(b)

(a) Ghent University, Telin - IPI - IBBT, St-Pietersnieuwstraat 41, B-9000 Gent, Belgium
(b) Ghent University, Department of Mechanical Construction and Production, St-Pietersnieuwstraat 41, B-9000 Gent, Belgium

ABSTRACT

In recent years digital image processing techniques have become a very popular way of determining strains and full-field displacements in experimental mechanics, both because of advancements in image processing and because the measurement process itself is simpler and non-intrusive compared to traditional sensor-based techniques. This paper presents a filtering technique which processes the polar components of the image displacement fields. First, pyramidal gradient-based optical flow is calculated between blocks of each two frames of a speckle image sequence, compensating in the calculation for small rotations and shears of the image blocks. The polar components of the resulting motion vectors, phase and amplitude, are then extracted. Each motion vector angle is smoothed temporally using a Kalman filter that takes into account previously calculated angles located at the same spatial position in the motion fields. A subsequent adaptive spatial filter is used to process both the temporally smoothed angles and the amplitudes of the motion field. Finally, test results are presented of the proposed method applied to artificial data sets and to a speckle image sequence showing plastic material subjected to uniaxial stress.

Keywords: Optical flow, Subpixel, Kalman filter, Adaptive spatial filter, Speckle image

1. INTRODUCTION

Digital Image Correlation (DIC) has become an important and widely used optical method in experimental mechanics for measuring displacements and strains of materials that undergo physical deformation. It can provide a full-field 2D displacement measurement using a relatively simple mechanical setup and without interfering with the material itself. The classical strain gauge approach, by contrast, gives information only about a specific area of the specimen, is difficult to bond to some materials and might debond at higher strains. The basic principle of DIC relies on comparing an image of the specimen before deformation with one after deformation and extracting the motion information between the two by minimizing a chosen similarity function. Various methods for obtaining the displacements have been developed after the pioneering work of Sutton et al. [1, 2] to improve the performance and accuracy of the original algorithm. These methods involve intensity interpolation [3, 4], correlation function interpolation or surface fitting [5, 6], double Fourier transforms [7-9], Newton-Raphson iterations [10-12] or image intensity gradient methods [13, 14]. Image gradient methods are similar to the Lucas-Kanade optical flow algorithm [15], extracting the motion information from the spatio-temporal derivatives of small image blocks (also called subsets) through the minimization of a quadratic error function between the deformed and reference image block. They provide a good compromise between speed and accuracy [16], making feasible the use of more advanced image processing techniques for increased accuracy. These may include the addition of regularization terms, robust error functions or the use of information from more than two frames [17-22]. In this paper the image gradient method is used to calculate the displacements between each two consecutive frames of an image sequence on a two-level Gaussian pyramid [23].
The angle of each motion vector of the resulting motion field is then temporally smoothed using a Kalman filter that takes into consideration previously obtained motion vector angles in the same spatial position. Subsequently, both angles and amplitudes of the motion vectors are filtered with an adaptive spatial filter in order to reduce spatial inconsistencies. Section 2 of the paper describes the optical flow calculation within each of the pyramid's levels and Section 3 the proposed temporal and spatial filter formulations. The algorithm is applied to images that underwent both artificial deformation, for which the displacements are known, and real mechanical deformation, with the results presented in Section 4 and the conclusions in Section 5.

Further author information: (Send correspondence to C.C.)
C.C.: E-mail: [email protected], Telephone: +32 9 264 3416
W.P.: E-mail: [email protected], Telephone: +32 9 264 3385
W.V.P.: E-mail: [email protected], Telephone: +32 9 264 4207

Applications of Digital Image Processing XXXI, edited by Andrew G. Tescher, Proc. of SPIE Vol. 7073, 70731A, (2008) · 0277-786X/08/$18 · doi: 10.1117/12.797031

2. MOTION ESTIMATION

Obtaining motion information through classical optical flow techniques relies on the assumption that image brightness patterns remain constant in time, changing only position. Let I(x, y, t) be an image sequence with x and y the spatial coordinates and t the temporal one. The stationarity of the image brightness can be expressed as follows:

I(x + dx, y + dy, t + 1) = I(x, y, t) ,    (1)

where

dx(x, y) = ax x + bx y + cx ,    (2)

dy(x, y) = ay x + by y + cy    (3)

are the affine horizontal and vertical displacement models of an image pixel located at coordinates (x, y) in the images. The model has been widely used [17, 24-26] and includes support for small rotation, shearing or scaling modifications along with the translational component. For image blocks small enough to assume Eqs. (1), (2) and (3) valid, a first-order Taylor expansion of Eq. (1) results in the following optical flow constraint:

Ix dx + Iy dy + It = 0 ,    (4)

where Ix, Iy and It denote the spatial and temporal partial derivatives of the image block. Solving Eq. (4) using a weighted least-squares method for two temporally consecutive blocks I(Ω)(t) and I(Ω)(t + 1) of size N × N pixels that occupy the same spatial positions (denoted by the superscript Ω) in the corresponding images of the sequence is equivalent to solving:

A^T W^2 A · p = A^T W^2 It ,    (5)

where

A = [ x Ix(Ω)   y Ix(Ω)   Ix(Ω)   x Iy(Ω)   y Iy(Ω)   Iy(Ω) ] ,    (6)

p = [ ax   bx   cx   ay   by   cy ]^T ,    (7)

It = I(Ω)(t + 1) − I(Ω)(t) ,    (8)

and W is an N^2 × N^2 diagonal weight matrix. The elements of A and It are obtained by scanning the spatial and, respectively, temporal derivative values of the block in raster order and placing them in a single column. The solution p of Eq. (5) is given by:

p = (A^T W^2 A)^{−1} A^T W^2 It ,    (9)

which can be solved in closed form when A^T W^2 A is nonsingular. Because of the aperture problem, A^T W^2 A is often singular; this inconvenience is greatly reduced when working with highly textured images such as speckle images, provided that the image block sizes used for the optical flow computation are larger than the maximum size of the speckles. Because several motions can be present within an image block, as well as regions with no motion (in the case of speckle images: holes, cracks or borders of the specimen), and normal least-squares methods average the motion over all pixels of the block, the weight matrix W can be used to increase the robustness of the solution by giving less importance to pixels that do not fit the overall motion. This is done by first calculating the displacement between the blocks I(Ω)(t) and I(Ω)(t + 1) with Eq. (9) using pixel weights equal to 1. I(Ω)(t + 1) is then warped to compensate for the calculated motion, and the absolute pixelwise difference between it and I(Ω)(t) is used to create the weight matrix in a way that weighs pixels in inverse proportion to the magnitude of the errors present at their respective locations:

ε(Ω) = |I(Ω)(t) − Ĩ(Ω)(t + 1)| ,    (10)

w(Ω)(x, y) = { 1                  if ε(Ω)(x, y) ≤ m
            { m / ε(Ω)(x, y)     if ε(Ω)(x, y) > m ,    (11)

where Ĩ(Ω)(t + 1) is the motion-compensated I(Ω)(t + 1), x and y are pixel coordinates inside the domain Ω, ε(Ω)(x, y) and w(Ω)(x, y) are the error and the weight corresponding to the pixel located at (x, y), and m is the median of ε(Ω). Once the weight matrix for a block has been determined, the motion is recalculated using the updated weight values.

The Taylor expansion of the intensity constancy constraint in Eq. (1) limits the measurable displacement range to approximately one pixel in both the x and y directions, with accuracy decreasing sharply for larger displacements. To overcome this limitation, a multiresolution scheme has been implemented using a two-level Gaussian pyramid for each two consecutive speckle images I(x, y, t) and I' = I(x + dx, y + dy, t + 1) in the sequence. Displacements between the blocks of I and I' are first calculated using Eq. (9) at the second, coarser level and then used to create, through bicubic interpolation, a motion-compensated I' at the finer level, Ĩ'. Optical flow is recalculated between I and Ĩ' and the displacements from both levels are added.
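As a concrete illustration, the weighted least-squares solve of Eq. (9) and the median-based reweighting of Eqs. (10) and (11) can be sketched as follows. This is a minimal NumPy sketch, not the paper's implementation: the derivative kernels are an assumption (`np.gradient` on the frame average), and the solve minimizes the residual of Eq. (4), which introduces a minus sign on the right-hand side relative to the notation of Eq. (5).

```python
import numpy as np

def affine_flow(block_t, block_t1, weights=None):
    """One weighted least-squares solve for p = [ax, bx, cx, ay, by, cy].

    Minimizes sum of w^2 * (Ix*dx + Iy*dy + It)^2 over the block, i.e. the
    normal equations of Eq. (5) with the residual of Eq. (4).
    """
    n = block_t.shape[0]
    y, x = np.mgrid[0:n, 0:n].astype(float)
    Iy, Ix = np.gradient(0.5 * (block_t + block_t1))  # spatial derivatives
    It = (block_t1 - block_t).ravel()                 # Eq. (8)
    Ix, Iy, x, y = Ix.ravel(), Iy.ravel(), x.ravel(), y.ravel()
    A = np.column_stack([x * Ix, y * Ix, Ix, x * Iy, y * Iy, Iy])  # Eq. (6)
    w2 = np.ones_like(It) if weights is None else weights.ravel() ** 2
    M = A.T @ (w2[:, None] * A)
    return np.linalg.solve(M, -A.T @ (w2 * It))       # Eq. (9)

def robust_weights(block_t, block_t1_warped):
    """Median-based weights of Eq. (11) from the residual of Eq. (10)."""
    err = np.abs(block_t - block_t1_warped)           # Eq. (10)
    m = np.median(err)
    w = np.ones_like(err)
    w[err > m] = m / err[err > m]
    return w
```

A second call to `affine_flow` with the weights produced by `robust_weights` implements the reweighted solve described above.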

3. TEMPORAL AND SPATIAL PROCESSING

The method presented in Section 2, and optical flow algorithms in general, are very sensitive to lighting variations, image noise and the discretization associated with differentiation and digital cameras. The nature of the resulting errors is therefore hard to estimate unless a ground truth measure is present. This section presents the models for the temporal and spatial filters applied to the motion field components that aim at minimizing the optical flow errors; the motivation behind their choice, the related assumptions and the implementation details are given in Section 4.

3.1 The Temporal Kalman Filter

Considering only the motion vectors from a fixed spatial position Ω in the motion vector field sequence, with their angles and uncertainty covariances over time, a simple angle state model is proposed:

Θ_{t+1} = Θ_t + ξ1 ,    (12)

Ψ_t = H Θ_t + ξ2 ,    (13)

where H = 1, Θ_t and Ψ_t are the ideal and calculated vector angles, and ξ1, ξ2 are the process and observation noise. The noise variables are assumed to be independent of each other, with Gaussian distributions of zero mean and covariances Q and R respectively.

At each time step t, assuming a previous angle estimate Θ̂_{t−1}, the a priori estimate of the angle at t, Θ̂_t^−, the signal error covariance P_t^− and the Kalman gain K_t are calculated:

Θ̂_t^− = Θ̂_{t−1} ,    (14)

P_t^− = P_{t−1} + Q ,    (15)

K_t = P_t^− H^T (H P_t^− H^T + R)^{−1} .    (16)

After the motion vector angle at step t, Ψ_t, is obtained, the a posteriori estimate of the angle, Θ̂_t^+, is calculated and the error covariance updated:

Θ̂_t^+ = Θ̂_t^− + K_t (Ψ_t − H Θ̂_t^−) ,    (17)

P_t = P_t^− − K_t H P_t^− .    (18)

The a posteriori estimate of the angle serves as the a priori guess for the next temporal step and as input for the spatial filter, while the error covariance contributes to the next Kalman gain that tries to minimize it. The manner in which each calculated angle Ψ_t affects the final estimate Θ̂_t^+ through the Kalman gain depends on the error and noise covariances P and R respectively: small values of the a priori error covariance P_t^− give more importance to the a priori estimate and less to the actual calculated value, while small values of the measurement noise covariance generate the opposite behavior.
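In scalar form (H = 1) the recursion of Eqs. (14)-(18) reduces to a few lines. The sketch below tracks the angle at one spatial position; the initialisation values are purely illustrative, not the scheme described later in Section 4.2.

```python
class AngleKalman:
    """Scalar Kalman filter of Eqs. (12)-(18) with H = 1."""

    def __init__(self, theta0, p0, q, r):
        self.theta = theta0  # current angle estimate
        self.p = p0          # current error covariance
        self.q = q           # process noise variance Q
        self.r = r           # observation noise variance R

    def step(self, psi):
        """Fuse one measured angle psi; return the a posteriori estimate."""
        theta_prior = self.theta              # Eq. (14)
        p_prior = self.p + self.q             # Eq. (15)
        k = p_prior / (p_prior + self.r)      # Eq. (16)
        self.theta = theta_prior + k * (psi - theta_prior)  # Eq. (17)
        self.p = (1.0 - k) * p_prior          # Eq. (18)
        return self.theta

# Feeding a constant measured angle drives the estimate toward it.
kf = AngleKalman(theta0=0.0, p0=1.0, q=0.05, r=1.0)
for _ in range(100):
    est = kf.step(90.0)
print(round(est, 1))  # → 90.0
```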

[Figure 1: block diagram of the processing chain, from the speckle image sequence through motion estimation to the motion vector field and the angle estimate for the current step.]

Figure 1. Filtering process flow. M.E - Motion Estimation; S.M.F - Spatial Modulus Filter; K.A.F - Kalman Angle Filter; S.A.F - Spatial Angle Filter (same as S.M.F).

3.2 Spatial Filtering

If the blocks used to calculate the optical flow are chosen small and close enough, both the amplitudes and the angles of neighboring motion vectors can be assumed to be linearly related. Considering the angles of an N × N neighborhood in the motion field affected by additive zero-mean noise, the following adaptive filter formulation, based on the spatial 2D Wiener filter [27], can be expressed:

Θ_f = µ_Θ + ((σ_Θ^2 − υ_Θ^2) / σ_Θ^2) (Θ_L − µ_Θ) ,    (19)

with µ_Θ and σ_Θ^2 the mean and variance calculated around each motion vector angle of the neighborhood and υ_Θ^2 = (1/N^2) Σ σ_Θ^2 the neighborhood noise variance estimate. The novelty of the filter resides in the term Θ_L, which is calculated as follows: a linear least-squares fit is done within a 3 × 3 angle neighborhood on the horizontal, vertical and two oblique directions. The four fitted values located at the center position are then weighted in direct proportion to how close each of them is to the other fitted values on the corresponding direction, and added:

Θ_L = Σ_{i=1..4} P_i Θ̂_i ,    (20)

with the weighting factor P_i:

P_i = (1 − α_i/α_M) / Σ_{i=1..4} (1 − α_i/α_M) ,    (21)

where α_M = 90 and α_i represents the degree of dissimilarity between the fitted values on one direction, with values between 0 for identical values and 90 for strong dissimilarities between the three values. In a graphical interpretation, α_i is the angle made by the fitted linear function with the x cartesian coordinate axis. The placement of the spatial and temporal filters within the processing framework can be observed in Figure 1.
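The local-statistics part of Eq. (19) can be sketched as below. This is a simplified sketch: the paper's directionally fitted term Θ_L is replaced by the raw center sample (i.e. the classic adaptive Wiener form), and the filter operates on the valid interior without padding.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def adaptive_wiener(field, n=3):
    """Adaptive 2D Wiener filter underlying Eq. (19).

    Local mean and variance are computed over n x n neighbourhoods; the
    noise variance is estimated as the mean of the local variances.
    Returns a field of shape (H - n + 1, W - n + 1).
    """
    win = sliding_window_view(field, (n, n))
    mu = win.mean(axis=(2, 3))                 # local mean
    var = win.var(axis=(2, 3))                 # local variance
    noise = var.mean()                         # noise variance estimate
    center = field[n // 2 : field.shape[0] - (n - 1) // 2,
                   n // 2 : field.shape[1] - (n - 1) // 2]
    gain = np.where(var > noise, (var - noise) / np.maximum(var, 1e-12), 0.0)
    return mu + gain * (center - mu)

rng = np.random.default_rng(1)
angles = 30.0 + rng.normal(0.0, 2.0, size=(64, 64))  # noisy angle field
print(adaptive_wiener(angles).std() < angles.std())  # noise is reduced
```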


4. RESULTS

4.1 Artificial Deformation Tests

It is usually very difficult to verify the reliability of the motion vectors calculated by the various optical flow techniques if no ground truth measures are used. In most cases, synthetic computer-generated frame sequences for which the displacements are known are used to evaluate the performance of the algorithms. The same approach can be adopted when evaluating DIC algorithms [14, 16]: artificial speckle images are generated and then deformed through some form of interpolation, thus obtaining a relatively reliable ground truth. The disadvantage of such methods is that they rely upon speckle patterns that may be hard to reproduce qualitatively in real environments, reducing the evaluation of the algorithm to a purely theoretical study. For this reason, the artificial displacements in this paper are applied to an image from the sequence that contains actual mechanical deformation of the material. A speckle image of size 1000×1000 pixels, as in Figure 2, is deformed

Figure 2. Detail of the speckle image used in the synthetic deformation tests.

using bicubic interpolation according to the following displacement models: translation in one direction (vertical and horizontal translations are equivalent) of 0.1, 0.3, 0.5, 0.7 and 0.9 pixels; translations in two directions with equal horizontal and vertical displacements of 0.1, 0.3, 0.5, 0.7 and 0.9 pixels; a linear displacement model as in Eqs. (2) and (3); and a quadratic displacement model as in Eqs. (22) and (23). The motion is calculated in each of the test cases using blocks of sizes 8×8, 16×16 and 32×32 and a fixed step size between blocks of 8 pixels.

dxx(x, y) = ex x^2 + fx y^2 + gx xy + ax x + bx y + cx ,    (22)

dyy(x, y) = ey x^2 + fy y^2 + gy xy + ay x + by y + cy ,    (23)

with the displacement parameters from Table 1:

Table 1. Parameters used in the synthetic image deformation.

Displacement     | ex/ey | fx/fy | gx/gy | ax/ay | bx/by  | cx/cy
Linear (dx)      |   -   |   -   |   -   | −1e-5 | 2.7e-4 |  0.5
Linear (dy)      |   -   |   -   |   -   | 6e-4  | 0      |  0.3
Quadratic (dxx)  | −1e-8 | 1e-6  | −5e-7 | 0     | 0      | −0.2
Quadratic (dyy)  | 2e-7  | −1e-7 | −1e-7 | 1e-6  | 0      |  0.4
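The synthetic deformation step can be sketched as below. Cubic spline interpolation (`order=3` in `scipy.ndimage.map_coordinates`) stands in for the paper's bicubic scheme; the function name is illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_quadratic(image, ex, fx, gx, ax, bx, cx, ey, fy, gy, ay, by, cy):
    """Deform an image with the quadratic model of Eqs. (22)-(23).

    Each output pixel (x, y) samples the source at (x - dxx, y - dyy), so
    the warped frame satisfies I(x + d, t + 1) = I(x, t) as in Eq. (1).
    """
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    dxx = ex * x**2 + fx * y**2 + gx * x * y + ax * x + bx * y + cx
    dyy = ey * x**2 + fy * y**2 + gy * x * y + ay * x + by * y + cy
    return map_coordinates(image, [y - dyy, x - dxx], order=3, mode='nearest')
```

With the quadratic and affine coefficients set to zero this reduces to the pure translation test cases; with only the a, b, c terms nonzero it produces the linear model of Eqs. (2) and (3).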


Because the displacements in the theoretical motion field depend upon each pixel's coordinates, the resulting theoretical displacement field has the same resolution as the image. As a direct consequence, a comparison between it and the optical flow result is not possible, because optical flow is calculated on blocks of the image. Therefore, the theoretical motion field's size is reduced by partitioning it identically to the optical flow algorithm and averaging the motions inside each of the blocks. For graphical illustration of the results, three representative test cases are shown: horizontal and vertical translations of 0.5 pixels (Figure 3), the linear (Figure 4) and the quadratic displacements (Figure 5). A first and straightforward conclusion is that increasing the size of the block on which motion is calculated improves accuracy. The high angle and modulus variations present when using 8×8 blocks may in great proportion be attributed to the large, 2 to 10 pixel, speckle diameter range [14] and to small lighting variations across the specimen's surface and between frames. As the window size increases so does the accuracy, with the disadvantage of possibly oversmoothing discontinuities in the motion field. The accuracy improvement is briefly exemplified in Table 2.

Table 2. Block size influence on the mean (µ) and standard deviation (σ) of the optical flow errors in five test cases: horizontal translation of 0.1 and 0.5 pixels, horizontal and vertical translation of 0.1 and 0.5 pixels, and the linear motion model. µ and σ are expressed in degrees for the angles and pixels for the modulus.

                   | Angles [σ/µ] [°]                          | Modulus [σ/µ] [×10^−2 pixels]
Displacement       | 8×8          | 16×16        | 32×32        | 8×8       | 16×16      | 32×32
Tr. 0.1 pixels     | 5.039/−0.183 | 1.804/−9e-3  | 0.829/4e-3   | 1/−3e-2   | 0.37/−0.1  | 0.16/−0.13
Tr. 0.5 pixels     | 3.521/−0.35  | 0.984/−0.393 | 0.365/2e-4   | 4.2/−1.9  | 1.7/−3.1   | 0.75/−3.1
Tr. 0.1×0.1 pixels | 4.48/0.21    | 1.51/0.094   | 0.648/−0.029 | 1.1/0.047 | 0.37/−0.11 | 0.16/−0.17
Tr. 0.5×0.5 pixels | 5.65/0.41    | 1.45/0.107   | 0.621/−0.185 | 6.7/0.18  | 1.8/−2.7   | 0.78/−3.1
Linear             | 4.442/1.578  | 1.633/1.572  | 0.961/1.586  | 7/1.6     | 2.8/0.6    | 1.93/0.61
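The reduction of the dense theoretical field to the optical-flow grid described above amounts to block-wise averaging; a minimal sketch follows (non-overlapping tiles are assumed here, whereas the paper's 8-pixel step with 16×16 or 32×32 blocks produces overlapping blocks).

```python
import numpy as np

def block_average(field, block=8):
    """Average a dense per-pixel displacement field over non-overlapping
    block x block tiles, matching the partition of the optical flow grid."""
    h, w = field.shape
    h2, w2 = h - h % block, w - w % block          # crop to full tiles
    tiles = field[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    return tiles.mean(axis=(1, 3))

demo = np.arange(16, dtype=float).reshape(4, 4)
print(block_average(demo, block=2))   # 2x2 grid of tile means
```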

Figure 3. Angle (upper figures) and modulus (lower figures) fields for the 0.5 pixel horizontal and vertical translation calculated with b) 8×8, c) 16×16 and d) 32×32 pixel blocks, with the theoretical fields in a).

Analysis of the angle and modulus errors showed that if linear or translational movement is present with

Figure 4. Angle (upper figures) and modulus (lower figures) fields for the linear displacements calculated with b) 8×8, c) 16×16 and d) 32×32 pixel blocks, with the theoretical fields in a).

Figure 5. Angle (upper figures) and modulus (lower figures) fields for the quadratic displacements calculated with b) 8×8, c) 16×16 and d) 32×32 pixel blocks, with the theoretical fields in a).

the amplitudes in each direction lower than 0.7 pixels when using 8×8 blocks and 0.9 pixels for the 16×16 or 32×32 pixel blocks, the errors are normally distributed. For larger motions or more complicated models such as the quadratic one, the errors may not be Gaussian. This may be due to the presence of systematic errors that arise from the motion model not fully corresponding to the deformation function [28], from the weight matrix W excluding valid data points from contributing to the motion calculation, and from the multiresolution approach, which is highly sensitive to the quality of the initial, coarse-grid motion compensation. Figure 6 shows that in the case of the 0.5 pixel horizontal and vertical translations the angles clearly fit a normal probability distribution,

Figure 6. Normal cumulative probability plots of angle errors for the horizontal and vertical 0.5 pixel displacement (upper row) and the quadratic model displacement (lower row) calculated with a) 8×8, b) 16×16 and c) 32×32 blocks. The dotted line represents the ideal normal distribution fit through the data points.

with the quality of the fit increasing with the block size, while in the case of the quadratic displacement model the hypothesis of normally distributed errors holds only for the 8×8 blocks. The spatial errors in the motion fields can in most cases be considered additive zero-mean noise with a variance that varies as a function of the displacement magnitude, substantiating the hypothesis behind the use of the spatial filter. The normal nature of the errors also motivates the use of the Kalman filter as an optimal temporal estimator for the angles. This is based on the observation that angles at a fixed image position do not vary

Figure 7. Filtering results calculated using 8×8 blocks: a) detail (lower left area) of the original motion vector field, b) Kalman filter output, c) spatial angle filter output, d) spatial angle and modulus filter output.

significantly over short periods of time once noise is taken into account, reducing the temporal statistics of the angle variations to spatial ones: the motion is calculated at each temporal step between different blocks that belong to the same frame, with the resulting angle values varying linearly or being quasi-constant.
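The normality assessment illustrated in Figure 6 can be reproduced in outline with a probability plot; the error values below are synthetic stand-ins, not the paper's measured data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
angle_errors = rng.normal(0.0, 0.6, size=4096)   # synthetic stand-in data

# probplot orders the sample against theoretical normal quantiles and
# fits a line; a correlation r near 1 indicates a good normal fit.
(osm, osr), (slope, intercept, r) = stats.probplot(angle_errors, dist="norm")
print(r > 0.99)
```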

4.2 Real Deformation Tests

For the real deformation tests, the algorithm described in Sections 2 and 3 was used to calculate and process the optical flow for a 300-frame sequence showing a strip of plastic material that underwent uniaxial physical stress in an upward direction. Optical flow was calculated between each two consecutive frames using 8×8, 16×16 and 32×32 pixel blocks, and each resulting motion vector field was then filtered using the temporal and spatial filters. Before the optical flow calculation the images are smoothed with a Gaussian low-pass filter, which has been shown to improve the accuracy of the algorithm by eliminating noise and higher frequency components [19, 22]. It was determined empirically that higher accuracy is obtained with kernels larger than 10×10 pixels and small standard deviations (σ = 1 to 1.5 pixels). In the initialization stage of the Kalman filter, the first T motion fields remain unfiltered. The first a priori estimate Θ̂_T, used when temporal smoothing starts, is the mean of the first T calculated angles, the observation noise variance is R = Ψ_{T+1} − Θ̂_T and the error covariance is P_T = 0. A value of T larger than 10 is required for a good initial angle estimate; a value of 12 was used in the actual implementation. The choice of the process noise variance Q depends on the smoothness assumptions made about the signal, with lower values, signifying lower angle variations between time steps, leading to heavier smoothing. In the experiments presented in this paper, a constant value of 5 was used for all angles. The observation noise variance R is subsequently taken as the variance of the last 12 unfiltered vector angles and varies from one spatial position to another.
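The pre-smoothing kernel and the Kalman initialisation quantities described above can be sketched as follows. Note the hedge on R: the paper states R = Ψ_{T+1} − Θ̂_T for the very first step, while the variance-based value below follows the running rule used afterwards.

```python
import numpy as np

def gaussian_kernel(size=11, sigma=1.2):
    """Separable Gaussian low-pass kernel (Section 4.2 recommends sizes
    above 10x10 with sigma between 1 and 1.5 pixels)."""
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    kernel = np.outer(g, g)
    return kernel / kernel.sum()

def init_kalman(angles, T=12):
    """Initial prior, observation noise variance and error covariance from
    the first T unfiltered angles at one spatial position."""
    theta0 = float(np.mean(angles[:T]))  # mean of first T angles
    r = float(np.var(angles[:T]))        # running rule; first-step R differs
    return theta0, r, 0.0                # (prior, R, P_T)
```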


Figure 8. Filtering results calculated using 16×16 blocks: a) detail (lower left area) of the original motion vector field, b) Kalman filter output, c) spatial angle filter output, d) spatial angle and modulus filter output.

Figures 7, 8 and 9 show a detail of the motion vector fields calculated between frames 30 and 31 of the sequence using 8×8, 16×16 and 32×32 blocks, together with the output of the subsequent temporal and spatial filtering. It can be noticed that in all cases temporal filtering improved the spatial consistency of the angle field by relying only on data from fixed spatial positions, with further spatial filtering refining the results. Also clearly noticeable is that when using 32×32 blocks the effect of spatial smoothing is less visible, because most of the errors are already eliminated by the use of larger blocks in combination with the Kalman filter.

5. CONCLUSIONS

In this paper, a filtering technique for the polar components of optical flow motion vectors was presented. First, displacements between speckle images that were artificially deformed were calculated in order to evaluate the nature of the errors that occur in the process. The speckle images were deformed using bicubic interpolation according to the desired motion models and, after motion estimation, error statistics were extracted to motivate the use of the filters in the real deformation tests. The filters were then applied to a 300-frame sequence showing a plastic specimen being deformed, with the final results consistent with the general assumptions about the real displacement field.

REFERENCES

[1] Sutton, M. A., Wolters, W. J., Peters, W. H., and McNeill, S. R., “Determination of displacements using an improved digital correlation method,” Image and Vision Computing 1, 133–139 (August 1983).
[2] Chu, T. C., Ranson, W. F., and Sutton, M. A., “Applications of digital-image-correlation techniques to experimental mechanics,” Experimental Mechanics 25, 232–244 (September 1985).


Figure 9. Filtering results calculated using 32×32 blocks: a) detail (lower left area) of the original motion vector field, b) Kalman filter output, c) spatial angle filter output, d) spatial angle and modulus filter output.

[3] Sutton, M. A., McNeill, S. R., Jang, J., and Babai, M., “Effects of subpixel image restoration on digital correlation error estimates,” Optical Engineering 27, 870–877 (October 1988).
[4] Zhang, D., Zhang, X., and Cheng, G., “Compression strain measurement by digital speckle correlation,” Experimental Mechanics 39, 62–65 (March 1999).
[5] Hung, P.-C. and Voloshin, A., “In-plane strain measurement by digital image correlation,” J. Braz. Soc. Mech. Sci. and Eng. 25, 215–221 (September 2003).
[6] Wattrisse, B., Chrysochoos, A., Muracciole, J.-M., and Némoz-Gaillard, M., “Analysis of strain localization during tensile tests by digital image correlation,” Experimental Mechanics 41, 29–39 (March 2001).
[7] Chen, D. J., Chiang, F.-P., Tan, Y. S., and Don, H. S., “Digital speckle-displacement measurement using a complex spectrum method,” Applied Optics 32(11), 1839–1849 (1993).
[8] Oriat, L. and Lantz, E., “Subpixel detection of the center of an object using a spectral phase algorithm on the image,” Pattern Recognition 31(6), 761–771 (1998).
[9] Amodio, D., Broggiato, G. B., Campana, F., and Newaz, G. M., “Digital speckle correlation for strain measurement by image analysis,” Experimental Mechanics 43, 396–402 (December 2003).
[10] Bruck, H. A., McNeill, S. R., Sutton, M. A., and Peters III, W. H., “Digital image correlation using Newton-Raphson method of partial differential correction,” Experimental Mechanics 29, 261–267 (September 1989).
[11] Vendroux, G. and Knauss, W. G., “Submicron deformation field measurements: Part 2. Improved digital image correlation,” Experimental Mechanics 38, 86–92 (June 1998).
[12] Lu, H. and Cary, P. D., “Deformation measurements by digital image correlation: Implementation of a second-order displacement gradient,” Experimental Mechanics 40, 393–400 (December 2000).
[13] Davis, C. Q. and Freeman, D. M., “Statistics of subpixel registration algorithms based on spatiotemporal gradients or block matching,” Optical Engineering 37, 1290–1298 (April 1998).


[14] Zhang, J., Jin, G., Ma, S., and Meng, L., “Application of an improved subpixel registration algorithm on digital speckle correlation measurement,” Optics and Laser Technology 35, 533–542 (October 2003).
[15] Lucas, B. D. and Kanade, T., “An iterative image registration technique with an application to stereo vision,” in [Proceedings of the 7th International Joint Conference on Artificial Intelligence], 674–679 (1981).
[16] Bing, P., Hui-min, X., Bo-qin, X., and Fu-long, D., “Performance of sub-pixel registration algorithms in digital image correlation,” Measurement Science and Technology 17, 1615–1621 (May 2006).
[17] Black, M. J. and Anandan, P., “The robust estimation of multiple motions: Parametric and piecewise-smooth flow fields,” Computer Vision and Image Understanding 63, 75–104 (January 1996).
[18] Kim, Y.-H., Martínez, A. M., and Kak, A. C., “Robust motion estimation under varying illumination,” Image and Vision Computing 23, 365–375 (April 2005).
[19] Bruhn, A., Weickert, J., and Schnörr, C., “Lucas/Kanade meets Horn/Schunck: Combining local and global optic flow methods,” International Journal of Computer Vision 61(3), 211–231 (2005).
[20] Fleet, D. J. and Langley, K., “Recursive filters for optical flow,” IEEE Transactions on Pattern Analysis and Machine Intelligence 17(1), 61–67 (1995).
[21] Black, M. J. and Anandan, P., “Robust dynamic motion estimation over time,” in [IEEE Conference on Computer Vision and Pattern Recognition], 296–302 (1991).
[22] Barron, J. L., Fleet, D. J., and Beauchemin, S., “Performance of optical flow techniques,” International Journal of Computer Vision 12(1), 43–77 (1994).
[23] Burt, P. J. and Adelson, E. H., “The Laplacian pyramid as a compact image code,” IEEE Transactions on Communications COM-31(4), 532–540 (1983).
[24] Grossmann, E. and Santos-Victor, J., “Performance evaluation of optical flow estimators: Assessment of a new affine flow method,” Robotics and Autonomous Systems 21, 69–82 (July 1997).
[25] Bergen, J. R., Burt, P. J., Hingorani, R., and Peleg, S., “A three-frame algorithm for estimating two-component image motion,” IEEE Transactions on Pattern Analysis and Machine Intelligence 14(9), 886–896 (1992).
[26] Behar, V., Adam, D., Lysyansky, P., and Friedman, Z., “Improving motion estimation by accounting for local image distortion,” Ultrasonics 43, 57–65 (October 2004).
[27] Lim, J. S., [Two-Dimensional Signal and Image Processing], Prentice Hall (1990).
[28] Yaofeng, S. and Pang, J. H. L., “Study of optimal subset size in digital image correlation of speckle pattern images,” Optics and Lasers in Engineering 45, 967–974 (September 2007).

