Nonlinear inverse model for velocity estimation from an image sequence

JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 116, C06015, doi:10.1029/2010JC006924, 2011

Nonlinear inverse model for velocity estimation from an image sequence

Wei Chen1

Received 29 December 2010; revised 22 March 2011; accepted 4 April 2011; published 22 June 2011.

[1] Velocity estimation from an image sequence is one of the most challenging inverse problems in computer vision, geosciences, and remote sensing applications. In this paper a nonlinear model has been created for estimating the motion field under the constraint of conservation of intensity. The linear differential form of the heat or optical flow equation is replaced by a nonlinear temporal integral form of the intensity conservation constraint equation. Iterative equations with Gauss‐Newton and Levenberg‐Marquardt algorithms are formulated based on the nonlinear equations, velocity field modeling, and a nonlinear least squares model. An algorithm with progressive relaxation of the overconstraint to improve the performance of the velocity estimation is also proposed. The new estimator is benchmarked using a numerical simulation model. Both angular and magnitude error measurements based on the synthetic surface heat flow from the numerical model demonstrate that the performance of the new approach with the nonlinear model is much better than the results obtained using a linear model of the heat or optical flow equation. Four sequences of NOAA Advanced Very High Resolution Radiometer (AVHRR) images taken in the New York Bight are also used to demonstrate the performance of the nonlinear inverse model, and the estimated velocity fields are compared with those measured with the Coastal Ocean Dynamics Radar array. The experimental results indicate that the nonlinear inverse model provides significant improvement over the linear inverse model for real AVHRR data sets. Citation: Chen, W. (2011), Nonlinear inverse model for velocity estimation from an image sequence, J. Geophys. Res., 116, C06015, doi:10.1029/2010JC006924.

1 Remote Sensing Division, Naval Research Laboratory, Washington, D. C., USA.

This paper is not subject to U.S. copyright. Published in 2011 by the American Geophysical Union.

1. Introduction

[2] Velocity estimation plays an important role in a number of ocean‐related activities and river dynamic studies. The monitoring, understanding, and measurement of fluid flows are a major scientific issue in environmental sciences and geosciences such as oceanography and climatology. Direct measurement of surface velocity requires a large amount of in situ data collected by ships or current meters and has been a continuing problem in oceanography. Velocity estimation based on successive satellite‐borne or airborne images is therefore being used more extensively to extract information on the surface circulation of selected ocean or river areas.

[3] Velocity estimation from an image sequence observed or recorded by different physical sensors is a well‐known general inverse problem in the fields of computer vision, geosciences, and remote sensing. It has been a fundamental component for many applications such as velocity or displacement estimation from motion, object tracking or recognition, medical image registration, advanced video editing, and surface currents for oceanic dynamics studies. Oceanographers and computer scientists address the general problem of apparent motion from thermal or optical (visible band imagery) flow structures in image sequences and utilize the conservation constraint of heat or optical flow to estimate the motion field. The pioneering works for solving such inverse problems by heat or optical flow equations [cf. Horn and Shunck, 1981; Lucas, 1984; Lucas and Kanade, 1981; Kelly, 1989] or feature tracking algorithms [cf. Leese and Novak, 1971; Emery et al., 1986], known as the differential techniques or the Maximum Cross Correlation (MCC) method, can be found in the literature. Scientists in these very active fields have focused their efforts on extending these methods [cf. Barron et al., 1994; Black and Anandan, 1996; Bruhn et al., 2005; Papenberg et al., 2006; Mercatini et al., 2010; Garcia and Robinson, 1989; Kamachi, 1989; Tokmakian et al., 1990; Emery et al., 1992; Holland and Yan, 1992; Wu, 1995; Wu and Pairman, 1995; Wu et al., 1992; Flores et al., 1995; Borzelli et al., 1999; Domingues et al., 2000; Prasad et al., 2002; Bowen et al., 2002; Moulin, 2004; Alberotanza and Zandonella, 2004; Dransfeld et al., 2006; Crocker et al., 2007; Matthews and Emery, 2009].

[4] Unfortunately, the differential techniques with the heat or optical flow equation are very sensitive to noise and not good at estimating large displacement vectors over a longer

temporal range with remote sensing image data. Marcello et al. [2008] performed evaluation and detailed studies of popular motion estimation techniques used in the computer vision field for tracking oceanographic thermal structures. Their work indicates that Lucas and Kanade [1981] and Lucas [1984] is the only differential approach tested that provided a reasonable error performance in tracking geophysical flow motion, whereas Black and Anandan [1996] achieved only angular accuracy. The most popular MCC algorithms [Emery et al., 1986] with remote sensing data produce acceptable results.

[5] An alternate strategy was proposed by Chen et al. [2008] to solve the inverse problem by representing the optical or heat flow with a bilinear motion field model (Lucas [1984] and Lucas and Kanade's [1981] method can be considered as a zero‐order special case). The velocity is chosen as an optimal fit to the heat flow equation and is thus globally valid over the image domain. Simultaneous solutions for this field and the velocity yield a Global Optimal Solution (linear model of the GOS). A GOS of higher‐order continuity, in which the velocity field is expanded by surface B‐Splines functions to solve the optical or heat flow equation, was also developed by Chen [2010].

[6] Nevertheless, the GOS algorithms using the heat or optical flow equation are derived from a differential form of the conservation constraint (Taylor first‐order expansion) and hold only for an infinitesimal motion. Therefore the motion estimation based on the heat or optical flow equation can be obtained successfully for small displacement motion only. To improve the performance of the velocity estimation from an image sequence, we utilize a temporal integral form of the heat or optical flow conservation constraint equation to replace the differential form and create a nonlinear system in this paper for velocity estimation.

[7] When using the nonlinear model to deal with a problem of global nonlinear minimization with a huge number of unknown parameters, there exist numerous local minima in image data applications. To avoid obtaining a local minimum solution, an adaptive framework for finding a global optimal solution with an iteration approach and an algorithm of progressive relaxation of the overconstraint is proposed.

[8] This paper is organized as follows. In section 2, a set of system equations with the nonlinear inverse model and iteration equations for a numerical solution are derived. Section 3 introduces algorithms that are applied to the design for this estimator. In section 4, we deal with the validation of the new algorithms by deriving velocity from synthetic tracer motion within a numerical ocean model and apply the new technique to four thermal data sets. Finally, conclusions are drawn in section 5.

Figure 1. A plot of a motion particle trajectory, instant velocities at times t1 and t2, and a displacement vector between the initial and final positions at times t1 and t2.

2. Nonlinear Model

2.1. Displacement and Velocity Fields

[9] The estimated motion vector from two successive images is a displacement vector (or an average velocity over the time interval) between the initial and final positions of a particle in the scene, as shown in Figure 1. Over a longer temporal range, a realistic particle may have a very complex motion trajectory, as shown in Figure 1. Direct velocity measurements recorded at an instant, or close to an instant, may have a very different direction and magnitude from the estimated velocity (v = Δr/Δt) shown in Figure 1 because, in general, the estimated velocity is not equal to the instant velocity:

\[ \mathbf{v} = \frac{\Delta\mathbf{r}}{\Delta t} \neq \lim_{\Delta t \to 0}\frac{\Delta\mathbf{r}}{\Delta t}, \]

where Δr and Δt are a displacement vector and a time difference. We can only estimate the mean value of the velocity between two points of the motion during the time from t to t + Δt, because the observed motion in our case is determined only by the initial and final configurations from two successive images. All information about the motion, such as the changing rate and the path of a particular physical particle in the scene, is lost, as shown in Figure 1. In the ideal case, the velocity field changes slowly in the time interval, or the temporal range is short enough, such that the average velocity is close to an instant velocity and the conservation constraint of the intensity holds.

[10] The motion field is estimated from successive AVHRR images or ocean‐color imagery spaced several hours apart, and the field is relevant to the important question of surface (or near‐surface) drift velocity. One application would be the drift of pollution (e.g., an oil plume).

2.2. Conservation Constraint of Heat or Optical Flow

[11] The image intensity I(r, t), which can be measured by brightness (optical flow from visible band images) or temperature (heat flow from thermal images), is a scalar function of the position coordinates r(t) and time t. If the intensity of heat or optical flow is conserved, then we have the equation under the conservation constraint [Horn and Shunck, 1981; Kelly, 1989]:

\[ \frac{dI(\mathbf{r}(t), t)}{dt} = 0, \qquad (1) \]

where the operator d/dt denotes a total derivative with respect to the time t. The differential form of the (tracer) conservation constraint equation (1) (i.e., the heat or optical flow equation) with the material derivative operator can be written as




\[ \left( \frac{\partial}{\partial t} + \mathbf{v}\cdot\nabla \right) I(\mathbf{r}(t), t) = 0. \qquad (2) \]
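As a quick illustration of how the linear constraint (2) can be evaluated on gridded data, the sketch below computes the residual ∂I/∂t + u ∂I/∂x + v ∂I/∂y from two frames with simple finite differences. It is only a minimal sketch under stated assumptions (uniform pixel grid, temporal derivative approximated by a frame difference); it is not the paper's implementation, and the function and variable names are illustrative.

```python
import numpy as np

def linear_flow_residual(I1, I2, u, v, dt=1.0):
    """Residual of the heat/optical flow equation (2), evaluated pointwise.

    I1, I2 : 2-D intensity frames at times t1 and t2 (same shape).
    u, v   : horizontal and vertical velocity components (pixels per unit time).
    dt     : time separation between the frames.
    """
    It = (I2 - I1) / dt          # temporal derivative (forward difference)
    Iy, Ix = np.gradient(I1)     # spatial gradients; np.gradient returns (d/drow, d/dcol)
    return It + u * Ix + v * Iy  # zero where the constraint is satisfied

# toy check: a pattern translating by 1 pixel per frame in x
x, y = np.meshgrid(np.arange(64), np.arange(64))
I1 = np.sin(0.2 * x) * np.cos(0.3 * y)
I2 = np.sin(0.2 * (x - 1.0)) * np.cos(0.3 * y)
r = linear_flow_residual(I1, I2, u=np.ones_like(I1), v=np.zeros_like(I1))
print(float(np.abs(r[2:-2, 2:-2]).mean()))  # small, but nonzero, because the motion is finite
```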


Figure 2. (a) The node points (labeled in red and block size nx = ny = 4) of the bilinear approximation with indices p and q are mapped on an image scene. (b) Original image function (top 3‐D mesh plot with 29 × 29 pixels) and its bilinear approximation (bottom 3‐D mesh plot interpolated from 15 × 15 sampled original function) when block size nx = ny = 2. All function values of off‐node when i ≠ p(i) and j ≠ q( j) in the image plane can be calculated based on the sampled function values on node points when i = p(i) and j = q( j).

[12] The differential form of equation (2), which is linear in the components of the velocity, holds only for infinitesimal or nearly infinitesimal motions. Using the two successive frames, we can estimate the partial derivatives with respect to time in (2). But equation (2) is not exactly associated with the motion states provided by the two successive frames, especially for large‐scale displacement motion between the frames.

[13] In order to bind the two successive frames at times t1 and t2 with a conservation constraint system, and to create an equation that relates two states with path‐independent features for estimating large‐scale displacement motion, we integrate equation (1) from time t1 to t2:

\[ \int_{t_1}^{t_2} \frac{dI(\mathbf{r}(t), t)}{dt}\, dt = I(\mathbf{r}(t_2), t_2) - I(\mathbf{r}(t_1), t_1) \equiv 0, \]

where r(t1) and r(t2) are the position vectors at times t1 and t2. If a displacement vector field is defined by Δr = v (t2 − t1) as in Figure 1, where v = v(r(t1), t1) is the average velocity between t1 and t2, then the temporal integral form of the conservation constraint, known as the displaced frame difference (DFD), is given by

\[ \mathrm{DFD} = I(\mathbf{r}(t_1) + \mathbf{v}\,\Delta t,\; t_2) - I(\mathbf{r}(t_1),\; t_1) = 0, \qquad (3) \]

where Δt = t2 − t1. The conservation constraint equation (1) is thus integrated into the path‐independent equation (3). We must emphasize that although equation (3) is an implicit function of the velocity field, the two path‐independent terms in (3) correspond to the initial and final states of motion associated with the two successive frames for this conservation system. It is clear that employing the DFD equation (3), derived by integrating between times t1 and t2, for motion estimation can achieve higher accuracy in comparison with the heat or optical flow equation (2), especially for large‐scale displacement estimation.

[14] The intensity terms at times t1 and t2 in equation (3) can be calculated from the intensity fields of the image sequence. The problem is underconstrained, however, because two unknown velocity components must be derived from a single conservation statement (3) at each pixel point.

2.3. Bilinear Motion Field Model

[15] To solve the underconstrained problem for heat or optical flow fields by equation (3) from an image sequence, one efficient approach is to expand the velocity field as bilinear approximation functions or two‐dimensional B‐Splines functions [Chen et al., 2008; Chen, 2010]. To limit the computational complexity, we adopt the bilinear motion field model in this paper. The velocity fields u(x, y) and v(x, y) are represented as two‐dimensional bilinear functions controlled by a smaller number of velocity estimates u(p, q) and v(p, q) which lie on a coarser grid of node points, as shown in Figure 2.

[16] In general, any two‐dimensional function can be approximated by a Lagrange bilinear function


\[ f(x, y) = \sum_{\mu=0}^{1} \sum_{\nu=0}^{1} f(p + \mu n_x,\; q + \nu n_y)\, H_{p+\mu n_x,\, q+\nu n_y}(x, y), \qquad (4) \]


where the function H_{a,b}(x, y) is defined by

\[ H_{a,b}(x, y) = \frac{1}{n_x n_y} \begin{cases} (n_x - x + p)(n_y - y + q) & (a = p \cap b = q) \\ (x - p)(n_y - y + q) & (a = p + n_x \cap b = q) \\ (n_x - x + p)(y - q) & (a = p \cap b = q + n_y) \\ (x - p)(y - q) & (a = p + n_x \cap b = q + n_y), \end{cases} \qquad (5) \]

the block size parameters n_x and n_y are the sampling spacings of the function f in the x and y directions as shown in Figure 2, and the quantized indices p and q on nodes are functions of x and y, respectively. They are defined by

\[ \{p, q\} = \{p(x), q(y)\} = \left\{ \left\lfloor \frac{x}{n_x} \right\rfloor n_x,\; \left\lfloor \frac{y}{n_y} \right\rfloor n_y \right\}, \]

where ⌊ ⌋ denotes an integer (floor) operator.

[17] The two‐component velocity field on the pixels of an image can be approximated by the following discrete form of the bilinear approximation functions, with first‐order continuity, that holds globally over the whole Nx × Ny image:

\[ \mathbf{v}_{ij} = \sum_{\mu=0}^{1} \sum_{\nu=0}^{1} \mathbf{v}_{p+\mu n_x,\, q+\nu n_y}\, H_{p+\mu n_x,\, q+\nu n_y}(i, j), \qquad (6) \]

where a function with discrete variables i and j is denoted by v_ij = v(i, j). In a special case, v_ij ≡ v_pq for all indices i and j when the block size parameter n is unity (n = n_x = n_y = 1). When the block size parameter n is greater than unity, the bilinear assumption may still reproduce a physical velocity field exactly in some cases. For example, if the velocity field is constant or a linear function of the coordinates, then the assumption (6) is exactly the velocity field.

[18] All off‐node velocities can be calculated by equation (6) using the on‐node velocities, which are expressed as u_pq and v_pq. The DFD equation in (3) becomes

\[ \mathrm{DFD}_{ij} = I(i + u_{ij}\Delta t,\; j + v_{ij}\Delta t,\; t_2) - I_{ij}(t_1) = 0, \qquad (7) \]

where u_ij and v_ij are the two components of the velocity on each pixel with horizontal and vertical indices {i, j} over the whole image. The DFD in (7) is now a function of the two velocity components, which depend on the node point velocities in (6), i.e.,

\[ \mathrm{DFD}_{ij} = \mathrm{DFD}_{ij}(u_{ij}, v_{ij}) = \mathrm{DFD}_{ij}(u_{pq}, v_{pq}). \]

[19] The DFD equations (3) and (7) are equivalent only when the block size parameters are n_x = 1 and n_y = 1; in this case, p = i and q = j. All independent DFD_ij equations contain only the smaller number of independent velocities on nodes, where the velocity indices are i = p(i) and j = q(j). The total number of DFD equations is N = N_x × N_y for an N_x × N_y image sequence. The number of node points, as shown in Figure 2, is given by

\[ N_{node} = \left( \left\lfloor \frac{N_x - 1}{n_x} \right\rfloor + 1 \right) \left( \left\lfloor \frac{N_y - 1}{n_y} \right\rfloor + 1 \right). \]

[20] The total number of independent velocity values with the two components u_pq and v_pq is 2 × N_node. It is clear that this system is overconstrained, because the number of DFD equations in (7) for all pixels is greater than the number of independent velocity components u_pq and v_pq (i.e., N > 2 × N_node) if the block sizes are n_x > 1 and n_y > 1. We can solve the overconstrained system using the nonlinear least squares model described in the next section to estimate the velocity field.

2.4. Nonlinear Least Squares Model

[21] In order to solve the overconstrained system of equations (7), we construct a quantity to be minimized as a global cost function that covers the whole image scene, based on the least squares principle. The cost function, which is the sum of the total errors of the DFD in (7), is given by

\[ \epsilon^2 = \sum_{i,j} \mathrm{DFD}_{ij}^2, \]

where i and j run over all pixels in the N_x × N_y image (i ∈ [0, N_x − 1] ∩ j ∈ [0, N_y − 1]). Minimizing the cost function with the parameters u_kl and v_kl as variables, for given indices k and l on all node points of the image shown in Figure 2, yields a set of overconstrained system equations for the estimation of the velocity:

\[ \sum_{i,j} \mathrm{DFD}_{ij} \left( \frac{\partial\, \mathrm{DFD}_{ij}}{\partial u_{kl}},\; \frac{\partial\, \mathrm{DFD}_{ij}}{\partial v_{kl}} \right) = \sum_{i,j \in W_{kl}} \mathrm{DFD}_{ij} \left( \frac{\partial\, \mathrm{DFD}_{ij}}{\partial u_{kl}},\; \frac{\partial\, \mathrm{DFD}_{ij}}{\partial v_{kl}} \right) = 0 \quad \forall\; (n_x > 1 \cap n_y > 1). \qquad (8) \]

[22] To obtain equation (8), we have used

\[ \begin{cases} \dfrac{\partial\, \mathrm{DFD}_{ij}}{\partial u_{kl}} = \dfrac{\partial u_{ij}}{\partial u_{kl}}\, \Delta t \left. \dfrac{\partial I(x,\; j + v_{ij}\Delta t,\; t_2)}{\partial x} \right|_{x = i + u_{ij}\Delta t}, \\[2ex] \dfrac{\partial\, \mathrm{DFD}_{ij}}{\partial v_{kl}} = \dfrac{\partial v_{ij}}{\partial v_{kl}}\, \Delta t \left. \dfrac{\partial I(i + u_{ij}\Delta t,\; y,\; t_2)}{\partial y} \right|_{y = j + v_{ij}\Delta t}, \end{cases} \qquad \frac{\partial u_{ij}}{\partial u_{kl}} = \frac{\partial v_{ij}}{\partial v_{kl}} = \sum_{\mu=0}^{1} \sum_{\nu=0}^{1} H_{kl}(i, j)\, \delta_{k,\, p+\mu n_x}\, \delta_{l,\, q+\nu n_y}, \qquad (9) \]

where the symbol δ_ij is the Kronecker delta, and the domain of summation is reduced from the whole image plane to a local region W_kl around the node with indices k and l, where the summation denoted in the above equations is given by

\[ \sum_{i,j \in W_{kl}} = \sum_{i = k - n_x + 1}^{k + n_x - 1}\; \sum_{j = l - n_y + 1}^{l + n_y - 1}. \]
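To make the bilinear motion field model and the global cost function concrete, the sketch below expands on‐node velocities to a dense per‐pixel field with the bilinear weights of equations (4)–(6) and then accumulates the squared DFD cost of equation (7). This is a minimal sketch, not the paper's code; it assumes unit pixel spacing and Δt = 1, uses SciPy's map_coordinates for the interpolation (an implementation choice, not prescribed by the paper), and all names are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def dense_from_nodes(node_field, n, shape):
    """Bilinearly interpolate on-node values (eq. (6)) to every pixel of an image.

    node_field : 2-D array of node values with node spacing n pixels (block size n = nx = ny).
    shape      : (Ny, Nx) of the target image.
    """
    jj, ii = np.meshgrid(np.arange(shape[1]), np.arange(shape[0]))
    # fractional node coordinates of every pixel; clip so border pixels reuse the last cell
    return map_coordinates(node_field,
                           [np.clip(ii / n, 0, node_field.shape[0] - 1),
                            np.clip(jj / n, 0, node_field.shape[1] - 1)],
                           order=1, mode='nearest')

def dfd_cost(I1, I2, u_nodes, v_nodes, n, dt=1.0):
    """Global cost epsilon^2 = sum_ij DFD_ij^2 for given on-node velocities (eqs. (7)-(8))."""
    u = dense_from_nodes(u_nodes, n, I1.shape)   # horizontal (column) component per pixel
    v = dense_from_nodes(v_nodes, n, I1.shape)   # vertical (row) component per pixel
    jj, ii = np.meshgrid(np.arange(I1.shape[1]), np.arange(I1.shape[0]))
    # displaced frame difference: I(i + u*dt, j + v*dt, t2) - I(i, j, t1)
    warped = map_coordinates(I2, [ii + v * dt, jj + u * dt], order=1, mode='nearest')
    return float(np.sum((warped - I1) ** 2))
```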

The two independent equations in (8) on a node point degrade back to a single DFD equation (7) when the block sizes n_x and n_y are equal to unity.

[23] The equations (8) obtained by the nonlinear least squares model are a set of nonlinear system equations with all on‐node velocities u_pq and v_pq as independent variables. The velocities u_kl and v_kl and their surrounding nearest neighbor velocities (off‐node, nonindependent u_ij and v_ij) are implicitly contained in the summations in equation (8). To solve the nonlinear system equations (8), an iterative equation (A1) in Appendix A is formulated based on the Gauss‐Newton and Levenberg‐Marquardt methods.

[24] The new approach is named the Nonlinear Global Optimal Solution (NGOS) to differentiate it from the traditional Linear GOS (LGOS) method [Chen et al., 2008]. The local DFD equations (7) of an underconstrained system are converted into a set of nonlinear simultaneous system equations in (8) by the bilinear modeling of the velocity field and the nonlinear least squares principle. The system becomes a set of local iterative equations of an overconstrained system with the Gauss‐Newton and Levenberg‐Marquardt methods. All on‐node velocities can be solved by the iteration procedures based on equations (A1) and their previous iterative values. For the NGOS solution, the solved velocities on nodes in equation (A1) define a continuous vector field, so that the estimated velocity field at any position in the image scene can be computed (interpolated) by equation (6).

[25] The constraint or regularization of the bilinearly modeled velocity field can handle both continuities and sudden discontinuities in the motion field well, because of the bilinear polynomial variation with C2 continuity within the n_x × n_y region and C1 continuity around the block boundary. The expressions of the velocity field in equation (6) can approach or approximate real physical variations appropriately in both computer vision and remote sensing fields, particularly for a smaller block size. We can adjust the block size parameters n = n_x = n_y to control the degree of the overconstrained system and the robustness to noise, and to obtain the velocity field from high to low resolutions of the field structures for different applications.

Figure 3. Algorithm of progressive relaxation of the overconstraint. The block size parameter n, which controls the degree of overconstraint, is adjusted from a larger value down to a preset value.

3. Algorithms

[26] The procedures start from a set of preset initial values of the velocity field; then an iteration step based on the previous iterative velocity field and the iterative equations in (A1) can be performed. Furthermore, employing a principle similar to that of the Gauss‐Seidel iteration algorithm, we make use of updated values of u_pq^(m) and v_pq^(m) on the right‐hand side of (6) as soon as they become available during the iteration processing.

[27] All initial displacement field vectors are preset to be equal to 0.01. The initial Levenberg‐Marquardt factor λ is set to 0.001 and is multiplied by 10 at each step if the iteration is not converging. Computation of the temporal integral equation (3) and of the partial derivatives in (A1) is shown in Appendix B.

3.1. Progressive Relaxation of the Overconstraint

[28] The nonlinear inverse problem of velocity estimation from an image sequence with the proposed approaches leads to solving nonlinear equations of a system that may have multiple solutions. For example, when estimating a velocity field within a featureless region, the velocity field may not be unique, because neither motion nor stasis can be detected or physically observed in such a region, as indicated by the MDC theorem (Appendix C). The physical properties of the unobserved motion (or stasis) in a featureless region are determined by the real world data. The realistic motion fields may have multiple possibilities that satisfy the same equations, and the inverse problem then corresponds to solving a nonlinear system (just as an equation of higher degree has multiple roots).

[29] In order to approach a globally minimized solution using the iteration equations (A1), an algorithm of progressive relaxation of the overconstraint (PROC), which adapts a variable resolution of the velocity structure during the iterations, is employed. An initial block size parameter n0, always selected to be greater than the preset value of the block size n, is used at the initial iteration, and the preset value of n is reached at the final iteration. We regularize the velocity field by changing the block size parameter n from a larger value (higher degree of overconstraint) to a smaller one (lower degree of overconstraint) by one every N iterations until it approaches the preset value of n. This is called the algorithm of progressive relaxation of the overconstraint and is depicted in Figure 3.
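The sketch below illustrates one way the PROC schedule could be organized: the block size starts at a larger value n0 and is reduced by one every N iterations until it reaches the preset value n. It is a schematic outline only, not the paper's implementation; `ngos_iteration` stands for a hypothetical per-iteration node update (cf. Appendix A), and all names are illustrative.

```python
import numpy as np

def proc_schedule(n0, n_final, iters_per_level):
    """Yield the block size to use at each iteration (progressive relaxation)."""
    for n in range(n0, n_final - 1, -1):   # n0, n0-1, ..., n_final
        for _ in range(iters_per_level):
            yield n

def estimate_with_proc(I1, I2, n0, n_final, iters_per_level, ngos_iteration):
    """Run an iterative solver while the block size follows the PROC schedule.

    ngos_iteration(I1, I2, u_nodes, v_nodes, n) -> (u_nodes, v_nodes) is assumed to
    perform one Gauss-Newton/Levenberg-Marquardt sweep over the nodes; when n changes,
    it is assumed to resample (or reinitialize) the node grid for the new block size.
    """
    u_nodes = v_nodes = None   # the paper presets all initial displacement vectors to 0.01
    for n in proc_schedule(n0, n_final, iters_per_level):
        u_nodes, v_nodes = ngos_iteration(I1, I2, u_nodes, v_nodes, n)
    return u_nodes, v_nodes
```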



Figure 4. Velocity fields. (a) Average velocity field generated by the numerical simulation model at time t1 = 18 h and t2 = 20 h. (b) Estimated velocity field by equations (A1). Background images (96 × 96) represented in false color at time t1 = 18 h and t2 = 20 h are shown in Figures 4a and 4b, respectively.

3.2. Normalization Between Two Images

[30] The methods discussed so far are based on the assumption that the surface temperature or ocean color is conservative. Most successive satellite‐borne images are recorded over a long temporal range or from different sensors in remote sensing applications. The conservation constraint of the intensity is not always satisfied for such applications. For example, there are calibration differences and diurnal warming of the surface layer for the AVHRR images and sun angle differences for ocean color reflectance data.

[31] In the case of a global gray level change in an image scene, there exist gray‐scale normalization techniques addressing the problem, originally developed for side‐looking radar image analysis [Lillestrand, 1972]. Applying this technique, we normalize one image with respect to the other by a linear transformation, in such a manner that the gray level distributions in both images have the same mean and variance. Assuming the first image, at time t = t1, is the reference image, the normalized second image at time t = t2 is given by

\[ \hat{I}_{ij}(t_2) = \frac{\sigma_1}{\sigma_2}\left( I_{ij}(t_2) - \mu_2 \right) + \mu_1, \]

where I and Î are the intensity recorded at time t = t2 and the normalized intensity, and μi and σi are the mean and standard deviation of the intensity values recorded within the unmasked regions of the image at time t = ti.
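A minimal sketch of this normalization is shown below; it assumes the masks are boolean arrays marking valid (cloud- and land-free) pixels and that the statistics are taken only over those pixels. The names are illustrative, not from the paper.

```python
import numpy as np

def normalize_to_reference(I1, I2, mask1, mask2):
    """Linearly rescale frame I2 so its unmasked gray levels match the mean and
    standard deviation of reference frame I1 (section 3.2)."""
    mu1, sigma1 = I1[mask1].mean(), I1[mask1].std()
    mu2, sigma2 = I2[mask2].mean(), I2[mask2].std()
    return sigma1 / sigma2 * (I2 - mu2) + mu1
```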

4. Experiments

[32] In this section we test the performance of the proposed nonlinear inverse model and algorithms on experimental results obtained for both computer synthetic and real world thermal images. First, we use the solution of a numerical simulation model as a benchmark and introduce a surface tracer field as an initial condition [see Chen et al., 2008]. Angular and magnitude measures of error are introduced in this section, and the mean values of these errors are used to evaluate the performance of the velocity estimations. Finally, we turn to qualitative evaluations on real satellite‐borne AVHRR images taken in the New York Bight, east of the New Jersey coast and south of Long Island, New York.

4.1. Error Measurement

[33] In order to evaluate both angular and magnitude errors quantitatively for the motion estimations, the angular and magnitude measures of error [Chen, 2010] are used in this paper. Velocity may be written as v = (u, v, w) (assuming the component of velocity in z is w = 0 in this paper); then the angular and magnitude errors between the correct velocity v̂ and an estimate v are

\[ \Delta\theta = \frac{1}{N} \sum_{i,j} \arccos\!\left( \frac{\mathbf{v}_{ij} \cdot \hat{\mathbf{v}}_{ij}}{\lVert \mathbf{v}_{ij} \rVert\, \lVert \hat{\mathbf{v}}_{ij} \rVert} \right) \]

and

\[ \Delta V = \frac{1}{N} \sum_{i,j} \frac{\lVert \mathbf{v}_{ij} - \hat{\mathbf{v}}_{ij} \rVert^2}{\lVert \mathbf{v}_{ij} \rVert\, \lVert \hat{\mathbf{v}}_{ij} \rVert}, \]

where the magnitude errors are dimensionless quantities. If either v = 0 or v̂ = 0, we define ΔV = 1. If v = v̂ = 0, we put ΔV = 0. The average angular and magnitude errors between the correct velocity v̂ and an estimate v are used to evaluate the performance of the velocity estimations.
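A direct sketch of these two error measures is given below, assuming the estimated and reference fields are co-registered 2-D arrays of the u and v components; the treatment of zero-velocity pixels follows the conventions stated above, and averaging the angle only over pixels where both vectors are nonzero is an assumption. Function names are illustrative.

```python
import numpy as np

def angular_and_magnitude_errors(u, v, u_ref, v_ref, eps=1e-12):
    """Average angular error (degrees) and average magnitude error (dimensionless)
    between an estimated field (u, v) and a reference field (u_ref, v_ref)."""
    dot = u * u_ref + v * v_ref
    mag = np.hypot(u, v)
    mag_ref = np.hypot(u_ref, v_ref)
    both_nonzero = (mag > eps) & (mag_ref > eps)

    # angular error, averaged over pixels where both vectors are nonzero (assumption)
    cosang = np.clip(dot[both_nonzero] / (mag[both_nonzero] * mag_ref[both_nonzero]), -1.0, 1.0)
    aae = np.degrees(np.arccos(cosang).mean())

    dv = np.ones_like(mag)                         # DV = 1 where exactly one field vanishes
    dv[both_nonzero] = ((u - u_ref) ** 2 + (v - v_ref) ** 2)[both_nonzero] / (
        mag[both_nonzero] * mag_ref[both_nonzero])
    dv[(mag <= eps) & (mag_ref <= eps)] = 0.0      # DV = 0 where both vanish
    ame = float(dv.mean())
    return aae, ame
```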


Figure 5. Plots of error measurement generated by the estimator with numerical model data at time t1 = 18 h and t2 = 20 h. Average (a) angular and (b) magnitude errors versus block size for estimators with the heat or optical flow equation (linear model) and the DFD equation (nonlinear model).

4.2. Benchmark Evaluation

[34] In order to validate the proposed nonlinear inverse model and algorithms, a simulated flow field and its advection of sea surface temperature are obtained by solving the 3‐D nonlinear fluid dynamical equations and the equation for the tracer (or temperature). For the test, the temperature T is treated simply as a passive tracer with a weak diffusivity added for numerical stability (see Chen et al. [2008] for detailed descriptions).

[35] The inversion of the simulated SST for surface flow is performed using the nonlinear inverse model for a range of block sizes. The smallest block size tested has a dimension of n = n_x = n_y = 2 on each side, and the largest has 20. A six‐point differentiation method without smoothing of the gradient fields is used for this estimation.

[36] The benchmark velocity vectors given by the numerical simulation model are shown in Figure 4a. For comparison, the velocity field estimated by the NGOS is shown in Figure 4b. The false color presentation of the images (96 × 96) in the background of Figures 4a and 4b is the tracer field (or simulated SST), with scales ranging from 0 to 50 km in the horizontal and vertical, which, by the time from 18 to 20 h, has been deformed by the currents and is significantly different from its original sin(2πx/L) × sin(2πy/L) square cell shape.

[37] The two velocity vector fields in Figure 4 are mutually similar, with each showing a distribution of eddies having diameters of order 15 km and more gently curved tendril structures with larger radii of curvature. There are three prominent eddies in the modeled tracer field (Figure 4), with centers located at ∼(5 km, 5 km), (36 km, 5 km), and (21 km, 40 km). The estimated results (Figure 4b) capture these features qualitatively well and represent them as well‐defined vortices. Two examples of this are the jets of fluid exiting the left edge of the box at y ≈ 25 km and the one at the upper right‐hand corner.

[38] The above comparisons between Figures 4a and 4b have been qualitative. Quantitative measures of how well the proposed estimator can reproduce the model flow are seen in Figure 5, in which the average angular and magnitude errors versus block size (n = n_x = n_y) are shown. Comparisons with the LGOS and MCC techniques, which yield fully dense flow fields, are shown in Table 1.

[39] Both curves of angular and magnitude errors generated by the LGOS (Chen et al. [2008], using modified algorithms), shown in red in Figures 5a and 5b, exhibit a local minimum in the vicinity of block size parameter n ≈ 7; two competing phenomena, one at small n and another at large n, are responsible for this. However, both curves of angular and magnitude errors generated by the NGOS, as shown in Figures 5a and 5b, decrease with decreasing block size down to the smallest tested value, n = 2. Both sets of curves indicate that the nonlinear inverse model has much better performance than the linear inverse model over the full range of block size variation, and particularly for the higher resolution of the velocity structure (small values of block size).

Table 1. Values of AAE and AME Between Velocities Inferred by the MCC, LGOS, and NGOS Methods and Numerical Model Data

Method   AAE      AME
MCC      37.90°   1.578
LGOS     15.09°   0.4513
NGOS     9.035°   0.1497


Figure 6. The velocity vectors are superimposed on the AVHRR image pair at time 1113 and 1436 UT on 28 February 2004 (false color representation). (a) Average of the CODAR velocity field. (b) Estimated velocity field obtained from the NGOS.

4.3. Application to AVHRR Images

[40] The sea surface temperature (SST) fields are collected by the satellite‐borne Advanced Very High Resolution Radiometer (AVHRR). The intended application of the new estimator with the nonlinear inverse model is to obtain accurate estimates of the ocean surface velocity from AVHRR image sequences. We estimated velocity fields from four pairs of NOAA satellite images taken in the New York Bight, east of the New Jersey coast and south of Long Island, New York. These data were taken on 28 February 2004 at times ti = {1113, 1436, 1902, 2232} UT and on 25 May 2007 at times ti = {0721, 1046, 1518} UT, respectively. We calculated velocities from each pair of images at contiguous times. The pixel resolutions are 1.15 km in the north‐south and east‐west directions for the first two image pairs and 1.008 km in the same directions for the second two image pairs. The temporal separations between images are thus Δti ≡ ti+1 − ti = {3.38, 3.5, 3.42, 4.53} h, respectively.

[41] Level 2 sea surface temperature data, which have been processed to remove sun glint, atmospheric aberrations, and geometric anomalies, are used for these experiments. An additional important improvement is our use of the dense, high‐resolution surface current maps available from the Rutgers University Coastal Ocean Observing Laboratory Coastal Ocean Radar (CODAR) network. This provides ocean surface current velocity fields with fine spatial and temporal resolution over an ample area of several hundred square kilometers with which to compare the velocities from the proposed method qualitatively. We compare the velocity fields measured by CODAR and estimated by the nonlinear inverse model over the two New York Bight image sequences in the small regions where corroborating shore‐based Doppler radar coverage exists.

[42] The continental shelf and contamination regions in the AVHRR images have been masked, and the second image in each pair was normalized by the method proposed in section 3.2. The northern portion of the image (Figures 7a and 7b) shows some atmospheric water haze contamination in the dark streaks. In order to apply the nonlinear inverse model, we first mask the clouds and land, consistent with the strategy discussed in Appendix B2, to detect and segment the masked regions. The resulting masked images have the form shown in Figures 6–9.

[43] We implemented the method with central differences for differentiation. Gradient fields smoothed by a Gaussian low‐pass filter with standard deviations from 0.25 to 1.125 pixels have been used for the AVHRR velocity estimates. Choosing a larger block size for robustness to noise helps improve the performance of the AVHRR estimates. The range of block sizes tested in the AVHRR applications is from 15 to 40.

[44] Since the direct velocity measurements contain their own sources of error, none of them can be considered a realistic ground truth velocity.


Figure 7. The velocity vectors are superimposed on the AVHRR image pair at time 1902 and 2232 UT on 28 February 2004 (false color representation). (a) Average of the CODAR velocity field from 1900 to 1600. (b) Estimated velocity field obtained from the NGOS.

Figure 8. The velocity vectors are superimposed on the AVHRR image pair at time 0721 and 1046 UT on 25 May 2007 (false color representation). (a) Average of the CODAR velocity field from 0700 to 1100. (b) Estimated velocity field obtained from the NGOS.


Figure 9. The velocity vectors are superimposed on the AVHRR image pair at time 1046 and 1518 UT on 25 May 2007 (false color representation). (a) Average of the CODAR velocity field from 1000 to 1600. (b) Estimated velocity field obtained from the NGOS.

As a reference, an excellent opportunity exists for a detailed comparison between the LGOS and NGOS velocity fields and a “ground truth” realization of the same velocity field obtained from the Rutgers University Coastal Ocean Dynamics Radar (CODAR). This CODAR array has a resolution of 6 km, and velocity magnitude and direction accuracies of ±0.04 cm/s and ±1°, respectively [Kohut et al., 2004]. We plot the CODAR and corresponding local NGOS velocity fields for all four time intervals in Figures 6–9.

[45] In Figures 6 and 7, we show two vector plots of the velocity fields estimated from the AVHRR image sequences from 1113 to 1436 UT and from 1902 to 2232 UT on 28 February 2004. As observed in Figures 6 and 7, there is a flow from northeast toward southwest in this time interval. The other two AVHRR estimated vector fields, from 0721 to 1046 UT and from 1046 to 1518 UT on 25 May 2007, have been superimposed on the second AVHRR image (340 × 306) of each pair, as shown in Figures 8a, 8b, 9a, and 9b.

[46] The CODAR fields in Figures 6–9 and the flows estimated by the nonlinear inverse model agree qualitatively, although the details of each may differ in specific locations. In general, there is a flow from the north toward the south for all time intervals.

[47] We first compare the velocity fields by CODAR and NGOS visually, and then a quantitative comparison is made between the nonlinear model predictions and the CODAR velocities by comparing their magnitudes and directions. Linear model and MCC technique comparisons are also provided for reference. To do this, we again use the angular and magnitude errors defined in section 4.1 to evaluate the errors of these estimations. The results of calculating the angular and magnitude errors are shown in Tables 2 and 3 for velocity fields estimated by the MCC, LGOS, and NGOS for the time intervals from 1113 to 1436 UT on 28 February 2004 and from 0721 to 1046 UT on 25 May 2007. The CODAR velocity fields are average values of the fields over the time intervals. Comparing the angular and magnitude errors of the MCC, LGOS, and NGOS, the results indicate that the nonlinear inverse model provides significant improvement over the linear inverse model for real AVHRR data sets.

5. Conclusion

[48] Velocity estimation from an image sequence observed or recorded by different physical sensors is a well‐known general inverse problem in the fields of computer vision, geosciences, and remote sensing. In this paper a nonlinear inverse model has been created for estimating the velocity field under the conservation constraint of the heat or optical flow. The linear differential form of the heat or optical flow equation is replaced by a nonlinear temporal integral form of the conservation constraint equation. Unlike the heat or optical flow equation, which holds for small displacement motion only, the initial and final states of motion in the nonlinear equation are associated with the given configurations in the two successive frames at times t1 and t2.


Table 2. Values of AAE and AME Between Velocities Inferred by the MCC, LGOS, and NGOS Methods With Data at Time 1113–1436 UT on 28 February 2004 and CODAR

Method   AAE      AME
MCC      31.59°   0.9112
LGOS     33.19°   1.063
NGOS     29.93°   0.9362

The iterative equations with the Gauss‐Newton and Levenberg‐Marquardt algorithms have been formulated based on the nonlinear equations, the velocity field modeling, and a nonlinear least squares model. Several efficient and refined algorithms for the new velocity estimator have been developed.

[49] The solution of a numerical simulation model is used as a benchmark to examine the new estimator. Both angular and magnitude error measurements based on the synthetic surface heat flow from the numerical simulation model demonstrate that the performance of the new approach with the nonlinear model is much better than the results obtained using a linear model of the heat or optical flow equation. Four sequences of NOAA AVHRR images taken in the New York Bight are also used to demonstrate the performance of the nonlinear inverse model, and the estimated velocity fields are compared with those measured with the Coastal Ocean Dynamics Radar (CODAR) array. The experimental results indicate that the nonlinear inverse model provides significant improvement over the linear inverse model for real AVHRR data sets.

[50] The computational complexities of the inverse problem are greatly simplified by the new iteration approach with equations (A1) for full dense velocity estimation. The solution with the iteration equations (A1), based on the Gauss‐Newton, Levenberg‐Marquardt, and PROC algorithms, converges rapidly to a global minimum solution. Since both the linear and nonlinear models use iteration approaches to solve these linear or nonlinear systems of equations, and the iteration equations have been formulated in the paper for easy implementation, there is no significant difference in computation cost between the linear and nonlinear models within a single iteration. To compute the full dense velocity field between two images of size 340 × 305 on a 3.0 GHz Intel single CPU PC, the LGOS and NGOS with the iteration approaches take about 8 and 19 s, respectively.

[51] The proposed approach, which uses a smaller number of on‐node velocity components to interpolate a fully dense velocity field, can have potential applications in both computer vision and remote sensing fields. The powerful feature of robustness to noise obtained by adjusting the block size has been demonstrated for estimating velocity fields using AVHRR image sequences in remote sensing applications. A large compression ratio with high fidelity quality of motion pictures requires large‐scale displacement and long temporal range motion estimation for compression of image sequences for efficient transmission. Using almost the same number of velocity vectors at a fixed block size as the block‐based matching algorithms [Glazer et al., 1983; Gharavi and Mills, 1990], which are the currently adopted standard for video coding, the NGOS can provide much higher accuracy than the block‐based model (a velocity field with only C0 continuity obtained by local searching strategies).
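To show how the computational pieces discussed above could fit together, the outline below strings the earlier sketches into a single driver: normalize the second frame (section 3.2), iterate under the PROC schedule (section 3.1), and interpolate the dense field (section 2.3). It is purely an illustrative skeleton under the assumptions stated with each sketch; `node_sweep` stands for one assumed pass of the node updates of equation (A1) and, like all other names here, does not come from the paper.

```python
import numpy as np

def ngos_estimate(I1, I2, mask1, mask2, node_sweep,
                  n0=30, n_final=20, iters_per_level=10, dt=1.0):
    """Illustrative end-to-end driver for the nonlinear (NGOS) estimator.

    node_sweep(I1, I2, u_nodes, v_nodes, n, dt) is assumed to apply one
    Gauss-Newton/Levenberg-Marquardt pass over all nodes (cf. equation (A1)).
    """
    I2n = normalize_to_reference(I1, I2, mask1, mask2)      # section 3.2 sketch
    u_nodes = v_nodes = None
    for n in proc_schedule(n0, n_final, iters_per_level):   # section 3.1 sketch
        ny = (I1.shape[0] - 1) // n + 1                     # node counts, cf. N_node
        nx = (I1.shape[1] - 1) // n + 1
        if u_nodes is None or u_nodes.shape != (ny, nx):
            # crude restart when the block size changes; a real implementation would
            # resample the previous node field (the paper presets initial values to 0.01)
            u_nodes = np.full((ny, nx), 0.01)
            v_nodes = np.full((ny, nx), 0.01)
        u_nodes, v_nodes = node_sweep(I1, I2n, u_nodes, v_nodes, n, dt)
    u = dense_from_nodes(u_nodes, n_final, I1.shape)        # section 2.3 sketch
    v = dense_from_nodes(v_nodes, n_final, I1.shape)
    return u, v
```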

Appendix A: Iteration Equations

[52] To solve the nonlinear system equations (8), we expand the DFD in a Taylor series using the Gauss‐Newton method:

\[ \mathrm{DFD}_{ij}^{(m+1)} \approx \mathrm{DFD}_{ij}^{(m)} + \frac{\partial\, \mathrm{DFD}_{ij}^{(m)}}{\partial u_{kl}^{(m)}}\left( u_{kl}^{(m+1)} - u_{kl}^{(m)} \right) + \frac{\partial\, \mathrm{DFD}_{ij}^{(m)}}{\partial v_{kl}^{(m)}}\left( v_{kl}^{(m+1)} - v_{kl}^{(m)} \right), \]

where m is an iteration index, and

\[ \mathrm{DFD}_{ij}^{(m)} = \mathrm{DFD}_{ij}\!\left( u_{pq}^{(m)},\; v_{pq}^{(m)} \right). \]

[53] Substituting the DFD expansion into equation (8), we find the following iterative equations for the two component velocities u_kl and v_kl for all indices k and l on node points:

\[ \begin{pmatrix} u_{kl}^{(m+1)} \\ v_{kl}^{(m+1)} \end{pmatrix} = \begin{pmatrix} u_{kl}^{(m)} \\ v_{kl}^{(m)} \end{pmatrix} - \left( \mathbf{A}_{kl}^{(m)} \right)^{-1} \mathbf{B}_{kl}^{(m)}, \qquad (A1) \]

where

\[ \mathbf{A}_{kl}^{(m)} = \begin{pmatrix} (1+\lambda) \sum\limits_{i,j \in W_{kl}} \left( \dfrac{\partial\, \mathrm{DFD}_{ij}^{(m)}}{\partial u_{kl}^{(m)}} \right)^{2} & \sum\limits_{i,j \in W_{kl}} \dfrac{\partial\, \mathrm{DFD}_{ij}^{(m)}}{\partial u_{kl}^{(m)}} \dfrac{\partial\, \mathrm{DFD}_{ij}^{(m)}}{\partial v_{kl}^{(m)}} \\[2ex] \sum\limits_{i,j \in W_{kl}} \dfrac{\partial\, \mathrm{DFD}_{ij}^{(m)}}{\partial u_{kl}^{(m)}} \dfrac{\partial\, \mathrm{DFD}_{ij}^{(m)}}{\partial v_{kl}^{(m)}} & (1+\lambda) \sum\limits_{i,j \in W_{kl}} \left( \dfrac{\partial\, \mathrm{DFD}_{ij}^{(m)}}{\partial v_{kl}^{(m)}} \right)^{2} \end{pmatrix} \]

and

\[ \mathbf{B}_{kl}^{(m)} = \begin{pmatrix} \sum\limits_{i,j \in W_{kl}} \mathrm{DFD}_{ij}^{(m)} \dfrac{\partial\, \mathrm{DFD}_{ij}^{(m)}}{\partial u_{kl}^{(m)}} \\[2ex] \sum\limits_{i,j \in W_{kl}} \mathrm{DFD}_{ij}^{(m)} \dfrac{\partial\, \mathrm{DFD}_{ij}^{(m)}}{\partial v_{kl}^{(m)}} \end{pmatrix}, \]

where λ ≥ 0 is a Levenberg‐Marquardt factor that is adjusted at each iteration to guarantee that the matrix A is positive definite. A smaller value of the factor λ can be used, bringing the algorithm closer to the Gauss‐Newton method with its second‐order convergence. This Levenberg‐Marquardt method can improve the convergence properties greatly in practice and has become the standard of nonlinear least squares routines.

Table 3. Values of AAE and AME Between Velocities Inferred by the MCC, LGOS, and NGOS Methods With Data at Time 0721–1046 UT on 25 May 2007 and CODAR

Method   AAE      AME
MCC      27.21°   0.9881
LGOS     22.84°   0.6278
NGOS     20.69°   0.6185
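To show how the 2 × 2 node update (A1) might look in code, the sketch below assembles A_kl and B_kl for a single node from precomputed DFD values and their derivatives over the local window W_kl, and returns the damped increment. It is a simplified sketch under stated assumptions (the DFD and derivative arrays are already evaluated on the pixel grid, including the bilinear weighting of equation (9); the window is clipped at the image border); variable and function names are illustrative and not from the paper.

```python
import numpy as np

def lm_update_single_node(dfd, d_du, d_dv, k, l, n, lam):
    """One Levenberg-Marquardt update of the velocity at node (k, l), cf. equation (A1).

    dfd   : DFD_ij at the current iterate (2-D array over pixels).
    d_du  : dDFD_ij/du_kl over pixels (already weighted by H_kl, eq. (9)).
    d_dv  : dDFD_ij/dv_kl over pixels.
    k, l  : pixel coordinates of the node (row, column); n is the block size.
    lam   : Levenberg-Marquardt damping factor (lambda >= 0).
    Returns the increments (du, dv) to add to the current (u_kl, v_kl).
    """
    rows = slice(max(k - n + 1, 0), min(k + n, dfd.shape[0]))   # window W_kl, clipped
    cols = slice(max(l - n + 1, 0), min(l + n, dfd.shape[1]))
    fu = d_du[rows, cols].ravel()
    fv = d_dv[rows, cols].ravel()
    r = dfd[rows, cols].ravel()

    A = np.array([[(1.0 + lam) * np.sum(fu * fu), np.sum(fu * fv)],
                  [np.sum(fu * fv), (1.0 + lam) * np.sum(fv * fv)]])
    B = np.array([np.sum(r * fu), np.sum(r * fv)])
    # in featureless regions A can be near-singular; the damping lam is increased there
    step = np.linalg.solve(A, B)
    return -step[0], -step[1]          # (A1): new = old - A^{-1} B
```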

Appendix B

B1. Computation of the Temporal Integral Equation

[54] The DFD equation on each pixel consists of an intensity I_ij(t1) and a motion‐compensated prediction I(i + u_ij Δt, j + v_ij Δt, t2) whose arguments may fall off the pixel positions of the image. In order to calculate the motion‐compensated prediction, the general bilinear interpolation function in (4) is utilized for this computation as follows:

\[ \mathrm{DFD}_{ij} = \sum_{\mu=0}^{1} \sum_{\nu=0}^{1} I_{p+\mu,\, q+\nu}(t_2)\, H_{p+\mu,\, q+\nu}\!\left( i + u_{ij}\Delta t,\; j + v_{ij}\Delta t \right) - I_{ij}(t_1), \]

where the function H in (5) is evaluated with n_x = n_y = 1, and {p, q} = {p(i + u_ij Δt), q(j + v_ij Δt)}.
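A compact sketch of this motion-compensated prediction is shown below: the second frame is sampled at the displaced, generally noninteger positions with explicit bilinear weights, and the first frame is subtracted to give the DFD. It assumes unit pixel spacing and nearest-edge handling at the border; the names are illustrative only.

```python
import numpy as np

def dfd_image(I1, I2, u, v, dt=1.0):
    """Displaced frame difference DFD_ij = I2(i + u*dt, j + v*dt) - I1(i, j),
    with I2 sampled by bilinear interpolation (Appendix B1).
    Arrays are indexed [row, column]; u is the horizontal (column) velocity and
    v the vertical (row) velocity, in pixels per unit time."""
    rows, cols = I1.shape
    jj, ii = np.meshgrid(np.arange(cols), np.arange(rows))
    x = np.clip(jj + u * dt, 0, cols - 1.0)      # displaced column coordinate
    y = np.clip(ii + v * dt, 0, rows - 1.0)      # displaced row coordinate
    x0 = np.floor(x).astype(int); y0 = np.floor(y).astype(int)
    x1 = np.minimum(x0 + 1, cols - 1); y1 = np.minimum(y0 + 1, rows - 1)
    wx = x - x0; wy = y - y0
    warped = ((1 - wy) * (1 - wx) * I2[y0, x0] + (1 - wy) * wx * I2[y0, x1] +
              wy * (1 - wx) * I2[y1, x0] + wy * wx * I2[y1, x1])
    return warped - I1
```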

B2. Computation of the Partial Derivatives

[55] Evaluation of the partial derivatives in (A1) with respect to the velocity on a node requires computation of the spatial gradient, as shown in equations (9). In order to improve the accuracy of numerical differentiation from the discrete image samples, we implemented the partial differentiation method with six‐point central differences (with mask coefficients {1, −8, 0, 8, −1}/12 or {−1, 9, −45, 0, 45, −9, 1}/60). Moreover, a spatial Gaussian low‐pass filter with a standard deviation of about one pixel is used to smooth the gradient fields.

[56] The gradient field smoothing processes can improve the performance of the velocity estimation, especially for real world image data, because a motion can be observed and detected only if there are enough textures in the region of interest. This conclusion, based on physical observation, is confirmed by a new motion detection criterion (MDC). The MDC theorem states that a two‐dimensional motion in an image sequence can be detected if and only if

\[ \mathrm{MDC} = \frac{\partial I(x, y, t_1)}{\partial x} \frac{\partial I(x, y, t_2)}{\partial y} - \frac{\partial I(x, y, t_1)}{\partial y} \frac{\partial I(x, y, t_2)}{\partial x} \neq 0. \qquad (C1) \]
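The sketch below evaluates this criterion on a pair of frames: spatial gradients are taken with the {1, −8, 0, 8, −1}/12 difference mask mentioned above, optionally smoothed with a Gaussian filter, and the MDC map is their cross product. It is a minimal sketch with illustrative names; the smoothing value and border handling are assumptions, not prescriptions from the paper.

```python
import numpy as np
from scipy.ndimage import correlate1d, gaussian_filter

_DERIV = np.array([1.0, -8.0, 0.0, 8.0, -1.0]) / 12.0   # 4th-order central-difference weights

def spatial_gradients(I, sigma=1.0):
    """Smoothed x (column) and y (row) derivatives of an image."""
    Is = gaussian_filter(I, sigma) if sigma > 0 else I
    Ix = correlate1d(Is, _DERIV, axis=1, mode='nearest')
    Iy = correlate1d(Is, _DERIV, axis=0, mode='nearest')
    return Ix, Iy

def mdc_map(I1, I2, sigma=1.0):
    """Motion detection criterion (C1): nonzero where motion is detectable."""
    Ix1, Iy1 = spatial_gradients(I1, sigma)
    Ix2, Iy2 = spatial_gradients(I2, sigma)
    return Ix1 * Iy2 - Iy1 * Ix2
```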

Appendix C

To prove the MDC theorem, write the heat or optical flow equation (2) at the two times t1 and t2 along the displacement (Δx, Δy):

\[ \begin{cases} I_t(x, y, t_1) + \dfrac{\Delta x}{\Delta t} I_x(x, y, t_1) + \dfrac{\Delta y}{\Delta t} I_y(x, y, t_1) = 0 \\[1.5ex] I_t(x + \Delta x, y + \Delta y, t_2) + \dfrac{\Delta x}{\Delta t} I_x(x + \Delta x, y + \Delta y, t_2) + \dfrac{\Delta y}{\Delta t} I_y(x + \Delta x, y + \Delta y, t_2) = 0, \end{cases} \]

where Δt = t2 − t1, Δx = x(t2) − x(t1) ≈ uΔt, and Δy = y(t2) − y(t1) ≈ vΔt. Solving for the ratios of the two components of the displacement vector to the time variation, we have

\[ \begin{pmatrix} \dfrac{\Delta x}{\Delta t} \\[1.5ex] \dfrac{\Delta y}{\Delta t} \end{pmatrix} = \frac{ \begin{pmatrix} I_y(x, y, t_1)\, I_t(x + \Delta x, y + \Delta y, t_2) - I_y(x + \Delta x, y + \Delta y, t_2)\, I_t(x, y, t_1) \\ I_x(x + \Delta x, y + \Delta y, t_2)\, I_t(x, y, t_1) - I_x(x, y, t_1)\, I_t(x + \Delta x, y + \Delta y, t_2) \end{pmatrix} }{ I_x(x, y, t_1)\, I_y(x + \Delta x, y + \Delta y, t_2) - I_y(x, y, t_1)\, I_x(x + \Delta x, y + \Delta y, t_2) }. \qquad (C2) \]
C06015

CHEN: NONLINEAR MODEL FOR VELOCITY ESTIMATION

[61] If time t2 is close enough to t1 (Dt → 0), the two‐ component velocity v is given by 0

1 Dx B Dt C C v ¼ lim B Dt!0@ Dy A Dt 1 0 dIy ð x; y; t1 Þ dIt ð x; y; t1 Þ  I I ð x; y; t Þ ð x; y; t Þ y 1 t 1 C B dt dt C B @ dIx ð x; y; t1 Þ dIt ð x; y; t1 Þ A It ð x; y; t1 Þ  Ix ð x; y; t1 Þ dt dt ; ¼ dIy ð x; y; t1 Þ dIx ð x; y; t1 Þ Ix ð x; y; t1 Þ  Iy ð x; y; t1 Þ dt dt

ðC3Þ

1Þ where df ðx;y;t is a total derivative function. Both numerator dt and denominator in equation (C3) are denoted by differential forms. Obviously, equation (C3) has a solution for the velocity v if and only if the denominator in the equation (C2) is not equal to zero. Then, we have proved

dIy ð x; y; t1 Þ dIx ð x; y; t1 Þ ¼  Iy ð x; y; t1 Þ dt dt ; Ix ð x; y; t1 ÞIy ð x þ dx; y þ dy; t2 Þ  Iy ð x; y; t1 ÞIx ð x þ dx; y þ dy; t2 Þ 6¼ 0 dt

Ix ð x; y; t1 Þ

or MDC ¼ Ix ð x; y; t1 ÞIy ð x; y; t2 Þ  Iy ð x; y; t1 ÞIx ð x; y; t2 Þ 6¼ 0:

In last step we have employed the bistates of the transition conditions between (observed) static (dx ≡ 0 \ dy ≡ 0 8 dt ≠ 0) and mobile (dx ≠ 0 [ dy ≠ 0 8 dt ≠ 0) flow. [62] The two gradient vectors in equation (C1) must be evaluated at the same fixed position x and y, and two different times t1 and t2 when t2 is close enough to t1 but t2 ≠ t1. Understanding the processes of limits when the time increment approaches zero is very important for the analysis of the motion. We have to emphasize that two optical or heat flow equations at different times t1 and t2, not one, are used to solve the problem, even if Dt approaches zero. [63] Equation (C3) holds for infinitesimal displacement motion and it is exactly true in mathematics. Our goal is not applying equation (C3) to estimate velocity because it is impossible to calculate all total differential functions. However, the MDC as deduction of the equation (C3) provides a powerful criterion for motion detection. Actually, even though the Dt in equation (C2) is not small, the MDC still holds because the spatial variations must be equal to zero (Dx ≡ 0 \ Dy ≡ 0 8 Dt ≠ 0) for static optical flow detection. The theorem (C1) provides a novel criterion for motion detection and segmentation based on two time varying frames but not for motion estimation. [64] Acknowledgments. This research work was supported by the Office of Naval Research through the project WU‐4279‐01 at the Naval Research Laboratory. I am grateful to J. Kohut (Rutgers University) for supplying the CODAR velocity field used in this work.

References Alberotanza, L., and A. Zandonella (2004), Surface current circulation estimation using NOAA/AVHRR images and comparison with HF radar current measurements, Int. J. Remote Sens., 25(7), 1357–1362, doi:10.1080/ 01431160310001592292.

C06015

Barron, J. L., D. J. Fleet, and S. S. Beauchemin (1994), Performance of optical flow techniques, Int. J. Comput. Vis., 12(1), 43–77, doi:10.1007/ BF01420984. Black, M. J., and P. Anandan (1996), The robust estimation of multiple motions: Parametric and piecewise smooth flow fields, Comput. Vis. Image Underst., 63(1), 75–104, doi:10.1006/cviu.1996.0006. Borzelli, G., G. Manzella, S. Marullo, and R. Santoleri (1999), Observations of coastal filaments in the Adriatic Sea, J. Mar. Syst., 20(1–4), 187–203, doi:10.1016/S0924-7963(98)00082-7. Bowen, M. M., W. J. Emery, J. L. Wilkin, P. C. Tildesley, I. J. Barton, and R. Knewtson (2002), Extracting multiyear surface currents from sequential thermal imagery using the maximum cross‐correlation technique, J. Atmos. Oceanic Technol., 19(10), 1665–1676, doi:10.1175/15200426(2002)0192.0.CO;2. Bruhn, A., J. Weickert, and C. Schnorr (2005), Lucas/Kanade Meets Horn/ Schunck: Combining local and global optic flow methods, Int. J. Comput. Vis., 61(3), 211–231, doi:10.1023/B:VISI.0000045324.43199.43. Chen, W. (2010), A global optimal solution with higher order continuity for the estimation of surface velocity from infrared images, IEEE Trans. Geosci. Remote Sens., 48(4), 1931–1939, doi:10.1109/TGRS. 2009.2037316. Chen, W., R. P. Mied, and C. Y. Shen (2008), Near‐surface ocean velocity from infrared images: Global optimal solution to an inverse model, J. Geophys. Res., 113, C10003, doi:10.1029/2008JC004747. Crocker, R. I., D. K. Matthews, W. J. Emery, and D. G. Baldwin (2007), Computing coastal ocean surface currents from infrared and ocean color satellite imagery, IEEE Trans. Geosci. Remote Sens., 45(2), 435–447, doi:10.1109/TGRS.2006.883461. Domingues, C. M., G. A. Gonzalves, R. D. Ghisolfi, and C. A. E. García (2000), Advective surface velocities derived from sequential infrared images in the southwestern Atlantic Ocean, Remote Sens. Environ., 73(2), 218–226, doi:10.1016/S0034-4257(00)00096-1. Dransfeld, S., G. Larnicol, and P. Y. Le Traon (2006), The potential of the maximum cross‐correlation technique to estimate surface currents from thermal AVHRR global area coverage data, IEEE Geosci. Remote Sens. Lett., 3(4), 508–511, doi:10.1109/LGRS.2006.878439. Emery, W. J., A. C. Thomas, M. J. Collins, W. R. Crawford, and D. L. Mackas (1986), An Objective method for computing advective surface velocities from sequential infrared satellite images, J. Geophys. Res., 91(C11), 12,865–12,878, doi:10.1029/JC091iC11p12865. Emery, W. J., C. Fowler, and A. Clayson (1992), Satellite‐image‐derived Gulf Stream currents compared with numerical model results, J. Atmos. Oceanic Technol., 9(3), 286–304, doi:10.1175/1520-0426(1992) 0092.0.CO;2. Flores, M. M., H. Maitre, and F. Parmiggiani (1995), Sea‐ice velocity fields estimation on Ross Sea with NOAA–AVHRR, IEEE Trans. Geosci. Remote Sens., 33(5), 1286–1289, doi:10.1109/36.469494. Garcia, C. A. E., and I. S. Robinson (1989), Sea surface velocities in shallow seas extracted from sequential coastal zone color scanner satellite data, J. Geophys. Res., 94(C9), 12,681–12,691, doi:10.1029/ JC094iC09p12681. Gharavi, H., and M. Mills (1990), Block matching motion estimations: New results, IEEE Trans. Circuits Syst., 37(5), 649–651, doi:10.1109/ 31.55010. Glazer, F., et al. (1983), Scene matching by hierarchical correlation, paper presented at Computer Vision and Pattern Recognition Conference, Inst. of Electron. and Electr. Eng., Washington, D. C. Holland, J. A., and X. H. 
Yan (1992), Ocean thermal feature recognition, discrimination, and tracking using infrared satellite imagery, IEEE Trans. Geosci. Remote Sens., 30(5), 1046–1053, doi:10.1109/36.175339. Horn, B., and B. Shunck (1981), Determining optical flow, Artif. Intell., 17(1–3), 185–203, doi:10.1016/0004-3702(81)90024-2. Kohut, J. T., S. M. Glenn, and R. J. Chant (2004), Seasonal current variability on the New Jersey inner shelf, J. Geophys. Res., 109, C07S07, doi:10.1029/2003JC001963. Kamachi, M. (1989), Advective surface velocities derived from sequential images for rotational flow field: Limitations and applications of maximum cross‐correlation method with rotational registration, J. Geophys. Res., 94(C12), 18,227–18,233, doi:10.1029/JC094iC12p18227. Kelly, K. A. (1989), An inverse model for near‐surface velocity from infrared images, J. Phys. Oceanogr., 19(12), 1845–1864, doi:10.1175/15200485(1989)0192.0.CO;2. Leese, J. A., and C. S. Novak (1971), An automated technique for obtaining cloud motion from geosynchronous satellite data using cross correlation, J. Appl. Meteorol., 10(1), 118–132, doi:10.1175/1520-0450(1971) 0102.0.CO;2. Lillestrand, R. (1972), Techniques for change detection, IEEE Trans. Comput., 21(7), 654–659.


Lucas, B. D. (1984), Generalized image matching by the method of differences, Ph.D. thesis, Carnegie Mellon Univ., Pittsburg, Pa. Lucas, B. D., and T. Kanade (1981), An iterative image registration technique with an application to stereo vision, paper presented at Image Understanding Workshop, Def. Adv. Res. Proj. Agency, Arlington, Va. Marcello, J., F. Eugenio, F. Marqués, A. Hernández‐Guerra, and A. Gasull (2008), Motion estimation techniques to automatically track oceanographic thermal structures in multisensor image sequences, IEEE Trans. Geosci. Remote Sens., 46(9), 2743–2762, doi:10.1109/TGRS.2008.919274. Matthews, D. K., and W. J. Emery (2009), Velocity observations of the California Current derived from satellite imagery, J. Geophys. Res., 114, C08001, doi:10.1029/2008JC005029. Mercatini, A., A. Griffa, L. Piterbarg, Z. Zambianchi, and M. G. Magaldi (2010), Estimating surface velocities from satellite data and numerical models: Implementation and testing of a new simple method, Ocean Modell., 33, 190–203, doi:10.1016/j.ocemod.2010.01.003. Moulin, A. (2004), A Comparison of Two Methods Measuring Surface Coastal Circulation: CODAR Versus Satellite Estimation Using the Maximum Cross‐Correlation Technique, Fla. Inst. of Technol, Melbourne, Fla. Papenberg, N., A. Bruhn, T. Brox, S. Didas, and J. Weickert (2006), Highly accurate optic flow computation with theoretically justified warping, Int. J. Comput. Vis., 67(2), 141–158, doi:10.1007/s11263-005-3960-y.


Prasad, J. S., A. S. Reajawat, Y. Pradhan, O. S. Chauhan, and S. R. Nayak (2002), Retrieval of sea surface velocities using sequential ocean color monitor data, Proc. Ind. Acad. Sci. Earth Planet. Sci., 111(3), 189–195. Tokmakian, R., P. T. Strub, and J. McClean‐Padman (1990), Evaluation of the maximum cross correlation method of estimating sea surface velocities from sequential satellite images, J. Atmos. Oceanic Technol., 7(6), 852–865, doi:10.1175/1520-0426(1990)0072.0.CO;2. Wu, Q. X. (1995), A correlation‐relaxation‐labeling framework for computing optical flow‐template matching from a new perspective, IEEE Trans. Pattern Anal. Mach. Intell., 17(9), 843–853, doi:10.1109/ 34.406650. Wu, Q. X., and D. Pairman (1995), A relaxation labeling technique for computing sea surface velocities from sea surface temperature, IEEE Trans. Geosci. Remote Sens., 33(1), 216–220, doi:10.1109/36.368206. Wu, Q. X., D. Pairman, S. J. McNeill, and E. J. Barnes (1992), Computing advective velocities from satellite images of sea surface temperature, IEEE Trans. Geosci. Remote Sens., 30(1), 166–176, doi:10.1109/ 36.124227. W. Chen, Remote Sensing Division, Naval Research Laboratory, Code 7233, 4555 Overlook Ave., Washington, DC 20375, USA. (wei.chen@ nrl.navy.mil)

