IEEE TRANSACTIONS ON NANOBIOSCIENCE, VOL. 7, NO. 4, DECEMBER 2008
Estimating Locations of Quantum-Dot-Encoded Microparticles From Ultra-High Density 3-D Microarrays Pinaki Sarder*, Student Member, IEEE, and Arye Nehorai, Fellow, IEEE
Abstract—We develop a maximum likelihood (ML)-based parametric image deconvolution technique to locate quantum-dot (q-dot)-encoded microparticles from three-dimensional (3-D) images of an ultra-high density 3-D microarray. A potential application of the proposed microarray imaging is assay analysis of gene, protein, antigen, and antibody targets. This imaging is performed using a wide-field fluorescence microscope. We first describe our problem of interest and the pertinent measurement model, assuming additive Gaussian noise. We use a 3-D Gaussian point-spread-function (PSF) model to represent the blurring of the widefield microscope system. We employ parametric spheres to represent the light intensity profiles of the q-dot-encoded microparticles. We then develop the estimation algorithm for the single-sphere-object image, assuming that the microscope PSF is totally unknown. The algorithm is tested numerically and compared with the analytical Cramér-Rao bounds (CRB). To apply our analysis to real data, we first segment a section of the blurred 3-D image of the multiple microparticles using a k-means clustering algorithm, obtaining 3-D images of single-sphere-objects. Then, we process each of these images using our proposed estimation technique. In the numerical examples, our method outperforms the blind deconvolution (BD) algorithms in high signal-to-noise ratio (SNR) images. For the case of real data, our method and the BD-based methods perform similarly for the well-separated microparticle images.

Index Terms—Fluorescence microscope, maximum likelihood estimation, microparticle, q-dot, 3-D microarray.
I. INTRODUCTION

We develop a parametric image deconvolution technique based on a maximum likelihood (ML) approach to locate quantum-dot (q-dot)-encoded microparticles from three-dimensional (3-D) images of ultra-high density 3-D microarrays. The estimated microparticle locations are useful as prior information for any next-stage imaging analysis with the microarray, such as target quantification. In 3-D microarrays [1], a large number of reagent-coated and q-dot-embedded microparticles are randomly placed in each
Manuscript received March 10, 2008; revised June 05, 2008. Current version published February 06, 2009. This work was supported in part by the NSF under Grants CCR-0330342 and CCR-0630734, in part by the NIH under Grant 1T90 DA022871, and in part by the Mallinckrodt Institute of Radiology, Washington University. Asterisk indicates corresponding author. *P. Sarder is with the Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO 63130 USA (e-mail:
[email protected]). A. Nehorai is with the Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA (e-mail:
[email protected]). Digital Object Identifier 10.1109/TNB.2008.2011861
array spot to capture multiple targets. The targets are antigens1 and proteins [1], [2]. The imaging is performed using a standard widefield fluorescence microscope. The microscope imaging properties distort the 3-D object image [3]. We use a parametric 3-D Gaussian point-spread-function (PSF) model to represent the widefield microscope blurring. The microscope output signal is represented by the light from the spherically shaped objects distorted by the microscope aberration. The CCD detector2 attached to the microscope functions as a photon counter, introducing Poisson noise to the output signal. In addition, analog-to-digital (A/D) conversion noise is introduced into the measurement at the CCD detector output. We approximate the Poisson-noise-corrupted object signal blurred by the microscope PSF using a signal with additive Gaussian noise. This approximation is justified because the light signals from q-dots are captured with a high signal-to-noise ratio (SNR) during the experimental data collection [4], [5]. The microscope- and measurement-noise problems have to be solved in order to make any qualitative or quantitative analysis with the distortion-free object images [6]. In the current paper, we develop an ML-based deconvolution technique to locate the positions of q-dot-encoded microparticles from 3-D microarray images. In microscopy imaging, ML-based algorithms are useful for estimating specific information of interest (e.g., microparticles' centers and their q-dot intensity levels) using parametric formulations when the object model is simple, such as a sphere in our case. In these simple cases, the objects can be easily modeled using simple parametric forms with the unknown information of interest as parameters. Then, a problem-specific deconvolution algorithm can be developed using popular ML or least-squares-based techniques instead of using commercial deconvolution software, which is costly and time intensive to run.
We compare the performance of our proposed algorithm with those of the conventional blind-deconvolution (BD) algorithm embedded in MATLAB [8] and the parametric blind-deconvolution (PBD) method [7] using numerical examples. We select these two reference algorithms as representatives of many existing microscopy image deconvolution methods. These two algorithms, like many other conventional techniques, are developed in general for arbitrary object shapes. Hence, the existing methods are often very ad hoc. In contrast, in our application,

1 An antigen is a substance that stimulates an immune response, especially the production of antibodies.

2 Charge-coupled device, a light-sensitive chip used for image gathering.
1536-1241/$25.00 © 2008 IEEE Authorized licensed use limited to: WASHINGTON UNIVERSITY LIBRARIES. Downloaded on February 7, 2009 at 11:06 from IEEE Xplore. Restrictions apply.
a realistic prior knowledge of the object shape is available. We emphasize that it is possible to employ such prior knowledge in the image deconvolution algorithm. As a consequence, the deconvolution analysis becomes a pure parameter estimation problem, and no blind approach is needed to solve it. In this paper, we present a simple parametric ML-based estimation framework for deconvolution microscopy of images of simply shaped objects. Our proposed analysis is analytically and numerically very tractable, unlike the PBD algorithm. See Section V for more details. In our numerical examples, our method significantly outperforms the other two methods at high SNR; BD performs the worst. Our algorithm achieves the Cramér-Rao bound (CRB) for sufficiently large data, as should be expected for ML estimation methods; the other two do not. Our algorithm performs better since it incorporates prior information of the object shape (spherical), whereas the other methods (BD and PBD) do not have that flexibility. We compare the performance of our algorithm with the BD algorithm using real-data images. We observe that the BD-based method performs as well as our proposed method for well-separated microparticle images, whereas the performances differ for microparticles that are very close to each other. This result motivates calibrating the distance between microparticles in our future research in order to improve the estimation accuracy.
II. PROBLEM DESCRIPTION

In this section we discuss the 3-D microarray with q-dot-embedded microparticles [4], [5], its image-acquisition procedure, our nomenclature convention, and the motivation for our current work.

A. Target-Capturing Process

In the 3-D ultra-high-density microarray as proposed in [4] and [5], microparticles (3–4 µm in diameter) coated with capture reagents are immobilized in an array (see Fig. 1(a)). Each microparticle is coated with antibodies3 or antibody fragments as receptors that bind specific agents. The identities of the antibodies on the microparticles are determined by interrogating the semiconductor q-dot bar code [9], [10], enabling identification of the microparticles among a host of others. Q-dots do not saturate or photo-disintegrate as organic fluorophores do under intense illumination. This quality greatly enhances the detection sensitivity and produces a high-SNR image at the detector output. The ensemble with different receptors is shown schematically in Fig. 1(a), with each color representing a specific code assigned to a given receptor [4], [5]. During the sensor operation, an input fluid stream with targets is passed tangentially through the sensor using an appropriate microfluidic technique. The targets encounter all the microparticles. The tortuous flow geometry leads to intimate contact between targets and receptors. Therefore, if an intended target is

3 An antibody is a large Y-shaped protein used by the immune system to identify and neutralize foreign objects such as bacteria and viruses. Each antibody binds to a specific antigen unique to its target.
Fig. 1. (a) A microarray ensemble containing 25 spots; each spot contains a large number of microparticles coated with capture reagents. (b) A target molecule is sensed by being captured on an optically encoded microparticle; photoluminescent signals are generated from the microparticle and nanoparticle q-dots.
contained in the input fluid stream, it binds to the microparticle surface and, given sufficient exposure, accumulates to a detectable level [4], [5]. We present a schematic diagram of the target-capturing process in Fig. 1(b). A sector of a polystyrene microparticle contains an optical bar code of q-dots that luminesce at a distinct set of wavelengths (e.g., the red light wavelength in Fig. 1(b)). The microparticle surface contains receptor molecules. Potential target molecules bind to the receptor molecule at active sites on its surface. The binding of the target molecules to the microparticle surface is detected via additional receptors decorating the surface of polystyrene nanoparticles containing q-dots that luminesce at unique wavelengths (e.g., the blue light wavelength in Fig. 1(b)) that are shorter than the q-dot wavelengths of the microparticles. During the sensor operation, a cocktail of nanoparticles is periodically released into the input fluid stream. The nanoparticles then bind to the target molecules that were earlier bound to the microparticles. This process results in site-specific enhancements of the luminescence at a specific wavelength [4], [5] and hence greatly increases the detection sensitivity of the target molecules on the microparticles. Example: We illustrate a simple example in Fig. 2. Two types of microparticles are coded with q-dots of emission wavelengths λ1 and λ2. The orange- and red-colored microparticles, with q-dots having emission wavelengths λ1 and λ2, contain antibody A and antibody B, respectively, as receptors. The microparticles can be
identified and localized from the luminescence corresponding to their respective q-dot light signals at the respective wavelengths. The target solution contains antigen A. The blue-colored nanoparticles contain q-dots with emission wavelength λ3, which is different from and smaller than λ1 and λ2. Each nanoparticle contains either antibody A or B as its receptors. Since the targets are present only on the microparticle containing antibody A, the nanoparticles bind only there. All the q-dots produce light upon excitation when the whole ensemble is observed under UV light. The q-dot light at wavelength λ3 appears wherever the microparticle with antibody A is present, confirming the presence of antigen A and the absence of antigen B in the target solution. This light signal also gives the amount of antigen A in the target solution. In summary, signals at wavelengths λ1 and λ2 identify and localize the corresponding microparticles. The target molecules are quantified from the q-dot signal at wavelength λ3.

B. Imaging

The 3-D image of the specimen is acquired from a series of two-dimensional (2-D) images by focusing a computer-controlled epi-fluorescence microscope at different planes of the ensemble structure [6], [11]–[13]. A cooled CCD records images at multiple wavelengths using a liquid-crystal tunable filter. The cooled CCD introduces almost negligible dark detector noise in the acquired image [4]. In this paper, we assume that a significant amount of the distortion in the output image is characterized by (a) the microscope 3-D PSF, (b) Poisson noise introduced by the CCD detector, and (c) additive A/D conversion noise introduced at the CCD detector output.

C. Nomenclature

The 3-D image of the specimen is acquired along the optical direction from a series of 2-D focal-plane (also called radial-plane) images. The optical direction is defined along the z axis in the Cartesian coordinate system. Any direction parallel to the focal plane is denoted as a radial direction. We show these concepts in Fig.
3(a) and the meridional section of the acquired 3-D image in Fig. 3(b). The meridional section is defined as a plane along the optical axis (perpendicular to the focal plane) passing through the origin.

D. Motivation

In Fig. 4, we present the bar-code-q-dot-light-intensity image on the focal plane of reference at z = 0 of an assembly of microparticles (without targets) from a microarray spot. We focus on estimating the microparticle locations in this image. This experiment was performed with each microparticle bar code containing a single type of q-dot in a similar amount. In general, the 3-D microarray setup contains microparticle bar codes with multiple types of q-dots in varying amounts. Our proposed method can estimate the microparticle centers and their intensity levels in this general setup [4], [5]. However, in this paper, we consider only the experimental setup in which each microparticle contains q-dots of the same amount and type. The presence of varying amounts and multiple types of q-dots does not change the performance of our proposed method.
The estimated locations of the microparticles are useful as prior information for any next-stage imaging analysis with this 3-D microarray, such as target quantification. The targets are quantified from the q-dot light signals of the nanoparticles. In the quantification analysis, the q-dot signal profile of the nanoparticles on each microparticle forms a spherical-shell-like shape. Here, the inverse problem requires estimating the microparticles' locations as part of the estimation process. Prior knowledge of the shells' centers (which are the same as the corresponding microparticle center locations) greatly enhances the estimation speed of the target quantification.

III. MEASUREMENT MODEL

In this section, we first discuss our proposed single-sphere object model for the q-dot light intensity profile of the microparticles. Then we discuss a 3-D Gaussian PSF model for a widefield fluorescence microscope. Finally, we present the measurement model for estimation analysis.

A. Single-Sphere Object Model

We model the bar-code-q-dot-light-intensity profile of each microparticle at a specific wavelength using parametric spheres as follows:
s(x, y, z) = α if (x − x0)² + (y − y0)² + (z − z0)² ≤ r², and 0 otherwise,   (1)

where s(x, y, z) denotes the bar-code-q-dot-light-intensity profile of a single microparticle at a specific wavelength; α is the unknown intensity level, which is proportional to the number of specific q-dots present inside the microparticle; and (x0, y0, z0) and r are the unknown center and known radius of the microparticle, respectively. Hence, the unknown parameter vector of the object is denoted by θobj = [α, x0, y0, z0].

Here we use a homogeneous intensity profile for modeling the object. This reduces the number of unknowns in our parameter estimation algorithm (see Section IV) and increases the processing speed. The microparticles, and hence the objects, are numerous in the microarray (see Fig. 4), and analyzing all of them in a reasonable time requires fast processing. Hence, we estimate an average intensity value per voxel of every object, rather than an inhomogeneous intensity profile. This is a trade-off between fast processing and analysis with a realistic object intensity distribution. Note that employing an inhomogeneous intensity profile in (1) makes the estimation analysis overly ill-posed, and additional prior information on the object intensity would be required for satisfactory performance.

To verify our model, we use real data and present a simple illustration showing that the bar-code-q-dot-light-intensity profile of a single microparticle resembles a sphere. We use the BD algorithm for this analysis. We use the deconvblind command of MATLAB for all the BD operations in this paper, with an initial PSF represented by ones in all the voxels. We assume that the blurred microparticle image as shown in Fig. 4 is analytically represented by the object convolved by the PSF with additive
Fig. 2. Example of the target-capturing process: target solution, the ensemble before and after target capturing, and the final ensemble with q-dot-encoded nanoparticle tag.
Fig. 4. Bar-code-q-dot-light-intensity image on the focal plane of reference at 0 of an assembly of microparticles from a spot of the 3-D microarray device.
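As a concrete illustration of the homogeneous single-sphere model of Section III-A, the profile in (1) can be sketched on a voxel grid as below. This is a minimal sketch: the function name and the voxel-unit parameters are our own, not the paper's notation.

```python
import numpy as np

def sphere_object(shape, center, radius, alpha):
    """Homogeneous-intensity sphere, a sketch of the object model (1).

    `alpha` is the unknown intensity level, `center` the unknown center
    (x0, y0, z0), and `radius` the known microparticle radius, all in
    voxel units.
    """
    zz, yy, xx = np.indices(shape)
    cz, cy, cx = center
    inside = (xx - cx) ** 2 + (yy - cy) ** 2 + (zz - cz) ** 2 <= radius ** 2
    return alpha * inside.astype(float)
```

For example, `sphere_object((33, 33, 33), (16, 16, 16), 8, 100.0)` yields a 33³ volume whose interior voxels all carry the single average per-voxel intensity discussed in the text.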
use an analytical estimation method to fit our proposed spherical-shaped object model in (1) to the statistical measurement model for estimating the center parameters, as we discuss in Section III-C.

B. Three-Dimensional Gaussian PSF Model

The PSF is the 3-D impulse response4 of the fluorescence microscope system, used to characterize the out-of-focus light. The 3-D impulse response of the fluorescence microscope is not an exact 3-D impulse [11], because the finite lens aperture introduces diffraction ring patterns in radial planes. In addition, the measurement setup usually differs from the manufacturer's design specifications. Therefore, the microscope PSF becomes phase aberrated, with symmetric features in radial planes and asymmetric features along the optical direction [6]. The classical widefield microscope 3-D PSF model of Gibson and Lanni [17], [18] is given by
h(x, y, z) = | C ∫₀¹ J₀( k · NA · ρ · √(x² + y²) / M ) exp( j k W(ρ, z) ) ρ dρ |²   (2)
Fig. 3. A schematic view of (a) the focal (radial) plane, the optical direction, and the radial direction in a Cartesian coordinate system; (b) the meridional plane in a Cartesian coordinate system.
Gaussian noise [14] (see also Section III.C for more discussion). Following this assumption, we apply the BD algorithm to a randomly chosen microparticle image (see Fig. 5(a)). In Fig. 5(b), we show a meridional section of the resulting estimated object intensity profile, which resembles a sphere. This analysis justifies our proposed single-sphere object model. It is worth mentioning that the BD-based method follows conventional blind deconvolution techniques (see Appendix I) [15], [16]. However, BD-estimated object intensity images do not give precise estimates of the microparticle centers. Hence, we
where J₀ is the Bessel function of the first kind, k = 2π/λ is the wave number, λ is the wavelength of the fluorescent signal, NA is the numerical aperture of the microscope, and M is the magnification of the lens. The vector of setup parameters contains the true and ideal measurement values: the refractive indices of the immersion oil, the specimen, and the cover slip; the thicknesses of the immersion oil and the cover slip; the distance between the back focal and the detector planes; and the depth at which the point source is located in the specimen. Here z is the distance from the in-focus plane to the point of evaluation, ρ is the normalized radius in the back focal plane, and W is the optical

4 The 3-D impulse response is the output intensity profile of the microscope system when the input is a point light source in space, i.e., a 3-D impulse. A 3-D impulse represents the limiting case of a light pulse in space made very short in volume while maintaining a finite volume integral (thus giving an infinitely high intensity peak) [14].
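The centered, separable 3-D Gaussian surrogate that we adopt in place of (2) (model (3), introduced below) can be sketched as follows. The function and its σ parameters are illustrative names of ours, not the paper's notation.

```python
import numpy as np

def gaussian_psf(shape, sigma_r, sigma_z):
    """Centered, separable 3-D Gaussian PSF, a sketch of model (3),
    normalized to unit L-infinity norm (peak value 1), as assumed in
    Section IV. `sigma_r` sets the radial blur, `sigma_z` the axial blur.
    """
    zz, yy, xx = np.indices(shape, dtype=float)
    cz, cy, cx = [(n - 1) / 2.0 for n in shape]
    h = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma_r ** 2)
               - (zz - cz) ** 2 / (2 * sigma_z ** 2))
    return h / h.max()  # L-infinity normalization: max over voxels is 1
```

Choosing `sigma_z > sigma_r` reproduces the familiar axial elongation of widefield PSFs while keeping only two shape parameters to estimate.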
Fig. 5. Meridional sections of (a) a microparticle’s blurred bar-code-q-dot-light-intensity image and (b) the resulting BD-estimated object intensity profile. Meridional sections of (c) the BD-estimated PSF intensity profile from the image as shown in Fig. 5(a) and (d) its least-squares fitted version following the model (3).
path difference (OPD) function between the corresponding systems under design and nondesign conditions [17]. The Gibson and Lanni PSF model in (2) is computationally expensive because the integration in the formula requires intensive numerical evaluation. Also, it is worth noting that our approach requires a large number of microparticle images to be processed, as is evident from Fig. 4. Hence, we perform the estimation analysis using a 3-D Gaussian PSF model, as proposed in [20]:

h(x, y, z) = A exp( −(x² + y²)/(2σr²) ) exp( −z²/(2σz²) ),   (3)

where the unknown parameter vector is θpsf = [A, σr, σz]. The model (3) for representing the widefield fluorescence microscope PSF assumes that the Gaussian functions are centered at the origin of the PSF and that they are separable. The origin of the PSF is denoted by the coordinate (0, 0, 0) in (2) and (3). The advantage of using a centered separable 3-D Gaussian PSF model is that it preserves the symmetry of the
Gibson and Lanni PSF model along the focal planes and the asymmetry of the Gibson and Lanni PSF model along the optical direction. We also assume in our proposed parameter estimation algorithm (as presented in Section IV) that the 3-D Gaussian PSF is normalized according to the L∞ norm5; hence, its peak value is one. It is worth mentioning that in [20] the authors conclude that PSF model (3) is a not-very-accurate approximation of the original PSF model in (2). However, we use this model in our analysis, since it needs a smaller number of unknown parameters for estimating an image with a large number of microparticles. In general, a 3-D PSF can be obtained by three different techniques: experimental, analytical, and computational [6]. In the experimental methods, images of one or more point-like objects are collected and used to obtain the PSF. These methods have the advantage that the PSF closely matches the experimental setup. However, images obtained with such point-like objects
5 If x = [x₁, x₂, …, xₙ]ᵀ is a vector, then its L∞ norm is max(|x₁|, |x₂|, …, |xₙ|) [19].
have a very poor SNR unless the system is specially optimized. In the analytical methods, the PSF is calculated using the classical diffraction-based Gibson and Lanni model (2). In the computational methods, the PSF and object functions are estimated simultaneously by using BD algorithms [6]. Such is the case when not all the PSF parameters are known or a PSF measurement is difficult to obtain. To verify that PSF model (3) is useful for our application of interest, we use real data and present an illustration showing that (3) is quite similar to the widefield microscope PSF. We estimate the widefield microscope PSF from the real data using the BD algorithm. Furthermore, we use a least-squares fit [19] on the BD-estimated PSF using model (3) to visualize the similarity between them. In Fig. 5(c), we show the meridional section of the BD-estimated PSF that is obtained by applying the BD algorithm to the image shown in Fig. 5(a). We fit this BD-estimated PSF using model (3), and the meridional section of the resulting image is shown in Fig. 5(d). The least-squares error is computed as 4.42%. This confirms that we do not lose much estimation accuracy using PSF model (3).

C. Statistical Measurement Model

For the single spherical-shaped object (1), the imaging process is expressed analytically as
g(x, y, z) = s(x, y, z) * h(x, y, z),   (4)

where g(x, y, z) is the output of the convolution process between s and h at the voxel (x, y, z), and * is the convolution operation6 [21]. The CCD detector attached to the fluorescence microscope behaves as a photon counter, introducing Poisson noise to g(x, y, z), and that process is given by
N(x, y, z) ~ Poisson( γ g(x, y, z) ),   (5)

where γ is the reciprocal of the photon-conversion factor [21], [22], and N(x, y, z), the number of photons measured in the CCD detector, is a Poisson random variable with parameter γ g(x, y, z). Since the q-dot light from the specimen produces very high SNR at the CCD detector, and assuming that the parameter value is sufficiently large, we approximate (5) using an additive Gaussian noise model [23]. Independent of these, the A/D conversion error is introduced in the measurement at the CCD detector output. We model this error using an additive Gaussian random variable. Under these assumptions and considerations, the measurement is given by
Fig. 6. Block diagram representation of the imaging process as discussed in Section III.
where w₁(x, y, z) is additive Gaussian noise, independent from voxel to voxel, with mean zero and variance depending on g(x, y, z) at the voxel (x, y, z), and w₂(x, y, z) is additive Gaussian noise independently and identically distributed (i.i.d.)7 from voxel to voxel. Note that in (6), w₁ represents the Gaussian approximation of the Poisson random variable in (5) and w₂ represents the A/D conversion process error of the CCD detector. In Fig. 6, we illustrate the whole imaging process using a block diagram.

IV. PARAMETER ESTIMATION

We develop a parameter estimation technique for estimating the unknown parameters from the acquired 3-D image segment of a single sphere. In this section, we first describe our estimation technique for an unknown 3-D Gaussian PSF. Then, we compute the Cramér-Rao bound (CRB), which is the lowest bound on the variance of error of any unbiased estimator of the parameters under certain regularity conditions [23]. The deconvolution problem is ill-posed since the unknown parameters in the forward model (6) include (i) the object parameter vector θobj, (ii) the PSF parameter vector θpsf, and (iii) the noise parameters. We consider the following approximation of (6). The random variables w₁(x, y, z) and w₂(x, y, z) are independent at every voxel. Hence, their sum is a Gaussian random variable with zero mean, independent from voxel to voxel. We approximate this sum with a single Gaussian random variable of mean zero and an appropriate variance [23]; hence, the measurement at each voxel follows a Gaussian distribution and is independent from voxel to voxel. The likelihood of the measurement for a voxel coordinate (x, y, z) is given by,
(7)

We estimate the unknown quantities as follows,
y(x, y, z) = g(x, y, z) + w₁(x, y, z) + w₂(x, y, z),   (6)

6 For two discrete functions a(x) and b(x), the convolution operation is given by a(x) * b(x) = Σᵧ a(y) b(x − y) [14].
7Statistically independent observations that are drawn from a statistically identical probability distribution function [23].
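The statistical measurement chain of Section III-C — blur (4), Poisson photon counting (5), and additive Gaussian A/D noise (6) — can be simulated as sketched below. The parameter names `gamma` and `sigma_ad` are illustrative stand-ins for the paper's symbols.

```python
import numpy as np
from scipy.signal import fftconvolve

def simulate_measurement(obj, psf, gamma=1.0, sigma_ad=1.0, seed=0):
    """Sketch of the imaging chain: g = s * h as in (4), Poisson photon
    counting at the CCD as in (5), and additive Gaussian A/D conversion
    noise as in (6). `gamma` plays the role of the reciprocal
    photon-conversion factor; `sigma_ad` is the A/D noise std."""
    rng = np.random.default_rng(seed)
    g = fftconvolve(obj, psf, mode="same")     # blurred object, (4)
    g = np.clip(g, 0.0, None)                  # guard tiny FFT negatives
    photons = rng.poisson(gamma * g)           # CCD photon counts, (5)
    return photons / gamma + rng.normal(0.0, sigma_ad, obj.shape)  # (6)
```

At the high photon counts typical of q-dot signals, the Poisson term is well approximated by the Gaussian w₁ of (6), which is the approximation the estimation analysis relies on.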
(8)

where arg min stands for the argument of the minimum, i.e., the value of the given argument for which the given expression attains its minimum. Differentiating with respect to the unknown and equating the resulting analytical expression to zero, we obtain,
least-squares estimates in a linear model when the noise is additive Gaussian and i.i.d. from one measurement to the other [25]. We define
(14) We rewrite (13):
(15)
(9)

Simplifying (9),
We assume the available measurements are y(x, y, z), where x ranges from 1 to Nx, y from 1 to Ny, and z from 1 to Nz. With these assumptions and notations, we group the measurements into a vector form: (16)
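The reduction of ML to least squares in (13)–(18) can be sketched numerically: under i.i.d. Gaussian noise, the estimate minimizes the squared fit error between the measurement vector and the blurred model. The smooth (sigmoid-edged) sphere below is an illustrative relaxation of (1), used so that a derivative-free optimizer can move the center continuously; all names are ours, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.signal import fftconvolve

def fit_sphere(y, psf, radius, p0, edge=0.5):
    """Least-squares fit of a (smoothed) sphere model to a blurred image,
    a sketch of the ML estimator (18). `p0` is an initial guess
    [cz, cy, cx, alpha]; `edge` softens the sphere boundary so the cost
    varies smoothly with sub-voxel center shifts."""
    zz, yy, xx = np.indices(y.shape, dtype=float)

    def model(p):
        cz, cy, cx, alpha = p
        d = np.sqrt((xx - cx) ** 2 + (yy - cy) ** 2 + (zz - cz) ** 2)
        s = alpha / (1.0 + np.exp((d - radius) / edge))  # soft sphere
        return fftconvolve(s, psf, mode="same")          # blurred model

    cost = lambda p: np.sum((y - model(p)) ** 2)         # LS criterion
    return minimize(cost, p0, method="Nelder-Mead").x
```

With a reasonable initial guess (e.g., the centroid of the segmented blob), the fitted center approaches the true one as the SNR grows, in line with the CRB analysis that follows.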
(10)

Note that (10) can be represented in a quadratic form. Since the leading coefficient is always positive, a unique solution to (10) exists when two conditions, (i) and (ii), are satisfied. Condition (i) holds by construction, and condition (ii) holds in general for images captured at high SNR. Solving (10),
where the model and noise terms are N-dimensional vectors whose components are the values of g(x, y, z) and the noise at the voxel coordinate (x, y, z), respectively, and similarly for the measurement vector. The log-likelihood function for estimating the parameters using forward model (16) is given by

(17)

where ‖·‖ denotes the Euclidean vector-norm operation8. The ML estimate of the parameters (see, e.g., [24]) is
(11)

We approximate further, and thus we have,
(18) (12)
The above approximations in (11)–(12) are acceptable for the high-SNR images produced by q-dot-based imaging and for large sample sizes. We conclude from (12) that the estimation is essentially equivalent to fitting g(x, y, z) to the available measurement y(x, y, z) for each voxel coordinate (x, y, z). Hence, we approximate (6) as follows,
(13)

where we assume the model-fitting error w(x, y, z) is i.i.d. from voxel to voxel. Thus, the ML estimates of the parameters θobj and θpsf in our analysis are equivalent to their corresponding least-squares estimates, which are in fact independent of the noise statistics. The ML estimates of the unknown parameters are equivalent to their
where arg max stands for the argument of the maximum, i.e., the value of the given argument for which the given expression attains its maximum, P is the projection matrix on the column space of the model [25], and P⊥ is the complementary projection matrix [25], given as
(19)

CRB: The Cramér-Rao bound (CRB) analysis is helpful for analyzing the estimation accuracy using our approximated forward model (13). The CRB is the lowest bound on the variance of any unbiased estimator under certain regularity conditions. It has the important features of being universal, i.e., independent of the algorithm used for estimation among the unbiased ones, and asymptotically tight, meaning that for certain distributions,

8 If x = [x₁, x₂, …, xₙ]ᵀ is a vector, then its Euclidean norm is given by ‖x‖ = √(x₁² + x₂² + ⋯ + xₙ²) [19].
there exist algorithms that achieve the bound as the number of samples becomes large. Note that the ML estimate achieves the CRB asymptotically. In brief, the CRB is a performance measure that can be used to determine the best accuracy we can expect for a certain model [25]. We compute the mean μ(θ) of the measurement vector. We denote by F the Fisher information matrix, whose (i, j)th entry is

F(i, j) = (1/σ²) (∂μ(θ)/∂θᵢ)ᵀ (∂μ(θ)/∂θⱼ),   (20)

where θᵢ is the ith element of the parameter vector θ. The CRB for the unbiased estimates of the parameters in θ is derived from the diagonal elements of the matrix F⁻¹ [25]. The partial derivative terms in (20) can easily be computed using the expression of μ(θ); hence we do not include those details here.
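The Fisher-information computation (20) can be sketched numerically for the i.i.d. Gaussian model (13), where it reduces to F(i, j) = (1/σ²)(∂μ/∂θᵢ)ᵀ(∂μ/∂θⱼ). The finite-difference implementation below is illustrative, not the paper's closed-form derivation.

```python
import numpy as np

def crb_diagonal(mean_fn, theta, sigma2, eps=1e-5):
    """Numerical CRB sketch for the i.i.d. Gaussian model (13): build the
    Jacobian of the mean mu(theta) by central finite differences, form the
    Fisher matrix (20), and return the diagonal of its inverse."""
    theta = np.asarray(theta, dtype=float)
    cols = []
    for i in range(theta.size):
        step = np.zeros_like(theta); step[i] = eps
        cols.append((mean_fn(theta + step) - mean_fn(theta - step)).ravel()
                    / (2 * eps))
    J = np.stack(cols, axis=1)              # N x p Jacobian of mu(theta)
    fisher = J.T @ J / sigma2               # Fisher information, as in (20)
    return np.diag(np.linalg.inv(fisher))   # CRB: diagonal of F^{-1}
```

For a linear mean μ(θ) = Aθ this reproduces the textbook result σ² diag((AᵀA)⁻¹), which is a convenient sanity check on the finite-difference step.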
where P(·) is the pupil function of the objective lens with its own unknown parameter vector. The pupil amplitude function is given as

(22)

where one scale parameter and the cut-off frequency of the objective lens are unknown [26]. The phase term is given by

(23)

where the remaining quantities are given as follows:
(24)

V. NUMERICAL EXAMPLES

We present two numerical examples comparing the performance of our proposed parameter estimation method with the PBD- and BD-based methods. We also compare the mean-squared errors (MSE) of the estimated parameters with their analytical bounds on the variance of error.

A. Examples: Data Generation

1) Example 1: In this example, we aim to demonstrate the robustness of our proposed algorithm by comparing its performance with the BD- and PBD-based methods. Here, we simulate the data following the true image distortion phenomenon. In detail, we produce data by not following exactly the object model (1), the PSF model (3), or the forward model (15). Here we consider (i) the situation in which the q-dots are randomly placed, and each of their light intensity profiles follows a 3-D Gaussian function in space inside each microparticle; (ii) the Gibson and Lanni PSF model (2); (iii) the under-sampling phenomenon encountered while acquiring the data in real life; (iv) the photon-count process in the CCD detector; and (v) additive A/D conversion noise at the CCD detector output. We generate image data for a single-sphere-object of known radius on a 551 × 551 × 551 sampling lattice. Each q-dot light intensity profile in space is produced using a symmetric 3-D Gaussian function. Here we assume that such a light intensity profile is deterministic and is distributed in space based on the variance parameter of the Gaussian function. The q-dot centers are placed in each voxel location inside the object with probability following a binomial random variable. We assume a large numerical value for the maximum light intensity of each q-dot at its corresponding center location. We generate the synthetic PSF following a parametric PSF model as proposed in [7]. In this model, the exponential phase term inside the Gibson and Lanni model (2) is replaced by

(21)
where is the unknown parameter of , and

(25)

where is the unknown parameter vector of . Hence, is the unknown parameter vector of the pupil function of the objective lens. We generate synthetic PSF data using the values and .

We generate the microscope output by convolving the simulated object image with the PSF generated over the same grid. Image formation is a continuous-space process, whereas the deconvolution takes place on sampled images. To incorporate this difference into our simulation, we sample every fifth voxel intensity along every dimension to obtain a reduced image with voxels and . Using this image, we generate the photon counts in the CCD detector following (5) with . We generate the output data of the CCD detector by introducing random, additive, and i.i.d. Gaussian noise of mean zero and variance (see (5)). The noise introduced is i.i.d. from voxel to voxel. We vary following , where is the maximum intensity value of for . In Fig. 7, we show the MSEs of the estimated microparticle center parameter as a function of , which is essentially a linear function of the A/D conversion noise variance at the CCD detector output, since [dB], where .

2) Example 2: In this example, we aim to compare the performance of the estimation of the unknown parameters with their corresponding CRBs using our proposed algorithm and the BD- and PBD-based algorithms. We simulate the data here using the forward model (15). We generate image data for a single-sphere-object of radius on a 111 × 111 × 111 sampling lattice with voxel size . We use for the data generation. We generate the synthetic PSF following (3).
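As a concrete illustration of the simulation pipeline described above (spherical object, Gaussian PSF blur, photon counting at the CCD detector, then additive A/D conversion noise), here is a minimal numpy sketch. The lattice size, intensities, PSF width, and noise level below are assumptions for illustration, not the values used in the paper, and a plain Gaussian kernel stands in for the parametric PSF of [7]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and intensities for illustration; the paper uses a
# much larger 551x551x551 lattice with experiment-specific voxel sizes.
N, radius, peak, sigma = 64, 12.0, 100.0, 2.0

# Single-sphere-object with a homogeneous intensity profile.
z, y, x = np.indices((N, N, N)) - N // 2
obj = np.where(x**2 + y**2 + z**2 <= radius**2, peak, 0.0)

# Normalized 3-D Gaussian kernel standing in for the microscope PSF.
psf = np.exp(-(x**2 + y**2 + z**2) / (2.0 * sigma**2))
psf /= psf.sum()

# Circular convolution via FFT (the object sits well inside the volume,
# so wrap-around is negligible here).
blurred = np.real(np.fft.ifftn(np.fft.fftn(obj) * np.fft.fftn(np.fft.ifftshift(psf))))

# Photon-count (shot-noise) process at the CCD detector ...
photons = rng.poisson(np.clip(blurred, 0.0, None)).astype(float)

# ... followed by additive, i.i.d., zero-mean Gaussian A/D conversion noise.
measured = photons + rng.normal(0.0, 5.0, size=photons.shape)
```

Because the kernel is normalized, the blur conserves total intensity, which gives a quick sanity check on the forward model.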
Authorized licensed use limited to: WASHINGTON UNIVERSITY LIBRARIES. Downloaded on February 7, 2009 at 11:06 from IEEE Xplore. Restrictions apply.
We generate the output of the microscope and the CCD detector by convolving the simulated object image with the microscope PSF generated over the same grid, with additive measurement noise following the forward model (13); the noise is random, additive, i.i.d. Gaussian with mean zero and variance . We plot the MSEs of the estimated microparticle center parameter as a function of SNR for these data in Fig. 8. We define the SNR in Example 2 as
(26)

B. Parameter Estimation

For both examples, we perform the estimation using the following three techniques and compare their performances.

1) Our proposed parametric method: We estimate the microparticle center parameters using (18). We assume that the sphere parameters and the PSF parameters are unknown.

2) Parametric blind-deconvolution (PBD)-based method (see Appendix I) [7]: We estimate the object function as follows:
Fig. 7. Mean-square errors of the estimated microparticle center parameter x as a function of varying A/D conversion noise variance at the CCD detector output. The simulated data is generated following the original physical phenomenon (see Section V). Our proposed method outperforms the BD- and PBD-based methods for low A/D conversion noise levels.
(27)

where , , and denote the object, PSF, and output measurement, respectively; is the estimated object function at the th iteration; and [7]. In this paper, we assume that the PSF is known a priori for estimating the object function using PBD. We define as the error at the th iteration. We continue iterating (27) until .

3) Conventional blind-deconvolution (BD)-based method (see Appendix I) [8]: In this method, we simultaneously estimate the unknown PSF and object functions using the deconvblind command of MATLAB.

Microparticle Location Estimation: Our proposed method (see Section IV) directly estimates the microparticle center parameters. However, the BD and PBD algorithms do not estimate these quantitatively. In those cases, we first set to zero the voxel intensities of the estimated object functions below a certain threshold level. Then we average the voxel coordinates with nonzero intensity values to estimate the microparticle centers. We present the estimation performances for the numerical examples in Figs. 7 and 8 (see also Section V-C for more discussion).

C. Results and Discussion

In Example 1 (see Fig. 7) our method does not perform as well as the BD- and PBD-based methods at very low SNR. This
Fig. 8. Mean-square-errors and Cramér-Rao bound of the estimated microparticle center parameter x as a function of SNR. The data are generated using forward model (13) (see Section V). Our algorithm achieves the Cramér-Rao bound; the other two do not. Also, our proposed method significantly outperforms the other two methods in high signal-to-noise ratio regions.
example confirms that the Gaussian PSF model is reasonable for solving the deconvolution problem for high SNR imaging of simple object shapes. In Example 2 (see Fig. 8), at low SNR, PBD outperforms our proposed method, but the estimation using both PBD and BD fails to achieve the CRB. In the same example, our method performs best among the three starting from 5 dB and also achieves the CRB. In summary, in both examples we observe that (i) our proposed method always performs better in the high SNR regions and (ii) BD performs worst among the three. On the other hand, the computational speed of BD is the highest among the three algorithms. It is worth mentioning that q-dot-based imaging produces high SNR images in practical experiments. The main reason for the better performance of our algorithm is that it incorporates prior information about the object shape (spherical), whereas the other methods
Fig. 9. (a) Bar-code q-dot light-intensity image on the focal plane of reference at 0 µm of a section of seven microparticle images from a spot of the 3-D microarray device, (b) the image after a thresholding operation, (c) the binary image, where red signifies the foreground shape, and (d) the segmented version, in which different colors signify separated single-sphere-objects.
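The k-means segmentation step behind Fig. 9(d) can be sketched with a plain Lloyd's iteration over the thresholded foreground voxel coordinates. This is only a sketch: the paper itself uses MATLAB's kmeans, and the toy cubic "spheres", cluster count, and iteration budget below are assumptions for illustration:

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Minimal Lloyd's k-means over an (n, d) coordinate array."""
    # Deterministic, spread-out initialization over the point list.
    centers = points[np.linspace(0, len(points) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((points[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        # Recompute centers; keep the old center if a cluster empties.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Toy volume with two bright blobs; real data are thresholded first.
vol = np.zeros((40, 40, 40))
vol[8:14, 8:14, 8:14] = 1.0
vol[28:34, 28:34, 28:34] = 1.0

coords = np.argwhere(vol > 0.5)            # foreground voxel coordinates
labels, centers = kmeans(coords.astype(float), k=2)
```

Each label then indexes one single-sphere-object segment, whose voxels can be handed to the per-particle estimator.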
(BD and PBD) do not carry that assumption. Note that we assume the PSF is known while evaluating the PBD algorithm, since an unknown PSF-based PBD analysis requires an extensive computational load [7]. We conclude that our proposed estimation method performs significantly better than the two existing methods for the high SNR data. VI. ESTIMATION RESULTS USING REAL DATA In this section, we present a quantitative estimation analysis of a real-data set using our method and compare the performance with the BD algorithm. We do not use the PBD algorithm for this performance comparison, although it performs better than the BD in the numerical examples. We make this selection because the PSF is unknown for the real-data set; hence the PBD requires an optimal model selection analysis for choosing and estimating the unknown parameters of in (27). This requirement makes PBD a time-intensive algorithm for processing an image with a large number of microparticles, as shown in Fig. 4. The BD algorithm is faster and does not perform much worse than the PBD algorithm (see Figs. 7 and 8).
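For reference, the known-PSF Richardson-Lucy update underlying the BD and PBD iterations (reviewed in Appendix I) can be sketched as follows; the 1-D two-spike test signal, PSF width, and iteration count are illustrative assumptions, not values from the paper:

```python
import numpy as np

def richardson_lucy(measured, psf, n_iter=50):
    """Known-PSF Richardson-Lucy iterations (circular convolution via FFT)."""
    psf = psf / psf.sum()
    H = np.fft.fftn(np.fft.ifftshift(psf))
    est = np.full_like(measured, measured.mean())   # positive initial guess
    for _ in range(n_iter):
        blur = np.real(np.fft.ifftn(np.fft.fftn(est) * H))
        ratio = measured / np.maximum(blur, 1e-12)
        # Correlate the ratio with the PSF (convolution with the flipped kernel).
        est = est * np.real(np.fft.ifftn(np.fft.fftn(ratio) * np.conj(H)))
    return est

# Toy 1-D demo: blur a two-spike signal with a Gaussian, then restore it.
n = 64
x = np.arange(n) - n // 2
psf = np.exp(-x**2 / (2 * 2.0**2))
truth = np.zeros(n)
truth[20], truth[44] = 1.0, 0.5
measured = np.real(np.fft.ifftn(np.fft.fftn(truth)
                                * np.fft.fftn(np.fft.ifftshift(psf / psf.sum()))))
restored = richardson_lucy(measured, psf, n_iter=200)
```

With a normalized PSF and periodic boundaries, each iteration conserves total flux and keeps the estimate non-negative, which is why no explicit positivity constraint is needed.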
Experiment Details: We apply our proposed algorithm and the BD-based estimation algorithm to a section of the 3-D sphere images acquired on a 1036 × 1360 × 51 sampling lattice with a sampling interval of 0.16 µm along the , and directions. We extract a section of seven microparticle images on a 241 × 340 × 51 sampling lattice from the original data to test the estimation performance. The radius of the spheres is known as µm by experimental convention [4], [5]. The particles are observed using a standard widefield microscope with , and mm. The images are captured by attaching a cooled CCD camera to the microscope.

Image Segmentation: We segment single-sphere-shaped images of the seven microparticles using the k-means clustering algorithm.9 The k-means algorithm is evaluated using the kmeans command of MATLAB [8]. We note that sometimes the microparticle images merge into groups of two or three (see Fig. 4) and hence are optically indistinguishable. This requires performing our proposed estimation analysis using a two-sphere-object or a three-sphere-object model for greater accuracy at the cost of more computation time. Evaluation of kmeans requires prior knowledge of the number of clusters. For the single-sphere-object-based models, the total number of clusters is equal to the number of microparticles in the image that we analyze. However, for the multiple-sphere-object-based models, the number of clusters in the image is hard to determine and requires an additional model selection analysis, resulting in an increased computational load. Hence, we refrain from processing the image using multiple-sphere-object-based models.

We employ a clustering algorithm here for the purpose of presenting an automated imaging analysis framework for an image with many microparticles. In the first step of such an automated analysis, the microparticle image sections are clustered. Then each microparticle location is estimated from its corresponding image section using our proposed algorithm. To present this automated analysis, we confine ourselves to evaluating a segment containing only seven microparticle images. However, this framework is general and applicable to an image of any number of microparticles in the microarray. Note that we can also employ other clustering algorithms, such as a mixture-of-Gaussians-based method, for this purpose.

9The k-means algorithm is a clustering algorithm that clusters n objects into k partitions based on their attributes; k < n.

TABLE I
COMPARISON OF ESTIMATION PERFORMANCES USING OUR PROPOSED AND BLIND-DECONVOLUTION-BASED METHODS. WE USE A THREE-DIMENSIONAL IMAGE SEGMENT OF SEVEN MICROPARTICLES FROM THE REAL DATA, AS SHOWN IN FIG. 9. THE ORIGIN IS DEFINED AT THE CENTER OF THIS SEGMENT. EUCLIDEAN DISTANCES BETWEEN (I) THE ESTIMATED CENTERS ARE COMPUTED AS √((x̂ − x̂′)² + (ŷ − ŷ′)² + (ẑ − ẑ′)²) (µm), AND (II) DIFFERENCES BETWEEN THE ESTIMATED INTENSITIES ARE COMPUTED AS |Â − Â′|

Microparticle Localization and Quantification: We localize and quantify each segmented single-sphere-object using our proposed algorithm and the BD algorithm from the block of seven microparticle images. The BD algorithm does not estimate the unknown parameters and quantitatively. Hence, for localization using this algorithm, we adopt a method similar to the one described in Section V-B. For quantification, we first average the intensities above a certain threshold level of the estimated objects at the voxels with nonzero intensity values.
Finally, we multiply those average values by the corresponding maximum PSF intensity values to estimate the parameter inside the microparticles at a specific wavelength.

Results and Discussion: In Fig. 9, we present our analysis for the real-data set. Here we define the origin at the center of the seven-microparticle block. In Fig. 9(a), we show the bar-code-q-dot-light-intensity image on the focal plane of reference of the microparticle image section at 0 µm. We set the voxel intensities to zero below a certain threshold in this 3-D image, and the resultant image is shown in Fig. 9(b). The binary version of this image is shown in Fig. 9(c), where we indicate the nonzero intensity regions in red. Finally, in Fig. 9(d) we present the segmented version, where different colors specify separated single-sphere-objects.

The results of our proposed algorithm and the BD-based algorithm are presented in Table I. In this table, we include the Euclidean distances between the center location estimates and between the intensity level estimates obtained using the two methods. From the table we observe that both methods yield similar results for Spheres 1 and 2. This is expected, since the q-dot light is captured at high SNR for the real data. Recall that in the numerical example of Section V-A.1 we showed that our proposed and BD methods perform similarly for measurements captured at high SNR. However, the estimation results differ slightly between the two methods for Spheres 3, 5, 6, and 7, even though these spheres' images were also captured at high SNR. The reason is that these spheres merge into groups of two, so their intensities contribute to each other during the convolution operation. Consequently, fitting single-sphere-object models separately to each segmented single-sphere image of a merged group of two sphere images does not perform well computationally. Instead, the estimation analysis of these spheres can be improved using a two-sphere-object-based analysis. Apart from this, Sphere 4 does not produce a consistent result using either method. In this case, it is not possible to know whether the image is produced by the q-dot light of any microparticle. It might be that q-dots from a damaged microparticle are illuminated there, since the estimated signal is much weaker.
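The threshold-and-centroid procedure used above to localize and quantify the BD outputs can be sketched as follows; the threshold fraction and the toy deconvolved volume are assumptions for illustration:

```python
import numpy as np

def localize_and_quantify(est_obj, frac=0.3):
    """Zero out voxels below a threshold, average the surviving voxel
    coordinates for the center estimate, and average the surviving
    intensities for the (relative) intensity-level estimate."""
    thresh = frac * est_obj.max()        # threshold level (assumed fraction)
    mask = est_obj >= thresh
    coords = np.argwhere(mask)
    center = coords.mean(axis=0)         # centroid of nonzero voxels
    mean_intensity = est_obj[mask].mean()
    return center, mean_intensity

# Toy deconvolved object: one bright cube on a dim background.
vol = np.full((30, 30, 30), 0.01)
vol[10:16, 12:18, 5:11] = 2.0
center, level = localize_and_quantify(vol)
```

The mean intensity would then be scaled by the maximum PSF intensity, as described above, to recover the intensity parameter.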
VII. COMPARISON WITH [9]

We employ optically encoded microparticles randomly placed in a microarray for massively parallel and high-throughput analysis of biological molecules [9]. The microparticles contain multicolor q-dots and thus carry built-in codes for rapid target identification. The microparticles' surfaces are conjugated with bio-molecular probes for target detection, while identification codes are embedded in the microparticles' interiors [9]. In addition, in our work, we also employ secondary nanoparticles embedded with q-dots for on-off signaling and for enhancing detection sensitivity. The q-dot light colors of the nanoparticles are chosen to be unique irrespective of the targets, and their illumination wavelength is different from the wavelengths of the microparticles' q-dots. Though our real-data experiment is confined to microparticles embedded with single-color q-dots, we can extend our work to target-captured microparticles embedded with multicolor q-dots. Note that q-dot light from the nanoparticles of a target-captured microparticle ideally forms a shell-shaped object, and similarly q-dot light from a microparticle ideally forms a sphere-shaped object.

In a microparticle, the q-dots are denser near the periphery than in the interior [9]. However, no experimental evidence exists in the literature describing such a distribution. Hence, in our work we assume a homogeneous intensity profile for the microparticles. This assumption can be extended to a more realistic model for a microparticle, with the intensity level increasing from the center toward the periphery. Each q-dot light intensity follows a 3-D Gaussian profile in space [9]. However, there exist no data to model the spatial spread of such a profile for a single q-dot, since q-dots are imaged in groups [9]. In our numerical examples (Section V), we assumed that such spread is approximately in the nm range. This assumption is realistic, since the q-dots that we employ for experimental data collection each have a diameter of 6 nm.
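A sketch of the random q-dot intensity model discussed here (per-voxel Bernoulli placement with a radially increasing probability, each q-dot contributing a 3-D Gaussian profile); the lattice size, placement probabilities, and Gaussian spread are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical lattice and particle geometry (illustration only).
N, R, sigma = 48, 16.0, 1.5    # volume side, particle radius, q-dot spread (voxels)

z, y, x = np.indices((N, N, N)) - N // 2
r = np.sqrt(x**2 + y**2 + z**2)
inside = r <= R

# Placement probability grows toward the periphery (denser near the rim),
# one illustrative choice for the trend described in [9].
p = np.where(inside, 0.02 * (r / R), 0.0)
qdot_centers = rng.random((N, N, N)) < p       # Bernoulli draw per voxel

# Each q-dot contributes a 3-D Gaussian intensity profile; summing the
# profiles of all q-dots equals blurring the center map with one kernel.
kernel = np.exp(-(x**2 + y**2 + z**2) / (2 * sigma**2))
intensity = np.real(np.fft.ifftn(
    np.fft.fftn(qdot_centers.astype(float)) * np.fft.fftn(np.fft.ifftshift(kernel))))
```

Setting the placement probability constant instead recovers the homogeneous-profile assumption used in the paper's analysis.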
VIII. CONCLUSION

We addressed the problem of locating q-dot-encoded microparticles from 3-D images of an ultra-high density 3-D microarray. Our future aim is to quantify the multiple targets that this device will capture, using the microparticle location estimates as prior information. Such an analysis is a two-stage estimation process. Our work is motivated by the fact that in fluorescence microscopy imaging, when the object shape is simple, it can easily be modeled using simple parametric forms with the unknown information of interest as parameters. A problem-specific deconvolution algorithm can then be developed using popular ML- or least-squares-based techniques. This avoids the need for commercial deconvolution software, which is often costly and time intensive to run. Simple and symmetric object models motivate our use of a Gaussian PSF model for the widefield fluorescence microscope. In experiments, q-dot-based imaging produces high SNR images, which motivates modeling the measurement noise as Gaussian. Using these practical assumptions, we developed a parametric image deconvolution algorithm using the concentrated likelihood formulation [24], as presented in (18). We achieve robust estimation performance in the practical high SNR scenarios using our proposed method. The comparison of the parameter estimates with the corresponding CRBs, as presented in the numerical Example 2 (see Section V), supports the use of the Gaussian PSF and Gaussian noise models for simple symmetric object images captured at high SNR using fluorescence microscopes.

In summary, in the numerical examples, our proposed simple parametric ML-based image deconvolution method performs better than the conventional methods for simple object shapes at high SNR. For real data, our method performs similarly to the conventional BD method for the well-separated microparticle images captured at high SNR. This motivates us to calibrate and redesign our proposed microarray device experimentally, which we plan to do in future research. Namely, we will compute the optimum distance between the microparticles using statistical hypothesis testing methods to find the separation that guarantees no cross-talk between the microparticles during imaging after the target-capturing process. This will enable us to place the microparticles in predefined holes separated by the computed optimum distance. Such an implementation will expedite the first-step segmentation during the imaging analysis: it will be sufficient to segment the microparticle images using a simple grid-based technique [28] instead of other clustering algorithms.

APPENDIX I

We discuss the PBD- and BD-based methods. We first review a BD method based on the Richardson-Lucy (RL) algorithm. The RL algorithm was originally developed from Bayes's theorem, which is given by

P(A|B) = P(B|A)P(A) / P(B),   (28)

where P(A|B) is the conditional probability of the event A given the event B, P(A) is the probability of the event A, and P(B|A) is the inverse conditional probability, i.e., the probability of event B given event A. The probability P(A) can be identified as the object distribution o(x); the conditional probability P(B|A) can be identified as the PSF centered at x, i.e., h(y|x); and the probability P(B) can be identified as the degraded image i(y). The inverse relation of the iterative algorithm is given by

o^(k+1)(x) = o^(k)(x) ∫ [ i(y) h(y|x) / ∫ h(y|x′) o^(k)(x′) dx′ ] dy,   (29)

where k is the iteration number. Under an isoplanatic condition, i.e., h(y|x) = h(y − x), we write (29) as

o^(k+1)(x) = o^(k)(x) { [ i / (h ∗ o^(k)) ] ⋆ h }(x),   (30)

where ∗ denotes convolution and ⋆ denotes correlation. When the PSF is known, one finds the object by iterating (30) until convergence. An initial guess is required for the object to start the algorithm. An advantage of this algorithm is that it enforces a non-negativity constraint if the initial guess
is non-negative. Also, energy is conserved in these iterations. The blind form of the RL algorithm is given by

h^(k+1)(x) = h^(k)(x) { [ i / (h^(k) ∗ ô) ] ⋆ ô }(x),   (31)

o^(k+1)(x) = o^(k)(x) { [ i / (h^(k+1) ∗ o^(k)) ] ⋆ h^(k+1) }(x).   (32)

At the kth blind iteration of this algorithm, both (31) and (32) are performed, assuming that the object ô estimated at the previous blind step is known in (31). For each of (31) and (32), a specific number of RL iterations is performed, as specified by . The loop is repeated until convergence. Extension of the above algorithms to 3-D images is trivial. Initial guesses and are required for these algorithms. No positivity constraints are required, because the above algorithms ensure positivity. The MATLAB-based BD algorithm uses an accelerated and damped version of the RL algorithm specified by (31) and (32) [8]. Note that a similar algorithm to that specified by (31) and (32) can be developed using an expectation-maximization (EM) framework, assuming that the measurement is a Poisson-noise-corrupted version of the convolution. Motivated by this, the PBD algorithm was derived in [7] assuming a measurement that follows the forward model (5). There, an EM formulation [25] is employed to maximize the likelihood. The authors also proposed a parametric PSF model, as we reviewed in Section V-A.1. In the PBD algorithm, the object is updated using an iterative form similar to (30) (see (27)). In our numerical example (see Section V-A.1), we assumed that the PSF is known in (27) for simplicity. However, in the original work the authors also proposed a numerical method to update the PSF parameters at each PBD step [7].

ACKNOWLEDGMENT

We are grateful to Professor David Kelso of Northwestern University for providing us with the real-data images for this research.

REFERENCES

[1] P. W. Stevens, C. H. J. Wang, and D. M. Kelso, "Immobilized particle arrays: Coalescence of planar and suspension-array technologies," Anal. Chem., vol. 75, no. 5, pp. 1141–1146, 2003.
[2] P. W. Stevens and D. M. Kelso, "Imaging and analysis of immobilized particle arrays," Anal. Chem., vol. 75, no. 5, pp. 1147–1154, 2003.
[3] G. M. P. van der Kempen, L. J. van Vliet, P. J. Verveer, and H. T. M. van der Voort, "A quantitative comparison of image restoration methods for confocal microscopy," J. Microscopy, vol. 185, no. 3, pp. 354–365, 1997.
[4] A. Mathur and D. M. Kelso, "Segmentation of microspheres in ultra-high density microsphere-based assays," in Proc. SPIE, 2006, vol. 6064, pp. 60640B1–60640B10.
[5] A. Mathur, "Image Analysis of Ultra-High Density, Multiplexed, Microsphere-Based Assays," Ph.D. Thesis, Northwestern Univ., Evanston, IL, 2006.
[6] P. Sarder and A. Nehorai, "Deconvolution methods of 3D fluorescence microscopy images: An overview," IEEE Signal Process. Mag., vol. 23, no. 3, pp. 32–45, 2006.
[7] J. Markham and J. Conchello, "Parametric blind deconvolution: A robust method for the simultaneous estimation of image and blur," J. Opt. Soc. Amer. A, vol. 16, no. 10, pp. 2377–2391, 1999.
[8] MATLAB Software Package, The MathWorks [Online]. Available: http://www.mathworks.com/products/matlab/
[9] M. Han, X. Gao, J. Z. Su, and S. Nie, "Quantum-dot-tagged microbeads for multiplexed optical coding of biomolecules," Nature Biotechnol., vol. 19, pp. 631–635, 2001.
[10] Crystalplex Corp., Pittsburgh, PA [Online]. Available: http://www.crystalplex.com
[11] J. G. McNally, T. Karpova, J. Cooper, and J. A. Conchello, "Three-dimensional imaging by deconvolution microscopy," Methods, vol. 19, no. 3, pp. 373–385, 1999.
[12] P. J. Verveer, "Computational and Optical Methods for Improving Resolution and Signal Quality in Fluorescence Microscopy," Ph.D. Thesis, Delft Technical Univ., Delft, The Netherlands, 1998.
[13] H. T. M. van der Voort and K. C. Strasters, "Restoration of confocal images for quantitative image analysis," J. Microscopy, vol. 178, no. 2, pp. 165–181, 1995.
[14] A. V. Oppenheim, A. S. Willsky, S. Hamid, and S. H. Nawab, Signals and Systems, 2nd ed. Englewood Cliffs, NJ: Prentice Hall, 1996.
[15] M. P. van Kempen, "Image Restoration in Fluorescence Microscopy," Ph.D. Thesis, Delft Technical Univ., Delft, The Netherlands, 1999.
[16] D. Kundur and D. Hatzinakos, "Blind image deconvolution," IEEE Signal Process. Mag., vol. 13, no. 3, pp. 43–64, 1996.
[17] S. F. Gibson and F. Lanni, "Experimental test of an analytical model of aberration in an oil-immersion objective lens used in three-dimensional light microscopy," J. Opt. Soc. Amer. A, vol. 8, no. 10, pp. 1601–1612, 1991.
[18] T. J. Holmes, "Maximum-likelihood image restoration adapted for noncoherent optical imaging," J. Opt. Soc. Amer. A, vol. 5, no. 5, pp. 666–673, 1988.
[19] G. Strang, Introduction to Linear Algebra, 3rd ed. Wellesley, MA: Wellesley-Cambridge Press, 2003.
[20] B. Zhang, J. Zerubia, and J. O. Martin, "Gaussian approximations of fluorescence microscope point spread function models," Applied Opt., vol. 46, no. 10, pp. 1819–1829, 2007.
[21] D. L. Snyder and M. I. Miller, Random Point Processes in Time and Space, 2nd ed. New York: Springer-Verlag, 1991.
[22] T. M. Jovin and D. J. Arndt-Jovin, "Luminescence digital imaging microscopy," Annu. Rev. Biophys. Biophys. Chem., vol. 18, pp. 271–308, 1989.
[23] A. Papoulis and S. U. Pillai, Probability, Random Variables and Stochastic Processes, 4th ed. New York: McGraw-Hill, 2001.
[24] B. Porat, Digital Processing of Random Signals: Theory and Methods. Englewood Cliffs, NJ: Prentice Hall, 1994.
[25] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Englewood Cliffs, NJ: PTR Prentice Hall, 1993.
[26] J. A. Conchello and Q. Yu, "Parametric blind deconvolution of fluorescence microscopy images: Preliminary results," in Three-Dimensional Microscopy: Image Acquisition and Processing III, C. J. Cogswell, G. Kino, and T. Willson, Eds., Proc. SPIE, 1996, vol. 2655, pp. 164–174.
[27] Y. M. Cheung, "k-means: A generalized k-means clustering algorithm with unknown cluster number," in Proc. 3rd Int. Conf. Intell. Data Eng. Automat. Learn. (IDEAL'02), Manchester, U.K., Aug. 12–14, 2002, pp. 307–317.
[28] Y. H. Yang, M. J. Buckley, S. Dudoit, and T. P. Speed, "Comparison of methods for image analysis on cDNA microarray data," J. Comput. Graph. Stat., vol. 11, pp. 108–136, 2002.

Pinaki Sarder (S'03) received the B.Tech. degree in electrical engineering from the Indian Institute of Technology, Kanpur, in 2003. He is currently a Ph.D. student and research assistant in the Department of Electrical and Systems Engineering at Washington University in St. Louis (WUSTL).
His research interest is in the area of statistical signal processing, with a focus on medical imaging and genomics. Mr. Sarder is a recipient of the Imaging Sciences Pathway program fellowship for graduate students at WUSTL for the period January 2007 to December 2007.
Arye Nehorai (S'80-M'83-SM'90-F'94) received the B.Sc. and M.Sc. degrees in electrical engineering from the Technion, Israel, and the Ph.D. degree in electrical engineering from Stanford University, California. From 1985 to 1995, he was a faculty member with the Department of Electrical Engineering at Yale University. In 1995 he joined the Department of Electrical Engineering and Computer Science at The University of Illinois at Chicago (UIC) as a Full Professor. From 2000 to 2001 he was Chair of the department's Electrical and Computer Engineering (ECE) Division, which then became a new department. In 2006 he became Chairman of the Department of Electrical and Systems Engineering at Washington University in St. Louis. He has been the inaugural holder of the Eugene and Martha Lohman Professorship and the Director of the Center for Sensor Signal and Information Processing (CSSIP) at WUSTL since 2006.
Dr. Nehorai was Editor-in-Chief of the IEEE TRANSACTIONS ON SIGNAL PROCESSING during 2000–2002. From 2003 to 2005 he was Vice President (Publications) of the IEEE Signal Processing Society, Chair of the Publications Board, member of the Board of Governors, and member of the Executive Committee of this Society. From 2003 to 2006 he was the founding editor of the special columns on Leadership Reflections in the IEEE Signal Processing Magazine. He was co-recipient of the IEEE SPS 1989 Senior Award for Best Paper with P. Stoica, co-author of the 2003 Young Author Best Paper Award and co-recipient of the 2004 Magazine Paper Award with A. Dogandzic. He was elected Distinguished Lecturer of the IEEE SPS for the term 2004 to 2005 and received the 2006 IEEE SPS Technical Achievement Award. He is the Principal Investigator of the new multidisciplinary university research initiative (MURI) project entitled Adaptive Waveform Diversity for Full Spectral Dominance. He has been a Fellow of the IEEE since 1994 and of the Royal Statistical Society since 1996. In 2001 he was named University Scholar of the University of Illinois.