Lecture 19 Image Segmentation
Segmentation • Segmentation is the process of reducing an image to regions that correspond to structural units, or to some specific property. • For medical imaging purposes, segmentation is used to extract specific organs, vascular structures, tissue types (e.g. in brain: grey, white, ventricles/CSF), or lesions.
Utility of Segmentation • In medical imaging, segmentation is important for feature extraction, image measurements, and image display. • Segmentation may divide images into either anatomic, pathological, systemic or functional regions. • No single segmentation technique can produce satisfactory results for all medical imaging applications. • Segmentation may be manual, semi-automatic or automatic depending on the complexity of the task.
Segmentation • In general, the result of a segmentation operation will be one or more images, each containing one specific feature from the original image, and exactly registered to the original. • Extracted images may have pixel values corresponding to the original image, or may be binary (i.e. 1 if the pixel corresponds to the target feature of the original, 0 otherwise).
Segmentation • Common segmentation methods:
– Thresholding
  • Global thresholding
  • Local (adaptive) thresholding
  • Image preprocessing and thresholding
– Edge detection (discontinuities)
– Region growing
– Parametric (especially with MR imaging)
– Texture thresholding
– Multi-spectral techniques
Segmentation • The simplest form of segmentation uses thresholding. • This is simply a pixel operation in which a range of intensities is specified for each structure to be extracted. • This method is applicable if the CNR is adequate. • Thresholding is often used for MR and CT angiographic studies where there is significant blood/tissue contrast, as well as for bone in X-ray methods.
Segmentation • A benefit of the segmentation process for these angiographic studies is that it is a simple matter to count the pixels representing the vessel lumen in order to determine the cross-sectional area of the vessel for multiple images acquired across the cardiac cycle. • Combined with quantitative physiological information such as blood pressure, this allows for determination of vessel compliance, which decreases with atherosclerosis.
Segmentation • Such segmented images can also be used to form 3D surface renditions of structures, which is a valuable tool in surgical planning (surgical trajectory planning, i.e. the path from the incision to the structure of interest).
Segmentation MATLAB Implementation • In MATLAB it is a simple matter to perform threshold segmentation. Simply generate a mask image based on the selected threshold, e.g. b=a>500. • Use the nnz function to count the selected pixels in the mask image, e.g. c=nnz(b); • Multiply c by the area of a pixel to get the area of the structure.
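A minimal MATLAB sketch of the threshold-and-count workflow described above; the input file name, threshold value, and pixel size are illustrative assumptions rather than values from the lecture:

    % Threshold segmentation and area measurement (illustrative values)
    a = double(imread('mra_slice.png'));   % hypothetical angiographic image
    T = 500;                               % assumed threshold for the vessel lumen
    b = a > T;                             % binary mask of the segmented vessel
    c = nnz(b);                            % number of pixels in the mask
    pixelArea = 0.5 * 0.5;                 % assumed in-plane pixel size (mm^2)
    lumenArea = c * pixelArea;             % cross-sectional area of the vessel (mm^2)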
Segmentation Thresholding: Various types • Thresholding can be based on several image attributes, such as the histogram, or on local properties such as the mean, standard deviation, or gradient. • When only one threshold is selected for an entire image, the technique is said to be “Global”. • If the technique depends on, say, a local average gray value, it is “local”. • Further, if local thresholds are selected independently for each pixel (or group of pixels), it is said to be a “dynamic” or “adaptive” technique.
Segmentation Global Thresholding • Global thresholding assumes that an image has a bimodal histogram. • Therefore, the object can be extracted from the background by a simple operation that compares image values to a threshold value T. • The object and the background pixels have gray levels grouped into two dominant modes. • An obvious way to extract the object is to select a threshold which separates these two values.
Segmentation Image pixel distribution: Bimodal
Segmentation Global Thresholding • The thresholded image g(x,y) is defined as g(x,y) = 1 if f(x,y) ≥ T and g(x,y) = 0 if f(x,y) < T, where f(x,y) is the original image intensity. • The result of such a thresholding is a binary image, where pixels with a value of 1 represent the object and pixels with a value of 0 correspond to the background.
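A short MATLAB sketch of this rule; the image name and threshold value are assumptions, and the commented alternative uses graythresh (Otsu's method, Image Processing Toolbox) to choose T from the bimodal histogram automatically:

    % Global thresholding with a single threshold T
    f = double(imread('cells.png'));       % hypothetical image
    T = 127;                               % manually selected threshold
    g = f >= T;                            % g = 1 for object, 0 for background

    % Automatic alternative (requires the Image Processing Toolbox):
    % T = graythresh(mat2gray(f)) * max(f(:));
    % g = f >= T;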
Segmentation Global Thresholding • Example of global thresholding. A: Original image B: Image histogram C: Thresholded image, T=127 D: Outline with a 3x3 Laplacian filter
Segmentation Global Thresholding • Results of thresholding operations are usually displayed as a contour map and superimposed on the original image. • If needed, an operator can manually modify parts of the image to suit a specific application. • In many cases, appropriate segmentation is obtained when the area or perimeter of the objects is minimally sensitive to small variations of the selected threshold level.
Segmentation Global Thresholding • Example of the sensitivity of a threshold level selection. A: Cross-sectional intensity profile of a light object on a dark background; B: Hypothetical plot of the area (A) or perimeter (P) versus thresholding level.
Segmentation Global Thresholding • If an image contains more than two types of regions, it may still be possible to segment it by applying several individual thresholds, or by using a multi-thresholding technique. • As the number of thresholds increases, however, the histogram modes become more difficult to distinguish and threshold selection is no longer reliable. • This technique is computationally simple and fast, but it fails when there is low contrast between object and background, when the image is noisy, or when the background varies significantly across the image.
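A MATLAB sketch of applying several individual thresholds to produce a labeled image; the two threshold values and the tissue-class interpretation are illustrative assumptions:

    % Multi-threshold segmentation into three classes (illustrative thresholds)
    f  = double(imread('brain_slice.png'));   % hypothetical brain image
    T1 = 60;  T2 = 140;                       % assumed class boundaries

    labels = zeros(size(f));                  % class 0: pixels <= T1
    labels(f > T1 & f <= T2) = 1;             % class 1: intermediate intensities
    labels(f > T2)           = 2;             % class 2: bright intensities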
Segmentation Multiple segmentation operations
• Separation of brain tissue types
Segmentation Local Thresholding • Local thresholds can be determined by: 1. Splitting an image into sub-images and calculating thresholds for each sub-image, or 2. Examining image intensities in the neighborhood of each pixel. • These techniques are computationally more expensive and can be used for images with varying backgrounds. • They also work well for extracting regions that are very small.
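A MATLAB sketch of the second option above (comparing each pixel to its neighborhood mean); the window size and offset are assumptions, and only base-MATLAB functions are used:

    % Local (adaptive) thresholding against the neighborhood mean
    f = double(imread('uneven_background.png'));  % hypothetical image
    w = 15;                                       % assumed neighborhood size (w x w)
    localMean = conv2(f, ones(w)/w^2, 'same');    % mean intensity around each pixel
    offset = 10;                                  % assumed bias to suppress noise
    g = f > (localMean + offset);                 % binary local-threshold result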
Segmentation • Edge detection is used as a segmentation method. • This serves a number of purposes, including definition of boundaries for subsequent segmentation processes. • In general the goal is to define threshold criteria for ∂I/∂x and ∂I/∂y, taking into account the CNR. • In some cases, edge detection can be carried out in the frequency domain.
Segmentation • There are many derivative filter weight kernels that can be applied in the spatial domain, including the Sobel and Prewitt methods seen in earlier lectures. • These are applied across the entire image in convolution fashion. • For 3x3 cases, these can be oriented for vertical, horizontal, or diagonal (45º) detection as shown in these examples.
Segmentation Some 3x3 filter matrices
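For reference, the standard 3x3 Sobel and Prewitt kernels, applied to a hypothetical image by convolution (note that conv2 flips the kernel, which reverses the sign of the response but not its magnitude):

    % Standard 3x3 gradient kernels and convolution-based application
    sobel_x   = [-1 0 1; -2 0 2; -1 0 1];     % responds to vertical edges
    sobel_y   = [-1 -2 -1; 0 0 0; 1 2 1];     % responds to horizontal edges
    prewitt_x = [-1 0 1; -1 0 1; -1 0 1];
    prewitt_y = [-1 -1 -1; 0 0 0; 1 1 1];

    f  = double(imread('ct_slice.png'));      % hypothetical image
    gx = conv2(f, sobel_x, 'same');           % ~ dI/dx (up to sign)
    gy = conv2(f, sobel_y, 'same');           % ~ dI/dy (up to sign)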
Segmentation • Multiple kernels representing different orientations (vertical, horizontal, 45º) can be applied to the same image. • For a given pixel, compute a response to each kernel. • This permits assignment of a responding pixel to a line of specific orientation. • As seen with the filtering kernels, larger gradient (edge) kernels will produce a less noisy result, but at the expense of extracted edge sharpness as seen in this example comparing 3x3 and 5x5 Sobel magnitude images.
Segmentation • The typical approach for detecting edges is identifying discontinuities in gray levels. • An edge is a set of connected pixels that lie on the boundary between two regions. • The standard approaches are: – First derivative (Gradient operator) – Second derivative (Laplacian operator)
• Order of derivative used depends on edge width and noise content.
Segmentation • Thin vs. thick edges:
Segmentation • The slope of a ramp is inversely proportional to the extent of blurring. • A thick edge is more than one pixel thick. • An edge point can be any point within the ramp. The thickness of the edge is the length of the intensity ramp. • Examine first and second derivatives of a thick edge (linear).
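A one-dimensional MATLAB illustration of a ramp edge and its derivatives; the profile values are made up for illustration:

    % Ramp (thick) edge: first and second derivatives
    profile = [2 2 2 3 4 5 6 7 7 7];   % intensity ramp several pixels wide
    d1 = diff(profile);                % first derivative: constant along the ramp
    d2 = diff(profile, 2);             % second derivative: responds only at the
                                       % onset and end of the ramp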
Segmentation • Another approach, suitable for high resolution images, is to compute the sum of the squares of the orthogonal derivatives to give a result independent of orientation (Sobel):

(dI)² = (∂I/∂x)² + (∂I/∂y)²
• The signs of the derivatives would be reversed for an edge that transitions from light to dark.
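A MATLAB sketch of this orientation-independent edge strength; the image name and the final threshold are assumptions:

    % Orientation-independent edge strength from the orthogonal derivatives
    f  = double(imread('ct_slice.png'));              % hypothetical image
    gx = conv2(f, [-1 0 1; -2 0 2; -1 0 1], 'same');  % ~ dI/dx (Sobel)
    gy = conv2(f, [-1 -2 -1; 0 0 0; 1 2 1], 'same');  % ~ dI/dy (Sobel)

    edgeStrength = sqrt(gx.^2 + gy.^2);               % magnitude of the gradient
    edges = edgeStrength > 200;                       % assumed threshold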
Segmentation • Note that the second derivative produces two responses for each edge, which is not necessarily a desirable feature. • However, a boundary connecting the extreme positive and negative values of the second derivative would cross zero (for a linear ramp) at or near the midpoint of the edge. • This could be helpful in fixing an edge location.
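A small MATLAB sketch of using the paired second-derivative responses to estimate the edge location; the 1D profile is illustrative, and the midpoint estimate ignores the one-sample offset introduced by diff:

    % Edge location from the second-derivative extremes (1D sketch)
    profile = [2 2 2 3 4 5 6 7 7 7];
    d2 = diff(profile, 2);             % one positive and one negative response

    [~, iPos] = max(d2);               % location of the positive extreme
    [~, iNeg] = min(d2);               % location of the negative extreme
    edgeCenter = (iPos + iNeg) / 2;    % approximate midpoint of the edge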
Segmentation • Even modest levels of noise can have a large impact on the derivative results used in edge detection. • Smoothing may be necessary prior to the use of derivatives for edge detection (this can include thresholded median filters, for example).
The 1st column shows images and gray-level profiles of a ramp edge with Gaussian noise with mean = 0, 1, 5, 10. The 2nd column shows the first derivatives and their profiles. The 3rd column shows the second derivatives and their profiles.
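A MATLAB sketch of smoothing before differentiation; the averaging filter uses base MATLAB, while the commented median-filter line assumes the Image Processing Toolbox:

    % Smooth first, then differentiate (reduces noise amplification)
    f = double(imread('noisy_slice.png'));     % hypothetical noisy image
    fSmooth = conv2(f, ones(3)/9, 'same');     % 3x3 averaging filter
    % fSmooth = medfilt2(f, [3 3]);            % median-filter alternative (toolbox)

    gx = conv2(fSmooth, [-1 0 1; -2 0 2; -1 0 1], 'same');
    gy = conv2(fSmooth, [-1 -2 -1; 0 0 0; 1 2 1], 'same');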
Segmentation Direction • In addition to determining the magnitude of the intensity change across the border, it is also possible to compute a direction value (Lineberry, 1982) at each pixel as:
θ = arctan( (∂I/∂y) / (∂I/∂x) )
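A MATLAB sketch of the per-pixel direction map; it uses base-MATLAB gradient (central differences) rather than Sobel kernels, and atan2 rather than a literal arctan of the ratio so that ∂I/∂x = 0 does not cause a division by zero:

    % Edge direction at each pixel (sketch)
    f = double(imread('ct_slice.png'));   % hypothetical image
    [gx, gy] = gradient(f);               % dI/dx and dI/dy by central differences
    theta = atan2(gy, gx);                % direction in radians at each pixel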
Direction information in a segmented image