Review of Digital Image Processing System


International Journal of Research and Reviews in Information Sciences Vol. 1, No. 1, March 2011 Copyright © Science Academy Publisher, United Kingdom Science Academy Publisher

Review of Digital Image Processing System

Mir Mohammad Azad1 and Md. Mahedi Hasan2

1Department of Computer Science and Engineering, Shanto Mariam University of Creative Technology, Dhaka, Bangladesh
2Department of Business Administration, Prime University, Dhaka, Bangladesh


Correspondence should be addressed to Mir Mohammad Azad, [email protected]

Abstract – Different types of remote sensing images are routinely recorded in digital form and then processed by computers to produce images for interpreters to study. The simplest form of digital image processing employs a microprocessor that converts the digital data tape into a film image with minimal corrections and calibrations. At the other extreme, large mainframe computers are employed for sophisticated interactive manipulation of the data to produce images in which specific information has been extracted and highlighted.

Keywords – Digital image, image processing, digitizing images.



Introduction

Digital processing did not originate with remote sensing and is not restricted to these data. Many image-processing techniques were developed in the medical field to process X-ray images and images from sophisticated body-scanning devices. For remote sensing, the initial impetus was the program of unmanned planetary satellites in the 1960s that telemetered, or transmitted, images to ground receiving stations. The low quality of the images required the development of processing techniques to make the images useful. Another impetus was the Landsat program, which began in 1972 and provided repeated worldwide coverage in digital format. A third impetus is the continued development of faster and more powerful computers, peripheral equipment, and software suitable for image processing. This paper describes and illustrates the major categories of image processing. The processes are illustrated with Landsat examples because these are the most familiar and are readily available to readers; the digital processes, however, are equally applicable to all forms of digital image data. In particular, we examine image restoration, image enhancement, and information extraction, which respectively compensate for data errors, noise, and geometric distortions; improve the visual impact that the image has on the interpreter; and utilize the decision-making capability of the computer to recognize and classify pixels on the basis of their digital signatures.


Literature Review and Methodology

2.1. Image Structure
One can think of any image as consisting of tiny, equal areas, or picture elements, arranged in regular rows and columns. The position of any picture element, or pixel, is determined on an xy coordinate system; in the case of Landsat images, the origin is at the upper left corner of the image. Each pixel also has a numerical value, called a digital number (DN), that records the intensity of electromagnetic energy measured for the ground resolution cell represented by that pixel. Digital numbers range from zero to some higher number on a gray scale. The image may be described in strictly numerical terms on a three-coordinate system, with x and y locating each pixel and z giving the DN, which is displayed as a gray-scale intensity value. Images may be originally recorded in this digital format, as in the case of Landsat. An image recorded initially on photographic film may be converted into digital format by a process known as digitization.
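The xyz description above can be made concrete with a toy pixel array; the DN values here are invented purely for illustration.

```python
# A toy 4x4 image: rows are y (origin at the upper left, as in Landsat),
# columns are x, and each entry is a digital number (DN) on a gray scale.
image = [
    [12, 40, 41, 38],
    [13, 45, 200, 44],
    [11, 42, 43, 40],
    [10, 39, 41, 37],
]

def dn(image, x, y):
    """Return the DN (the z value) for the pixel at column x, line y."""
    return image[y][x]

print(dn(image, 2, 1))  # -> 200
```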

2.2. Digitization Procedure
Digitization is the process of converting an image recorded on photographic film (radar, thermal IR, or aerial photographs) into an ordered array of pixels. Maps and other information may also be digitized. The digitized information is recorded on magnetic tape for subsequent computer processing. Digitizing systems belong to several categories: drum, flatbed, flying spot, video camera, and linear array.
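Whatever the mechanism, digitization ultimately quantizes an analog intensity signal into integer DNs. A minimal sketch, assuming a linear detector response between invented black and white reference intensities:

```python
def quantize(intensity, i_min=0.0, i_max=1.0, bits=8):
    """Map an analog intensity reading onto an integer DN.

    Assumes a linear response between i_min (black) and i_max (white);
    an 8-bit scale gives 2**8 = 256 gray levels, DN 0 to 255.
    """
    levels = 2 ** bits
    frac = (intensity - i_min) / (i_max - i_min)
    dn = int(frac * (levels - 1) + 0.5)      # round to nearest level
    return max(0, min(levels - 1, dn))       # clamp to the valid range

# Samples of transmitted light along one scan line (invented values):
line = [0.0, 0.25, 0.5, 1.0]
print([quantize(v) for v in line])  # -> [0, 64, 128, 255]
```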



A typical drum digitizing system (Figure 1.1) consists of a rotating cylinder with a fixed shaft and a movable carriage that holds a light source and a detector similar to a photographic light meter. The positive or negative transparency is placed over the opening in the drum, and the carriage is positioned at a corner of the film. As the drum rotates, the detector measures intensity variations of the transmitted light caused by variations in film density. Each revolution of the drum records one scan line of data across the film. At the end of each revolution the encoder signals the drive mechanism, which advances the carriage by the width of one scan line to begin the next revolution. Width of the scan line is determined by the optical aperture of the detector, which typically is a square opening measuring 50 μm on a side. The analog electrical signal of the detector, which varies with film density changes, is sampled at 50-μm intervals along the scan line, digitized, and recorded as a DN value on magnetic tape. Each DN is recorded as a series of bits, which form an ordered sequence of ones and zeros. Each bit represents an exponent of the base 2. An eight-bit series is commonly used to represent 256 values on the gray scale (2^8 = 256 levels of brightness), with 0 for black and 255 for white. A group of eight bits is called a byte. The film image is thus converted into a regular array of pixels that are referenced by scan line number (y coordinate) and pixel count along each line (x coordinate). A typical 23-by-23-cm (9-by-9-in.) aerial photograph digitized with a 50-μm sampling interval thus converts into 21 million pixels. The intensity of the digitizer light source is calibrated at the beginning of each scan line to compensate for any fluctuations of the light source. Drum digitizers are fast, efficient, and relatively inexpensive. Color images may be digitized into the values of the component primary colors on three passes through the system, using blue, green, and red filters over the detector. A digital record is produced for each primary color, or wavelength band.

In flatbed digitizers, the film is mounted on a flat holder that moves in the x and y directions between a fixed light source and a detector. These devices are usually slower and more expensive than drum digitizers, but they are also more precise. Some systems employ reflected light in order to digitize opaque paper prints. Another digitizing system is the flying-spot scanner, which is essentially a modified television camera; it electronically scans the original image in a raster pattern. The resulting voltage signal for each line is digitized and recorded. Linear-array digitizers are similar to the along-track scanners described in Chapter 1. An optical system projects the image of the original photograph onto the focal plane of the system. A mechanical transport system moves a linear array of detectors across the focal plane to record the original photograph in digital form. The digitized image is recorded on magnetic tape that can be read into a computer for various processing operations. The processed data are then displayed as an image on a viewing device or plotted onto film, as described in the following section.

2.3. Image-Generation Procedure
Hard-copy images are generated by film writers (Figure 1.2), which operate in reverse fashion to digitizers. Recording film is mounted on a drum. With each rotation, a scan line is exposed on the film by a light source, the intensity of which is modulated by the DNs of the pixels. On completion of each scan line, the carriage advances the light source to commence the next line. The exposed film is developed to produce a transparency from which one can make contact prints and enlargements. Some plotters produce only black-and-white film. To produce color images, the red, green, and blue components are plotted as separate films. Other plotters record data directly onto color film; three passes are made, each with a red, green, or blue filter over the light source. Flying-spot plotters produce an image with a beam that sweeps in a raster pattern across the recording film. This system is also employed to display images on a television screen, which enables the interpreter to view the processed image in real time rather than wait for the processed data to be plotted onto film, developed, and printed.

2.4. Landsat Images
Data acquired by Landsat MSS and TM systems are recorded on computer-compatible tapes (CCTs), which can be read and processed by computers. Spectral bands and other characteristics of the Landsat systems are described in the following sections.



Figure 7.3 describes the format of the digital data that make up MSS and TM images. Both systems employ an oscillating mirror that sweeps scan lines that are 185 km in length across the terrain and oriented at right angles to the satellite orbit path. The scanning is a continuous process; the data are subdivided into scenes that cover 185 km along the orbit direction for MSS and 170 km for TM. As described below, the two systems also differ in other characteristics that are important for image processing.

2.5. Landsat MSS Data
The ground resolution cell of MSS is a 79-by-79-m square. An image consists of 2340 scan lines, each of which is 185 km long in the scan direction and 79 m wide in the orbit direction. During each east-bound sweep of the scanner mirror, reflected solar energy is separated into the four spectral bands and focused onto detectors to generate a response that varies in amplitude proportionally with the intensity of the reflected energy. A segment of an MSS scan line thus records the variation in terrain reflectance across the 79-m dimension of the ground resolution cell. The curve showing reflectance recorded as a function of distance along the scan line is called an analog display, because the curve is a graphic analogy of the property being measured. Digital displays record information as a series of numbers. Analog data are converted into digital data by sampling the analog display at regular intervals and recording the analog value at each sample point in digital form. The optimum interval between sample points is one that records significant features of the analog signal with the minimum number of samples. For MSS the optimum sample interval was determined to be 57 m. Each sample is converted into a DN that is transmitted to ground receiving stations. Thus each scan line consists of 3240 pixels, each covering an area measuring 57 by 79 m. For each pixel, four digital numbers are recorded for reflectance, one each in bands 4, 5, 6, and 7.

Spatial resolution of the image is ultimately determined by the 79-by-79-m dimensions of the ground resolution cell; the smaller sample interval does not alter this relationship. Brightness values for bands 4, 5, and 6 are recorded on CCTs using a seven-bit scale (2^7 = 128 values: 0 to 127); band 7 is recorded on a six-bit scale (2^6 = 64 values: 0 to 63). Most processing systems employ eight-bit scales; bands 4, 5, and 6 are multiplied by 2 and band 7 is multiplied by 4 for conversion to eight-bit scales. As mentioned previously, each MSS band consists of 2340 scan lines, each with 3240 pixels, for a total of 7.6 × 10^6 pixels, or 30.4 × 10^6 pixels for the four bands (Figure 7.3A). Holkenbrink (1978) describes details of the MSS digital format.

2.6. Landsat TM Data
The ground resolution cell of TM is a 30-by-30-m square. An image consists of 5965 scan lines, each of which is 185 km long in the scan direction and 30 m wide in the orbit direction. The analog signal is sampled at 30-m intervals to produce 30-by-30-m pixels. Each scan line consists of 6167 pixels, and each band consists of 34.9 × 10^6 pixels. The seven bands have a total of 244.3 × 10^6 pixels, which is more than eight times the data contained in an MSS image. Each of the six TM visible and reflected IR bands employs an array of 16 detectors, and unlike MSS, recording occurs during both sweep directions; together these factors allow adequate dwell time for the TM's relatively small ground resolution cells. Because of the lower energy flux radiated in the thermal IR region, band 6 (10.5 to 12.5 μm) is recorded by only four detectors, each with a 120-by-120-m ground resolution cell. The cells are resampled into 30-by-30-m pixels to be consistent with the other bands. TM data are recorded on an eight-bit scale, which provides a greater dynamic range than the lower bit scales of MSS. Bernstein and others (1984) have analyzed the performance of the TM system; NASA (1983) gives details of the CCT format for TM data.
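The conversion of MSS brightness values to a common eight-bit scale, described in Section 2.5, amounts to a constant multiplication per band; a minimal sketch:

```python
def mss_to_eight_bit(dn, band):
    """Convert an MSS DN to the eight-bit scale (0-255).

    Bands 4, 5, and 6 are recorded on a seven-bit scale (0-127) and are
    multiplied by 2; band 7 is recorded on a six-bit scale (0-63) and is
    multiplied by 4.
    """
    if band in (4, 5, 6):
        return dn * 2
    if band == 7:
        return dn * 4
    raise ValueError("MSS bands are 4, 5, 6, and 7")

print(mss_to_eight_bit(127, 4))  # -> 254 (full scale in a seven-bit band)
print(mss_to_eight_bit(63, 7))   # -> 252 (full scale in band 7)
```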


Structure of Landsat Digital Images

3.1. Availability of Landsat Digital Data
Landsat CCTs of the world are available from EOSAT at a cost of $660 for an MSS scene and $3300 for a TM scene. Many of the Landsat distribution centers listed in Chapter 4 can provide CCTs for their area of coverage.

3.2. Image-Processing Methods
Image-processing methods may be grouped into three functional categories; these are defined below together with lists of typical processing routines.

- Image restoration compensates for data errors, noise, and geometric distortions introduced during the scanning, recording, and playback operations.
  o Restoring periodic line dropouts
  o Restoring periodic line striping
  o Filtering of random noise
  o Correcting for atmospheric scattering
  o Correcting geometric distortions
- Image enhancement alters the visual impact that the image has on the interpreter in a fashion that improves the information content.
  o Contrast enhancement
  o Intensity, hue, and saturation transformations
  o Density slicing
  o Edge enhancement
  o Making digital mosaics
  o Producing synthetic stereo images
- Information extraction utilizes the decision-making capability of the computer to recognize and classify pixels on the basis of their digital signatures.
  o Producing principal-component images
  o Producing ratio images
  o Multispectral classification
  o Producing change-detection images

These routines are described and illustrated in the following sections. The routines are illustrated with Landsat examples, but the techniques are equally applicable to other digital-image data sets. A number of additional routines are described in the various reference publications.


Image Restoration

Restoration processes are designed to recognize and compensate for errors, noise, and geometric distortion introduced into the data during the scanning, transmission, and recording processes. The objective is to make the image resemble the original scene. Image restoration is relatively simple because the pixels from each band are processed separately.

4.1. Restoring Periodic Line Dropouts
On some MSS images, data from one of the six detectors are missing because of a recording problem. On the CCT every sixth scan line is a string of zeros that plots as a black line on the image. These are called periodic line dropouts. The first step in the restoration process is to calculate the average DN value per scan line for the entire scene. The average DN value for each scan line is then compared with this scene average. Any scan line deviating from the average by more than a designated threshold value is identified as defective. The next step is to replace the defective lines. For each pixel in a defective line, an average DN is calculated using the DNs for the corresponding pixel on the preceding and succeeding scan lines.

4.2. Restoring Periodic Line Striping
For each spectral band, the detectors (6 for MSS, 16 for TM) were carefully calibrated and matched before Landsat was launched. With time, however, the response of some detectors may drift to higher or lower levels; as a result, every scan line recorded by that detector is brighter or darker than the other lines. The general term for this defect is periodic line striping; for MSS, where every sixth line has a brightness offset, it is called sixth-line striping. Consider an MSS image in which the digital numbers recorded by detector 6 are twice those of the other detectors, causing every sixth scan line to be twice as bright as the scene average. Valid data are present in the defective lines but must be corrected to match the overall scene. One restoration method is to plot six histograms, one for the DNs recorded by each detector, and compare these with a histogram for the entire scene. For each detector the mean and standard deviation are adjusted to match the values for the entire scene.

4.3. Filtering of Random Noise
The periodic line dropouts and striping are forms of nonrandom noise that may be recognized and restored by simple means. Random noise, on the other hand, requires more sophisticated restoration methods. Random noise typically occurs as individual pixels with DNs that are much higher or lower than those of the surrounding pixels. In the image these pixels produce bright and dark spots that mar the image. These spots also interfere with information-extraction procedures such as classification. Such random-noise pixels may be removed by a moving-average filter in the following steps:

- Design a filter kernel, which is a two-dimensional array of pixels with an odd number of pixels in both the x and y dimensions. The odd-number requirement means that the kernel has a central pixel, which will be modified by the filter operation.
- Calculate the average of the nine pixels in the kernel; for the initial location, the average is 43.
- If the central pixel deviates from the average DN of the kernel by more than a threshold value (in this example, 30), replace the central pixel by the average value.
- Move the kernel to the right by one column of pixels and repeat the operation. The new average is 41. The new central pixel (original value = 40) is within the threshold limit of the new average and remains unchanged. In the third position of the kernel, the central pixel (DN = 90) is replaced by a value of 53.
- When the right margin of the kernel reaches the right margin of the pixel array, the kernel returns to the left margin, drops down one row of pixels, and the operation continues until the entire image has been subjected to the moving-average filter.
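The moving-average filter described above can be sketched as follows. The 3 × 3 kernel and the threshold of 30 match the example in the text; the sample DNs are invented, each kernel average is computed from the original (unfiltered) values, and border pixels are simply left untouched — both simplifying assumptions.

```python
def moving_average_filter(img, threshold=30):
    """Replace any pixel that deviates from its 3x3 neighborhood average
    by more than `threshold` with that average; borders are left as is."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]            # work on a copy
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            kernel = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            avg = sum(kernel) / 9.0
            if abs(img[y][x] - avg) > threshold:
                out[y][x] = int(round(avg))
    return out

noisy = [
    [40, 42, 41, 39],
    [41, 90, 40, 38],   # the DN of 90 is a noise spike
    [39, 41, 42, 40],
    [40, 38, 41, 39],
]
cleaned = moving_average_filter(noisy)
print(cleaned[1][1])  # -> 46 (the spike is replaced by the kernel average)
```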
This filter concept is also used in many other image-processing applications.

4.4. Correcting for Atmospheric Scattering
The atmosphere selectively scatters the shorter wavelengths of light. For Landsat MSS images, band 4 (0.5 to 0.6 μm) has the highest component of scattered light and band 7 (0.8 to 1.1 μm) has the least. Atmospheric scattering produces haze, which results in low image contrast, so the contrast ratio of an image is improved by correcting for this effect. The correction techniques are based on the fact that band 7 is essentially free of atmospheric effects. This can be verified by examining the DNs corresponding to bodies of clear water and to shadows: both the water and the shadows have values of either 0 or 1 on band 7. For each pixel the DN in band 7 is plotted against the DN in band 4, and a straight line is fitted through the plot using a least-squares technique. If there were no haze in band 4, the line would pass through the origin; because there is haze, the intercept is offset along the band 4 axis. Haze has an additive effect on scene brightness. To correct the haze effect on band 4, the value of the intercept offset is subtracted from the DN of each band-4 pixel for the entire image. Bands 5 and 6 are also plotted against band 7, and the procedure is repeated.

4.5. Correcting Geometric Distortions
During the scanning process, a number of systematic and nonsystematic geometric distortions are introduced into MSS and TM image data. These distortions are corrected during production of the master images, which are remarkably orthogonal. The corrections may not be included in the CCTs, however, and geometric corrections must then be applied before plotting images.

4.5.1. Nonsystematic Distortions
Nonsystematic distortions are not constant, because they result from variations in the spacecraft attitude, velocity, and altitude, and therefore are not predictable. These distortions must be evaluated from Landsat tracking data or ground-control information. Variations in spacecraft velocity cause distortion in the along-track direction only and are known functions of velocity that can be obtained from tracking data. The amount of earth rotation during the 28 sec required to scan an image results in distortion in the scan direction that is a function of spacecraft latitude and orbit. In the correction process, successive groups of scan lines (6 for MSS, 16 for TM) are offset toward the west to compensate for earth rotation, resulting in the parallelogram outline of the image. Variations in attitude (roll, pitch, and yaw) and altitude of the spacecraft cause nonsystematic distortions that must be determined for each image in order to be corrected.
The correction process employs geographic features on the image, called ground control points (GCPs), whose positions are known. Intersections of major streams, highways, and airport runways are typical GCPs. Differences between actual GCP locations and their positions in the image are used to determine the geometric transformations required to restore the image. The original pixels are resampled to match the correct geometric coordinates. The various resampling methods are described by Rifman (1973), Goetz and others (1975), and Bernstein and Ferneyhough (1975).
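The text does not specify which transformation model is fitted to the GCPs; a common choice for this kind of registration is an affine model determined by least squares, sketched below with invented GCP coordinates (here the map coordinates are simply the image coordinates shifted by (5, −3)).

```python
def fit_affine(image_pts, map_pts):
    """Least-squares fit of x' = a*x + b*y + c, y' = d*x + e*y + f
    from matched ground control points."""
    def solve3(m, v):
        # Gaussian elimination with partial pivoting on a 3x3 system.
        a = [row[:] + [v[i]] for i, row in enumerate(m)]
        for i in range(3):
            piv = max(range(i, 3), key=lambda r: abs(a[r][i]))
            a[i], a[piv] = a[piv], a[i]
            for r in range(i + 1, 3):
                f = a[r][i] / a[i][i]
                for c in range(i, 4):
                    a[r][c] -= f * a[i][c]
        p = [0.0] * 3
        for i in (2, 1, 0):
            p[i] = (a[i][3] - sum(a[i][c] * p[c]
                                  for c in range(i + 1, 3))) / a[i][i]
        return p

    # Normal equations A^T A p = A^T b for each output coordinate.
    rows = [[x, y, 1.0] for x, y in image_pts]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)]
           for i in range(3)]
    coeffs = []
    for k in (0, 1):
        atb = [sum(rows[n][i] * map_pts[n][k] for n in range(len(rows)))
               for i in range(3)]
        coeffs.append(solve3(ata, atb))
    return coeffs  # [[a, b, c], [d, e, f]]

image_pts = [(0, 0), (100, 0), (0, 100), (100, 100)]
map_pts = [(5, -3), (105, -3), (5, 97), (105, 97)]
(a, b, c), (d, e, f) = fit_affine(image_pts, map_pts)
print(round(c), round(f))  # -> 5 -3 (the recovered shift)
```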


4.5.2. Systematic Distortions
Geometric distortions whose effects are constant and can be predicted in advance are called systematic distortions. Scan skew, cross-track distortion, and variations in scanner mirror velocity belong to this category. Cross-track distortion results from sampling of data along the MSS scan line at regular time intervals that correspond to spatial intervals of 57 m. The length of the ground interval is actually proportional to the tangent of the scan angle and is therefore greater at either margin of the scan line. Tests before Landsat was launched determined that the velocity of the MSS mirror would not be constant from start to finish of each scan line, resulting in minor systematic distortion along each scan line. The known mirror velocity variations may be used to correct for this effect. Similar corrections are employed for TM images. Scan skew is caused by the forward motion of the spacecraft during the time required for each mirror sweep. The ground swath scanned is not normal to the ground track but is slightly skewed, producing distortion across the scan line. The known velocity of the satellite is used to restore this geometric distortion.

4.6. Image Enhancement
Enhancement is the modification of an image to alter its impact on the viewer. Because enhancement generally distorts the original digital values, it is not done until the restoration processes are completed.

4.7. Contrast Enhancement
The sensitivity range of Landsat TM and MSS detectors was designed to record a wide range of terrain brightness, from black basalt plateaus to white sea ice, under a wide range of lighting conditions. Few individual scenes have a brightness range that utilizes the full sensitivity range of the MSS and TM detectors. To produce an image with the optimum contrast ratio, it is important to utilize the entire brightness range of the display medium, which is generally film. The accompanying histogram shows the number of pixels that correspond to each DN. The central 92 percent of the histogram has a range of DNs from 49 to 106, which utilizes only about 22 percent of the available brightness range [(106 − 49)/256 = 22.3 percent]. This limited range of brightness values accounts for the low contrast ratio of the original image. Three of the most useful methods of contrast enhancement are described in the following sections.

4.7.1. Linear Contrast Stretch
The simplest contrast enhancement is called a linear contrast stretch. A DN value in the low end of the original histogram is assigned to extreme black, and a value at the high end is assigned to extreme white. In this example the lower 4 percent of the pixels (DN < 49) are assigned to black (DN = 0), and the upper 4 percent (DN > 106) are assigned to white (DN = 255). The linear contrast stretch greatly improves the contrast of most of the original brightness values, but there is a loss of contrast at the extreme high and low ends of the DN range. In the northeast portion of the original image, the lower limits of snow caps on volcanoes are clearly defined.

4.7.2. Nonlinear Contrast Stretch
Nonlinear contrast enhancement is also possible. One version, the uniform distribution stretch, applies the greatest contrast enhancement to the most populated range of brightness values in the original image. The middle range of brightness values is preferentially stretched, which results in maximum contrast in the alluvial deposits around the flanks of the mountains. The northwest-trending fracture pattern is also more strongly enhanced by this stretch. As both histogram and image show, the uniform distribution stretch strongly saturates brightness values at the sparsely populated light and dark tails of the original

histogram. The resulting loss of contrast in the light and dark ranges is similar to that in the linear contrast stretch but not as severe. A Gaussian stretch is a nonlinear stretch that enhances contrast within the tails of the histogram. This stretch fits the original histogram to a normal distribution curve between the 0 and 255 limits, which improves contrast in the light and dark ranges of the image. The different lava flows are distinguished, and some details within the dry lake are emphasized. This enhancement occurs at the expense of contrast in the middle gray range; the fracture pattern and some of the folds are suppressed in this image. An important step in the process of contrast enhancement is for the user to inspect the original histogram and determine the elements of the scene that are of greatest interest; the user then chooses the optimum stretch for his needs. Experienced operators of image-processing systems bypass the histogram-examination stage and interactively adjust the brightness and contrast of images displayed on a CRT. For some scenes a variety of stretched images are required to display the original data fully. It also bears repeating that contrast enhancement should not be done until other processing is completed, because the stretching distorts the original values of the pixels.

4.8. Intensity, Hue, and Saturation Transformations
Colors are commonly specified in the additive system of primary colors (red, green, and blue, or RGB). An alternate approach to color is the intensity, hue, and saturation (IHS) system, which is useful because it presents colors more nearly as the human observer perceives them. The IHS system is based on the color sphere, in which the vertical axis represents intensity, the radius represents saturation, and the circumference represents hue. The intensity (I) axis represents brightness variations and ranges from black (0) to white (255); no color is associated with this axis. Hue (H) represents the dominant wavelength of color. Hue values commence with 0 at the midpoint of red tones and increase counterclockwise around the circumference of the sphere to conclude with 255 adjacent to 0. Saturation (S) represents the purity of color and ranges from 0 at the center of the color sphere to 255 at the circumference. A saturation of 0 represents a completely impure color, in which all wavelengths are equally represented and which the eye perceives as a shade of gray that ranges from white to black depending on intensity. Intermediate values of saturation represent pastel shades, whereas high values represent purer and more intense colors. The range from 0 to 255 is used here for consistency with the Landsat eight-bit scale; any range of values (0 to 100, for example) could be used as IHS coordinates. Buchanan (1979) describes the IHS system in detail.

In practice the transformation is applied in a sequence of steps. First, transform any three bands of data from the RGB system into the IHS system, in which the three component images represent intensity, hue, and saturation. (The equations for making this transformation are described shortly.) This IHS transformation was applied to TM bands 1, 2, and 3 of the Thermopolis scene. The intensity image is dominated by albedo and topography. Sunlit slopes have high intensity values (bright tones), and


shadowed areas have low values (dark tones). The airport runway and water in the Wind River have low values. Vegetation has intermediate intensity values, as do most of the rocks. In the hue image, the low DNs used to identify red hues in the sphere cause the Chugwater redbeds to have conspicuous dark tones; vegetation has intermediate to light gray values assigned to the green hue. Next, apply a linear contrast stretch to the original saturation image. Note the overall increased brightness of the enhanced image, and also the improved discrimination between terrain types. Finally, transform the intensity, hue, and enhanced saturation images from the IHS system back into three images of the RGB system. These enhanced RGB images were used to prepare the new color composite image of Plate 10B, which is a significant improvement over the original version in Plate 10A. In the IHS version, note the wide range of colors and the improved discrimination between colors. Some bright green fields are distinguishable in vegetated areas that were obscure in the original, and the wider range of color tones helps separate rock units.

The following description of the IHS transformation is a summary of earlier work. Graphically, the relationship between the RGB and IHS systems can be shown with an equilateral triangle whose corners are located at the positions of the red, green, and blue hues. Hue changes in a counterclockwise direction around the triangle, from red (H = 0), to green (H = 1), to blue (H = 2), and again to red (H = 3). Values of saturation are 0 at the center of the triangle and increase to a maximum of 1 at the corners. Any perceived color is described by a unique set of IHS values; in the RGB system, however, different combinations of additive primaries can produce the same color. The IHS values can be derived from RGB values through the transformation equations

for the interval 0 < H < 1, extended to 1 < H < 3. After enhancing the saturation image, the IHS values are converted back into RGB images by inverse equations. The IHS transformation and its inverse are also useful for combining images of different types. For example, a digital radar image could be geometrically registered to the Thermopolis TM data. After the TM bands are transformed into IHS values, the radar image data may be substituted for the intensity image. The new combination (radar, hue, and enhanced saturation) can then be transformed back into an RGB image that incorporates both radar and TM data.

4.9. Density Slicing
Density slicing converts the continuous gray tone of an image into a series of density intervals, or slices, each corresponding to a specified digital range. Digital slices may be displayed as separate colors or as areas bounded by

International Journal of Research and Reviews in Information Sciences contour lines. This technique emphasizes subtle gray-scale differences that may be imperceptible to the viewer. 4.10. Edge Enhancement Most interpreters are concerned with recognizing linear features in images. Geologists map faults, joints, and lineaments. Geographers map manmade linear features such as highways and canals. Some linear features occur as narrow lines against a background of contrasting brightness; others are the linear contact between adjacent areas of different brightness. In all cases, linear features are formed by edges. Some edges are marked by pronounced differences in brightness and are readily recognized. More typically, however, edges are marked by subtle brightness differences that may be difficult to recognize. Contrast enhancement may emphasize brightness differences associated with some linear features. This procedure, however, is not specific for linear features because all elements of the scene are enhanced equally, not just the linear elements. Digital filters have been developed specifically to enhance edges in images and fall into two categories: directional and nondirectional. 4.11. Making Digital Mosaics Mosaics of Landsat images may be prepared by matching and splicing together individual images. Differences in contrast and tone between adjacent images cause the checkerboard pattern that is common on many mosaics. This problem can be largely eliminated by preparing mosaics directly from the digital CCTs, as described by Bernstein and Ferneyhough, 1975. Adjacent images are geometrically registered to each other by recognizing ground control points (GCPs) in the regions of overlap. Pixels are then geometrically adjusted to match the desired map projection. The next step is to eliminate from the digital file the duplicate pixels within the areas of overlap. 
Optimum contrast stretching is then applied to all the pixels, producing a uniform appearance throughout the mosaic.
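The nondirectional filters mentioned in section 4.10 can be illustrated with a 3-by-3 Laplacian, which responds to brightness changes in any orientation. This is a generic sketch, not the specific filters used by any particular system; the function names and the test scene are illustrative only.

```python
import numpy as np

def laplacian_edges(img):
    """Nondirectional edge filter: a 3x3 Laplacian highlights brightness
    changes regardless of their orientation."""
    img = img.astype(float)
    out = np.zeros_like(img)
    # 4-neighbour Laplacian: centre * 4 minus the N, S, E, W neighbours
    out[1:-1, 1:-1] = (4 * img[1:-1, 1:-1]
                       - img[:-2, 1:-1] - img[2:, 1:-1]
                       - img[1:-1, :-2] - img[1:-1, 2:])
    return out

def enhance(img, weight=1.0):
    """Add the filter output back to the original so edges are sharpened
    while the overall scene tones are preserved."""
    return img + weight * laplacian_edges(img)

# A synthetic scene: uniform background with one vertical brightness step
scene = np.full((5, 6), 50.0)
scene[:, 3:] = 60.0
edges = laplacian_edges(scene)
# The filter responds only along the step between columns 2 and 3;
# uniform areas map to zero.
```

A directional filter would instead use an asymmetric kernel (for example, differencing each pixel against its western neighbour) so that only edges of a chosen azimuth are emphasized.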

4.11.1. Producing Synthetic Stereo Images

Ground-control points may be used to register Landsat pixel arrays to other digitized data sets, such as topographic maps. This registration associates an elevation value with each Landsat pixel. With this information the computer can displace each pixel in a scan line relative to the central pixel of that scan line. Pixels to the west of the central pixel are displaced westward by an amount determined by the elevation of the pixel and by its distance from the central pixel; the same procedure determines the eastward displacement of pixels east of the central pixel. The resulting image simulates the parallax of an aerial photograph. The principal point is then shifted, and a second image is generated with the parallax characteristics of the overlapping image of a stereo pair.
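The displacement rule described above can be sketched for a single scan line. This is a minimal illustration, assuming a hypothetical scale constant `k` relating elevation and distance-from-centre to a pixel offset; real systems would also handle gaps and occlusions more carefully.

```python
import numpy as np

def displace_scan_line(line_dn, elevations, k=0.0005):
    """Shift each pixel east or west in proportion to its elevation and
    its signed distance from the central pixel, simulating the parallax
    of an aerial photograph. k is a hypothetical scale constant."""
    n = len(line_dn)
    center = n // 2
    out = np.zeros(n)
    counts = np.zeros(n)
    for i, dn in enumerate(line_dn):
        # signed distance from the central pixel: negative = west of centre
        shift = int(round(k * elevations[i] * (i - center)))
        j = i + shift
        if 0 <= j < n:
            out[j] += dn
            counts[j] += 1
    # average where several pixels land in the same cell; gaps stay 0
    nonzero = counts > 0
    out[nonzero] /= counts[nonzero]
    return out
```

Generating the second image of the stereo pair amounts to repeating the same displacement with the principal point (here, `center`) shifted.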



5. Findings and Analysis

5.1. Information Extraction

Image restoration and enhancement processes utilize computers to provide corrected and improved images for study by human interpreters; the computer makes no decisions in these procedures. Processes that identify and extract information, by contrast, do utilize the computer's decision-making capability to extract specific pieces of information. A human operator must instruct the computer and must evaluate the significance of the extracted information.

5.1.1. Principal-Component Images

For any pixel in a multispectral image, the DN values are commonly highly correlated from band to band. This correlation can be illustrated schematically by plotting the digital numbers of pixels in TM bands 1 and 2: the elongate distribution pattern of the data points indicates that as brightness in band 1 increases, brightness in band 2 also increases. A three-dimensional plot (not illustrated) of three TM bands, such as 1, 2, and 3, would show the data points in an elongate ellipsoid, indicating correlation of the three bands. This correlation means that if the reflectance of a pixel in one band (TM band 2, for example) is known, one can predict the reflectance in the adjacent bands (TM bands 1 and 3). The correlation also means that there is much redundancy in a multispectral data set; if this redundancy could be reduced, the amount of data required to describe a multispectral image could be compressed.

The principal-components transformation, originally known as the Karhunen-Loeve transformation (Loeve, 1955), is used to compress multispectral data sets by calculating a new coordinate system. For two bands of data, the transformation defines a new axis (y1) oriented in the long dimension of the distribution and a second axis (y2) perpendicular to y1. The mathematical operation is a linear combination of the pixel values in the original coordinate system that yields pixel values in the new coordinate system:

y1 = a11*x1 + a12*x2
y2 = a21*x1 + a22*x2

where (x1, x2) are the pixel coordinates in the original system, (y1, y2) are the coordinates in the new system, and a11, a12, a21, and a22 are constants. Note that the range of pixel values for y1 is greater than the range of either of the original coordinates, x1 or x2, and that the range of values for y2 is relatively small. In summary, the principal-components transformation has several advantages: (1) most of the variance in a multispectral data set is compressed into one or two PC images; (2) noise may be relegated to the less-correlated PC images; and (3) spectral differences between materials may be more apparent in PC images than in individual bands.
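For two bands, the constants a11 through a22 are the eigenvectors of the band covariance matrix. A minimal NumPy sketch of the transformation (the function name and synthetic bands are illustrative, not from the paper):

```python
import numpy as np

def principal_components(band1, band2):
    """Rotate two correlated bands into uncorrelated PC images.
    The rows of the returned transformation matrix supply the constants
    a11, a12, a21, a22 of the linear combination."""
    x = np.stack([band1.ravel(), band2.ravel()]).astype(float)
    x -= x.mean(axis=1, keepdims=True)          # centre each band
    cov = np.cov(x)                             # 2x2 band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]           # PC1 = largest variance
    a = eigvecs[:, order].T                     # transformation matrix
    y = a @ x                                   # y1, y2 pixel values
    return y[0].reshape(band1.shape), y[1].reshape(band1.shape)

# Two strongly correlated synthetic bands
rng = np.random.default_rng(0)
b1 = rng.normal(100, 10, (64, 64))
b2 = 0.9 * b1 + rng.normal(0, 1, (64, 64))     # band 2 tracks band 1
pc1, pc2 = principal_components(b1, b2)
# PC1 carries nearly all the variance; PC2 is mostly noise
```

The variance of PC1 exceeds that of either input band, and PC1 and PC2 are uncorrelated, matching the properties listed above.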


5.1.2. Ratio Images

Ratio images are prepared by dividing the DN in one band by the corresponding DN in another band for each pixel, stretching the resulting values, and plotting the new values as an image. A total of 15 ratio images, plus an equal number of inverse ratios (reciprocals of the first 15 ratios), may be prepared from the six original bands. In a ratio image the black and white extremes of the gray scale represent pixels with the greatest difference in reflectivity between the two spectral bands. The darkest signatures are areas where the denominator of the ratio is greater than the numerator. Conversely, the numerator is greater than the denominator for the brightest signatures. Where denominator and numerator are the same, there is no difference between the two bands.
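A ratio image can be sketched as follows. The logarithmic scaling is an implementation choice assumed here (it makes a ratio and its reciprocal symmetric about mid-gray); it is not prescribed by the text.

```python
import numpy as np

def ratio_image(band_a, band_b):
    """Band ratio stretched to the 0-255 display range. Equal bands map
    to mid-gray; numerator > denominator maps bright, the reverse dark."""
    ratio = band_a.astype(float) / np.maximum(band_b.astype(float), 1.0)
    # log scaling makes a ratio r and its reciprocal 1/r symmetric about 0
    log_r = np.log(np.maximum(ratio, 1e-6))
    lo, hi = log_r.min(), log_r.max()
    return np.uint8(255 * (log_r - lo) / (hi - lo))

# Ratios 0.5, 1.0, and 2.0 render as black, mid-gray, and white
out = ratio_image(np.array([[10, 20, 40]]), np.array([[20, 20, 20]]))
```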

5.1.3. Multispectral Classification

For each pixel in a Landsat MSS or TM image, the spectral brightness is recorded for four or seven different wavelength bands, respectively. A pixel may be characterized by its spectral signature, which is determined by the relative reflectance in the different wavelength bands. Multispectral classification is an information-extraction process that analyzes these spectral signatures and then assigns pixels to categories based on similar signatures. There are two major approaches to multispectral classification:

• Supervised classification. The analyst defines on the image a small area, called a training site, which is representative of each terrain category, or class. Spectral values for each pixel in a training site are used to define the decision space for that class. After the clusters for each training site are defined, the computer then classifies all the remaining pixels in the scene.

• Unsupervised classification. The computer separates the pixels into classes with no direction from the analyst.
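The supervised approach can be sketched with a minimum-distance-to-mean classifier, one common way of defining the decision space from training-site statistics; this is an illustrative choice, not the specific rule used by any system in the paper, and the function names and toy image are hypothetical.

```python
import numpy as np

def train_means(image, sites):
    """Mean spectral signature of each class from its training-site pixels.
    image: (rows, cols, bands); sites: {class_name: [(row, col), ...]}."""
    return {cls: image[tuple(zip(*pixels))].mean(axis=0)
            for cls, pixels in sites.items()}

def classify(image, means):
    """Minimum-distance-to-mean supervised classification: every pixel is
    assigned to the class whose mean signature is spectrally closest."""
    names = list(means)
    centres = np.array([means[c] for c in names])            # (k, bands)
    d = np.linalg.norm(image[..., None, :] - centres, axis=-1)
    return np.array(names, dtype=object)[d.argmin(axis=-1)]

# Toy 2x2 scene with two bands and one training pixel per class
img = np.array([[[0.0, 0.0], [10.0, 10.0]],
                [[1.0, 1.0], [9.0, 9.0]]])
sites = {"dark": [(0, 0)], "bright": [(0, 1)]}
labels = classify(img, train_means(img, sites))
```

An unsupervised counterpart would replace `train_means` with a clustering step (for example, k-means) that forms the class centres without analyst input.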

5.1.4. Change-Detection Images

Change-detection images provide information about seasonal or other changes. The information is extracted by comparing two or more images of an area that were acquired at different times. The first step is to register the images using corresponding ground-control points. Following registration, the digital numbers of one image are subtracted from those of an image acquired earlier or later. The resulting values for each pixel will be positive, negative, or zero; the last indicates no change. The next step is to plot these values as an image in which a neutral gray tone represents zero. Black and white tones represent the maximum negative and positive differences, respectively. Contrast stretching is employed to emphasize the differences.
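The subtraction-and-stretch steps above can be sketched directly; the linear stretch around the zero-change midpoint is an assumed, minimal implementation.

```python
import numpy as np

def change_image(later, earlier):
    """Difference image scaled so zero change plots as neutral gray (128),
    the maximum negative difference near black, and the maximum positive
    difference near white."""
    diff = later.astype(int) - earlier.astype(int)
    peak = np.abs(diff).max()
    if peak == 0:
        return np.full(diff.shape, 128, dtype=np.uint8)
    # simple linear contrast stretch around the zero-change midpoint
    return np.uint8(128 + 127 * diff / peak)

earlier = np.array([[10, 10], [10, 10]], dtype=np.uint8)
later = np.array([[10, 20], [0, 10]], dtype=np.uint8)
out = change_image(later, earlier)
# unchanged pixels -> 128; brightened -> 255; darkened -> near 0
```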

5.1.5. Processing Session

The image analyst begins an interactive session by typing appropriate commands on the keyboard of the control terminal to actuate the system. The black-and-white screen of the control terminal displays the status of processing throughout the session. An operator in the computer facility loads the appropriate Landsat CCT onto the tape drive, which reads and transfers the data to the 300-Mbyte disk. The disk can store up to 10 Landsat MSS scenes or a single TM scene. During this data-transfer phase, many systems automatically reformat the data to correct for systematic geometric distortion and to resample MSS pixels into a square (79-by-79-m) format.

The analyst is now ready to view the image on the color display unit, which is typically a television screen that displays 512 horizontal scan lines, each with 512 pixels. A resampled MSS image band is 2340 lines long by 3240 pixels wide, and a TM image is 5667 lines long by 6167 pixels wide (Figure 1.3). To display the entire area covered by these images on the 512-by-512 color monitor, the data are subsampled: only every fifth line and fifth pixel of MSS data are shown, and for TM only every thirteenth line and pixel are shown. Despite the reduction of data, sufficient detail is visible on the screen to select a subscene for processing.

The analyst has the option of displaying any combination of three bands on any combination of screen colors. Generally the green and red bands and a reflected IR band are displayed on the blue, green, and red phosphors of the screen to produce an IR color image. Images are traditionally oriented with north at the top of the screen. The analyst selects a subscene by using the cursor, a bright cross that is moved across the screen with the cursor control, a track ball that the analyst manipulates to shift the cursor on the screen. For each position of the cursor, its location in the image is displayed on the screen as numbers showing the corresponding scan line and pixel.
The analyst positions the cursor at the northwest corner of the desired 512-by-512 subscene and types a command that identifies this locality. The analyst then positions the cursor at the southeast corner of the subscene and enters the command. The system then selects this subscene from the storage disk, transfers it to the local disk in the image processing unit, and displays the 512-by-512 array of pixels on the color monitor. With this display the analyst is ready to begin a session of interactive image processing. The analyst specifies the desired digital processing, views the results on the monitor in near real time, and makes further adjustments. The analyst continues to interact with the data via the cursor and keyboard until obtaining the desired results.
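The subsampled preview and cursor-defined subscene can be sketched as array operations. This is a schematic illustration with hypothetical function names; the subsampling step is derived from the image and screen dimensions rather than hard-coded.

```python
import numpy as np

def display_subsample(band, screen=512):
    """Subsample a full image band to fit a screen x screen monitor by
    keeping every k-th line and pixel (k = 13 for a full TM band on a
    512-line display)."""
    step = int(np.ceil(max(band.shape) / screen))
    return band[::step, ::step]

def extract_subscene(band, nw, size=512):
    """Cut the 512-by-512 subscene whose northwest corner the analyst
    marked with the cursor; the southeast corner is implied by the size."""
    r, c = nw
    return band[r:r + size, c:c + size]

tm_band = np.zeros((5667, 6167), dtype=np.uint8)   # full TM band
preview = display_subsample(tm_band)               # every 13th line/pixel
sub = extract_subscene(tm_band, nw=(1000, 2000))   # 512 x 512 subscene
```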

5.1.6. Image-Processing Systems

The author has conducted an informal survey of most of the commercial image-processing systems, which are available at a wide range of prices and capabilities. The survey included the price of software and hardware. The price does not include the host computer, tape drive, and storage disks, because many facilities already have this equipment, which can be shared for image processing.

The systems belong to two major categories: (1) those costing less than $50,000, listed in Table 1.1; and (2) those costing more than $50,000, listed in Table 1.2. The less expensive systems are hosted on personal computers or minicomputers. Because of their small memories, most of these systems can process only small subscenes of Landsat images, but they are useful for teaching purposes. The more expensive systems are hosted on larger computers and can process entire Landsat scenes in the batch-processing mode. These systems can be connected to even more powerful computers for faster processing.

6. Conclusions

Digital image processing has been demonstrated on Landsat images, which are available in digital form. It is emphasized, however, that any image can be converted into a digital format and processed in similar fashion. The three major functional categories of image processing are:

• Image restoration, to compensate for data errors, noise, and geometric distortions introduced during the scanning, recording, and playback operations.

• Image enhancement, to alter the visual impact that the image has on the interpreter, in a fashion that improves the information content.

• Information extraction, to utilize the decision-making capability of the computer to recognize and classify pixels on the basis of their digital signatures.

However, it should also be noted that technical advances in software and hardware are steadily increasing the volume and complexity of the processing that can be performed, often at a reduced unit cost.

References

[1] Batson, R. M., K. Edwards, and E. M. Eliason, 1976, Synthetic stereo and Landsat pictures: Photogrammetric Engineering, v. 42, p. 1279-1284.
[2] Bernstein, R., and D. G. Ferneyhough, 1975, Digital image processing: Photogrammetric Engineering, v. 41, p. 1465-1476.
[3] Bernstein, R., J. B. Lottspiech, H. J. Myers, H. G. Kolsky, and R. D. Lees, 1984, Analysis and processing of Landsat 4 sensor data using advanced image processing techniques and technologies: IEEE Transactions on Geoscience and Remote Sensing, v. GE-22, p. 192-221.
[4] Bryant, M., 1974, Digital image processing: Optronics International Publication 146, p. 44-54, Chelmsford, Mass.
[5] Buchanan, M. D., 1979, Effective utilization of color in multidimensional data presentation: Proceedings of the Society of Photo-Optical Engineers, v. 199, p. 9-19.
[6] Chavez, P. S., 1975, Atmospheric, solar, and MTF corrections for ERTS digital imagery: American Society of Photogrammetry, Proceedings of Annual Meeting, Phoenix, Ariz., p. 459-479.
[7] Goetz, A. F. H., and others, 1975, Application of ERTS images and image processing to regional geologic problems and geologic mapping in northern Arizona: Jet Propulsion Laboratory Technical Report 32-1597, Pasadena, Calif.
[8] Haralick, R. M., 1984, Digital step edges from zero crossing of second directional filters: IEEE Transactions on Pattern Analysis and Machine Intelligence, v. PAMI-6, p. 58-68.
[9] Holkenbrink, 1978, Manual on characteristics of Landsat computer-compatible tapes produced by the EROS Data Center digital image processing system: U.S. Geological Survey, EROS Data Center, Sioux Falls, S.D., p. 63-93.
[10] Loeve, M., 1955, Probability theory: D. Van Nostrand Company, Princeton, N.J., p. 66s, 116s.
[11] Moik, H., 1980, Digital processing of remotely sensed images: NASA SP no. 431, Washington, D.C.
[12] NASA, 1983, Thematic mapper computer compatible tape: NASA Goddard Space Flight Center Document LSD-ICD-105, Greenbelt, Md.
[13] Rifman, S. S., 1973, Digital rectification of ERTS multispectral imagery: Symposium on Significant Results Obtained from ERTS-1, NASA SP-327, p. 1131-1142.
[14] Rifman, S. S., and others, 1975, Experimental study of application of digital image processing techniques to Landsat data: TRW Systems Group Report 26232-6004-TU-00 for NASA Goddard Space Flight Center Contract NAS 5-20085, Greenbelt, Md.
[15] Short, N. M., and L. M. Stuart, 1982, The Heat Capacity Mapping Mission (HCMM) anthology: NASA SP-465, U.S. Government Printing Office, Washington, D.C.
[16] Simpson, C. J., J. F. Huntington, J. Teishman, and A. A. Green, 1980, A study of the Pine Creek geosyncline using integrated Landsat and aeromagnetic data, in Ferguson, J., and A. B. Goleby, eds., International Uranium Symposium on the Pine Creek Geosyncline: International Atomic Energy Commission Proceedings, Vienna, Austria.
[17] Soha, J. M., A. R. Gillespie, M. J. Abrams, and D. P. Madura, 1976, Computer techniques for geological applications: Caltech/JPL Conference on Image Processing Technology, Data Sources and Software for Commercial and Scientific Applications, Jet Propulsion Laboratory SP 43-30, p. 4.1-4.21, Pasadena, Calif.
[18] Swain, P. H., and S. M. Davis, 1978, Remote sensing--the quantitative approach: McGraw-Hill Book Co., N.Y., p. 696-697.
[19] Andrews, H. C., and B. R. Hunt, 1977, Digital image restoration: Prentice-Hall, Englewood Cliffs, N.J., p. 238.
[20] Castleman, K. R., 1977, Digital image processing: Prentice-Hall, Englewood Cliffs, N.J., p. 75-81.
[21] Gonzalez, R. C., and P. Wintz, 1977, Digital image processing: Addison-Wesley Publishing Co., Reading, Mass., p. 431.
[22] Pratt, W. K., 1977, Digital image processing: John Wiley & Sons, N.Y., p. 124.
[23] Rosenfeld, A., and A. C. Kak, 1982, Digital picture processing, second edition: Academic Press, Orlando, Fla., p. xii+403.

Mir Mohammad Azad received his PhD in Computer Science from Golden State University. He received the Master of Computer Application (MCA) degree from Bharath Institute of Higher Education and Research Deemed University (presently Bharath University) and the Bachelor of Computer Application (BCA) degree from Bangalore University, India. He is presently working as an Assistant Professor and Head of the Department of Computer Science and Engineering, Shanto Mariam University of Creative Technology, Dhaka, Bangladesh. His areas of interest include e-commerce, digital imaging, networking, and wireless communication.

Md. Mahedi Hasan received the Master of Business Administration (MBA) degree, major in MIS, from Sikkim Manipal University of Health, Medical and Technological Sciences, and the Bachelor of Computer Application (BCA) degree from Bangalore University, India. He is presently working as a Lecturer at Prime University, Dhaka, Bangladesh. His areas of interest include e-commerce, digital imaging, and MIS.