Three-dimensional embedded defect detection and localization in a semi-transparent medium

Gil Abramovich, Christopher Nafis, Yana Williams, Kevin Harding, and Eric Tkaczyk
GE Global Research, Niskayuna, NY 12309

ABSTRACT

The fabrication of new optical materials presents many challenges that call for new metrology tools. To this purpose, the authors designed a system for localizing 10-micron embedded defects in a 10-millimeter-thick semi-transparent medium. The system, comprising a single camera and a motion system, uses a combination of brightfield and darkfield illumination. This paper describes the optical design and algorithm tradeoffs used to reach the desired detection and measurement characteristics using stereo photogrammetry and parallel-camera stereoscopic matching. Initial experimental results on defect detection and positioning are presented, along with an analysis of the computational complexity of a complete wafer inspection. We concluded that parallel-camera stereoscopic matching combined with darkfield illumination provides the most compatible solution to the 3D defect detection and positioning requirement, detecting 10-micron defects with a positioning accuracy of better than +/- 0.5 millimeters and at a speed of less than 3 minutes per part.

Keywords: inspection, inclusions, 3D mapping

1. INTRODUCTION

The detection of embedded defects in transparent and semi-transparent material is important in a large number of applications today.1-3 In the display area alone, the increasing use of flat panel displays for phones, notebook computers, and televisions is an exploding and economically important industry. There is perhaps an even wider selection of parts that are semi-transparent, such as display diffusers. What all transparent parts have in common is that they treat light in accordance with the optical laws of reflection and refraction. That is, a beam of light incident on a point on a specular part will reflect in a direction whose angle equals the incidence angle, as measured from the local normal to the surface, though on the opposite side of the normal. If the light transmits through the part, then the light will bend at each interface according to Snell's law:

(index of incident medium) x sin I = (index of medium entered) x sin R    (1)

where I is the incidence angle measured from the normal, and R is the refracted angle of the light in the new medium. When there is some anomaly in either the surface or the interior of a transparent part, the way in which the light beam reflects or refracts will change. It is because of this change that we see the defect in the first place. For example, if the faceplate of a notebook computer screen has a small bubble in the material, the letter behind that bubble will be difficult to see. We need not even be able to see the defect itself to see its effect. In the example above, the presence of that defect will degrade customer satisfaction, and ultimately the value of the product, just because of one bad spot. The actual deformation in the panel can be on the order of a few microns, yet still have an effect that is clearly visible. No single defect is necessarily the deciding factor in the "perceived quality" of a screen or panel. However, in the previous example, the loss of visibility of a character on the computer screen may be devastating; yet if a spot or two is only slightly distorted, the user may not really notice it or may simply get used to it. The final analysis of which features or distortions are important, as seen through a transparent element, is left up to the user. By convention, the human eye can detect a 25-micron feature at arm's length. For imaging-based systems such as photo-imaging or low-power microscopy, defects as small as 10 microns can still easily be optically resolved.
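As a quick illustration of equation (1), the following Python sketch computes the refracted angle; the function name and the example index values are ours, not from the paper.

    import math

    def refracted_angle_deg(n_incident, n_entered, incidence_deg):
        """Snell's law: n_incident * sin(I) = n_entered * sin(R)."""
        sin_r = n_incident * math.sin(math.radians(incidence_deg)) / n_entered
        if abs(sin_r) > 1.0:
            raise ValueError("total internal reflection: no refracted ray")
        return math.degrees(math.asin(sin_r))

    # Illustrative values: light entering glass (n ~ 1.5) from air at 30 degrees
    print(round(refracted_angle_deg(1.0, 1.5, 30.0), 1))  # ~19.5 degrees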

Optical Inspection and Metrology for Non-Optics Industries, edited by Peisen S. Huang, Toru Yoshizawa, Kevin G. Harding, Proc. of SPIE Vol. 7432, 74321C. © 2009 SPIE. doi: 10.1117/12.830444

For high-value materials used in a variety of display, medical, or electro-optical applications, it is not sufficient to just identify the presence of these defects; they must also be localized. The requirement is the detection of inclusions 10 microns in size, with a localization accuracy of about 1 millimeter. The localization specification is driven by the desire to guide the cutting of the material, and as such does not need to be highly precise. For the purpose of inspection, the defects themselves need only be detected, not resolved. Just as we can see stars in the sky that subtend much smaller angles than we can resolve, defects of 10-micron size can reliably be detected with an image resolution much coarser than the 10-micron feature. What is key to this detection is a high signal-to-noise ratio relative to the size of the usable image blur (or pixel size for a system that is not optically limited). The remainder of this paper addresses the tradeoffs of the optical and analytical considerations for doing this type of inspection.

As with other systems inspecting semi-transparent material, there are reasons why a particular feature may show up better in darkfield or brightfield illumination. In darkfield illumination, the light is directed through the sample, but not directly into the viewing system (see Figure 1). Features in the sample that scatter or otherwise redirect the light toward the viewing system show up as bright spots on a dark field (thus the name). This method typically works well for small features that may be below the resolution limit of the system. For larger features, a darkfield approach will only outline the edges, and so a feature may appear or be counted as more than one defect. For larger and subtler defects without well-defined edges, lighting that directs the light into the viewing system will show shadows that better define the nature of the defects (see Figure 2). The authors chose to make both types of lighting available in this system for these reasons. The methods of brightfield and darkfield illumination are well known and will not be further reviewed in this paper.4


Figure 1. Diagram of a basic dark field illumination configuration. The light does not go directly into the camera.


Figure 2. Diagram of a basic bright field illumination configuration. Defects create shadows.

The signal-to-noise ratio is calculated according to the following equation:

SNR = | (Peak - mean(noise)) / (max(noise) - mean(noise)) |    (2)

where the signal and the noise are extracted from line profiles crossing the inclusion. These profiles are depicted in Figure 3 for darkfield and brightfield illumination.
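As a minimal sketch of equation (2), the following Python fragment computes the SNR from a 1D line profile; the array layout and the way the inclusion window is marked are our assumptions, not details from the paper.

    import numpy as np

    def profile_snr(profile, signal_window):
        """SNR per equation (2): |(Peak - mean(noise)) / (max(noise) - mean(noise))|.

        `profile` is a 1D line profile crossing the inclusion; `signal_window`
        is a slice covering the inclusion, and the remaining samples are noise.
        """
        profile = np.asarray(profile, dtype=float)
        mask = np.zeros(profile.size, dtype=bool)
        mask[signal_window] = True
        peak = profile[mask].max()    # darkfield case: the inclusion is a bright peak
        noise = profile[~mask]
        return abs((peak - noise.mean()) / (noise.max() - noise.mean()))

    # Illustrative profile: noisy flat background with one bright inclusion
    rng = np.random.default_rng(0)
    line = 100 + rng.normal(0.0, 2.0, 200)
    line[95:105] += 60
    print(round(profile_snr(line, slice(95, 105)), 1))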


Figure 3. Signal-to-noise ratio is measured from a line profile crossing a defect.

2. TECHNICAL DISCUSSION

2.1 Optical system resolution

For reliable detection above noise levels, the signal of interest should be at least a factor of 3 above the noise to be statistically significant, and ideally 10 times above it. Taking the 3X rule, this means that for an image with a 64:1 signal-to-noise ratio (6 bits out of 8), the area of the blur could be about 20 times larger (64/3) than the actual inclusion size. So for a 10-micron inclusion, the blur size could be 45 to 50 microns in diameter (the area of the 10-micron feature times 20+). However, detecting such defects at the subpixel level means either that the defects must be contained within one pixel, per the assumption above, or that we assume the defect may overlap as many as 4 pixels in the worst case. In this worst case, the 45 to 50 micron usable blur must be reduced by a factor of 4 in area, or about 2X in diameter, giving an effective sampling size of about 25 microns. This 25-micron sampling is still larger than an actual 10-micron defect and certainly not able to resolve or separate two closely spaced 10-micron defects. For an 80 mm sample, this blur size implies about 1800 resolvable elements across the field (80 mm divided by the 45-micron blur), or a maximum detector size of about 3600 pixels so as not to lose significant spot contrast to sampling. This approach assumes the image has low noise and the inclusions are high contrast, which is consistent with the results of inclusion detection using darkfield illumination in our testing. Other defects, which appear as larger blobs in brightfield illumination, would have correspondingly larger blur areas, but the detection of these other features of interest will need to be verified.

The other factor that makes this imaging challenging is maintaining the maximum amount of the detector sample in focus at one time in order to perform 3D localization based upon stereo. For a 45-micron blur size, the usable depth range for a lens set at f/16, assuming a 3X demagnification (inverse of magnification) of the field, would be about:

depth range ~ 4 x blur size x f-number x demagnification ~ 9 millimeters    (3)

which is about the thickness of the samples we expect to inspect. Clearly, this is working near the limit of what we expect to be able to do, but it is not unrealistic. The imaging limit for this setup would be about f/16 x 3, or roughly f/45 at the sample, which gives about a 40 to 45 micron diffraction limit, consistent with the blur size. A lower-contrast defect would potentially require the use of a shorter depth range; in that case, the defects would be detected by looking at the sample in 2 or more steps through the depth.
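The blur and depth-of-field arithmetic above is simple enough to capture in a short script; this sketch simply replays the paper's example numbers through equation (3) (the variable names are ours).

    import math

    snr = 64                  # usable signal-to-noise ratio (6 of 8 bits)
    margin = 3                # minimum detection factor above noise
    inclusion_um = 10.0       # inclusion diameter in microns

    # Allowable blur area is (snr / margin) times the inclusion area, so the
    # blur diameter scales with the square root of that ratio.
    blur_um = inclusion_um * math.sqrt(snr / margin)
    print(f"usable blur: {blur_um:.0f} um")             # ~46 um (45-50 in the text)

    # Worst case, the defect straddles 4 pixels: halve the blur diameter.
    print(f"effective sampling: {blur_um / 2:.0f} um")  # ~23 um (~25 in the text)

    # Equation (3): depth range ~ 4 x blur x f-number x demagnification
    f_number, demag = 16, 3
    dof_mm = 4 * (blur_um * 1e-3) * f_number * demag
    print(f"depth of field: {dof_mm:.1f} mm")           # ~8.9 mm (~9 in the text)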


2.2 Position mapping optical layout

There are several methods to localize the detected defects within the volume of the material. The method most commonly used in mapping defects in transparent materials is a shallow depth-of-focus range that is translated through the sample. In the extreme case, this is the basis of confocal imaging. Each depth within the sample is scanned, then the system is focused to a different depth and the sample is scanned again, one step at a time. Although this method can be very effective at localizing defects to within plus or minus one depth step, it is also very time consuming, and the depth measurement depends upon the accuracy of the depth-stepping stage. The alternate approach to obtaining 3D localization of defects is to use a stereo approach and triangulation.5-8 In the triangulation approach, each defect found is viewed from 2 or more angles, as shown in Figure 4, and its depth is given by:

Depth ~ (lens shift) / (tan(angle 1) + tan(angle 2))    (4)

This is the same type of triangulation used in land surveying to locate points. This approach can be realized either by taking one camera and moving it, or by using 2 or more cameras. Performing this mapping, however, implies some limitations based upon the resolution and depth-of-field calculations made above. Achieving the best measurement speed suggests applying this triangulation method to all defects at all depths at the same time.
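A minimal sketch of equation (4) follows (the function and variable names are ours).

    import math

    def depth_from_views(lens_shift_mm, angle1_deg, angle2_deg):
        """Equation (4): depth ~ lens shift / (tan(angle 1) + tan(angle 2))."""
        return lens_shift_mm / (math.tan(math.radians(angle1_deg)) +
                                math.tan(math.radians(angle2_deg)))

    # Illustrative values: a 13 mm lateral shift viewed at ~2.5 degrees per side,
    # consistent with the standoff example in Section 2.3.
    print(f"{depth_from_views(13.0, 2.5, 2.5):.0f} mm")  # ~149 mm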

Figure 4. Basic method of localization by means of triangulation.

The design approach described above suggests that it would be possible to look at an 80 mm sample and map defects through the depth, but doing so requires using the full depth-of-field of the optical system. Using the full depth-of-field (DOF) of the optics requires that the imaging plane of the optical system be kept parallel to the plane of the sample, with the multi-view angle coming from offsets of the field-of-view as in Figure 5 (left). For this type of sample, which is much wider than it is thick, this is a very reasonable arrangement that does not waste measurement coverage on the small overlap regions of the views, as happens with conventional photogrammetry methods (see Figure 5 (right)). Figure 6 demonstrates the effect of tilt on blur in those methods: small defects would disappear from some of the images, resulting in lost 3D localization.

Figure 5. Left: multi-view imaging from lens or sample translations, with wide overlap of the depth-of-field (DOF); each lens can see a point from a slightly different angle. Right: traditional photogrammetry system with limited overlap of the DOF and a small overlap region of the fields of view.


Figure 6. Left: multiple views in a traditional photogrammetry system. The image shows a representation of a single camera and a rotating target, which is equivalent to a stationary target and multiple tilted cameras (screen shot from PhotoModeler® 3D software). Right: the tilt creates defocus, which erases the images of small defects, so that correspondence cannot take place.

In our selected translated-camera stereoscopic solution, depth-of-field is limited by the variations in optical path within the field-of-view, because at the edge of the large field there is more solid material in the optical path.

Figure 7. Field-dependent optical path variations in a semi-transparent solid.

In Figure 7, the optical path length difference between the tilted path inside the material and the path at the center of the field is:

L1 = (Depth / n) x (1 / cos α - 1)    (5)

where Depth is the depth in the material, α is the half cone angle, and n is the refractive index of the semi-transparent medium. For a depth of 5 millimeters, a half angle of 15 degrees, and a refractive index of 1.5, the optical path difference would be 0.11 millimeters. This value should be subtracted from the calculated depth-of-field. This depth error is insignificant for shallow material, but becomes significant for thick material (such as a 100 mm glass block). Also, if more than one scan (each at a different depth) is performed on each part, the corrections to the calculated depth-of-field are depth specific. While positioning errors can be corrected, some defects may fall out of focus. Flat-field lenses are commercially available, but they are designed for imaging through air; custom lenses can be made for flat-field scanning inside the material.
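Equation (5) in sketch form, with the example values from the text (the function name is ours):

    import math

    def path_difference_mm(depth_mm, half_angle_deg, n):
        """Equation (5): L1 = (Depth / n) x (1 / cos(alpha) - 1)."""
        return (depth_mm / n) * (1.0 / math.cos(math.radians(half_angle_deg)) - 1.0)

    # 5 mm depth, 15 degree half cone angle, refractive index 1.5
    print(f"{path_difference_mm(5.0, 15.0, 1.5):.2f} mm")  # ~0.12 mm (~0.11 in the text)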


2.3 Field-of-view determination

The determination of the usable field-of-view is related to both 2D imaging and 3D capture constraints. The most critical constraint, for both brightfield and darkfield imaging, is detecting small inclusions, desirably down to 10 microns. Another constraint is the positioning accuracy requirement of 1 mm, which will be derived from the 3D stereoscopic measurements and is expected to be less demanding. The field-of-view is derived from optimizing inclusion detectability, which is correlated with the system resolution. The controlling phenomena are defocus, diffraction, camera resolution, camera noise, physical noise in the sample, and illumination artifacts. Defocus plays a critical role in this tradeoff. We have already determined to perform 3D defect localization using stereo matching, which requires every inclusion within the field-of-view to be in focus; any reduction in focus quality can make small inclusions undetectable. The depth-of-field is calculated from the allowable circle of confusion. Using the measured signal-to-noise ratio (SNR) for both brightfield and darkfield imaging, we calculated a maximum circle of confusion of 38 microns for detecting 10-micron inclusions. Overall, for this magnification, the usable depth-of-field is then about 5 millimeters. In order to obtain the desired localization resolution of 1 millimeter, we use the equations:

Z-resolution = blur / (tan(view 1) + tan(view 2))    (6)

tan(view) = blur / (2 x Z-resolution), giving an angle of ~1.5 degrees    (7)
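A one-function sketch of equation (7), confirming the small required view angle (names are ours):

    import math

    def required_view_angle_deg(blur_mm, z_resolution_mm):
        """Equation (7): tan(view) = blur / (2 x Z-resolution)."""
        return math.degrees(math.atan(blur_mm / (2.0 * z_resolution_mm)))

    # A 45-micron blur localized to 1 mm in depth
    print(f"{required_view_angle_deg(0.045, 1.0):.1f} degrees")  # ~1.3 (the ~1.5 above)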

So, clearly, the offset between images does not need to be very large. An angle between images of even 3 to 5 degrees would be plenty to localize a 45-micron blur region to within better than +/- 0.5 millimeters. This suggests that, using a detector a little larger than the minimum size, a series of images can be taken as the sample is translated by small amounts. For example, with a sample standoff of 150 millimeters (about 6 inches), a 5-degree angle shift would require a movement of only about 13 millimeters, or about half an inch. Two such movements of the sample would require a total move of about 25 millimeters, or about 1 inch. For the 80-millimeter sample, this would mean a field-of-view of about 105 millimeters total, or 4 views of about 53 millimeters (about 2 inches) to provide a 3-view stereo set. Three views ensure that each point defect should be visible in at least 2 views, in case a defect is occluded by another defect in one view. A 53-millimeter field-of-view gives a 13-micron pixel size in the image with a 4008-element detector; the actual image resolution is limited by diffraction to around 30 microns in this case. The sample can thus be covered in 4 areas, with each area requiring at least 3 images. Capturing the 3 high-resolution images (10-16 megapixel, 16 bit) can take 3 seconds, with another 3 seconds to move to the next area, depending on the camera. So a reasonable estimate for a full sample scan would be 24 to 30 seconds by this method.

2.4 Stereo calculation of defect localization

Figure 8 shows the steps implemented in the processing of multi-view data in order to map inclusions.8 Two or more images are first acquired from shifted positions. Then a threshold test is applied to each image to create a corresponding binary image that isolates potential defect areas from clear areas. A "blob detector" (i.e., a clustering algorithm) then analyzes the binary image: for each pixel identified in the threshold test as a potential defect, the algorithm assigns all connected neighbors of that pixel to the current cluster and continues to follow further connected neighbors until the cluster is fully captured. The blob detector produces a list of all such pixel clusters found in the binary image. The size (number of pixels), location, and eccentricity of each cluster are then characterized. Finally, the cluster list is trimmed to include only clusters within a certain size range (4-30 pixels); the list is also trimmed to exclude clusters with high eccentricity, which eliminates potential line defects. Up to this point, all images have been analyzed individually. In the next step, the locations of clusters are correlated between images. Under ideal conditions, where the clusters are sparse and do not overlap, each cluster identified in one view corresponds to exactly one cluster identified in another view. Figure 9 illustrates the principle of parallax for extracting depth information. For image pairs with a shift in the X-direction, the cluster maps are compared for potential coincidence at a shift in X, and the depth extracted from a potential coincidence must be consistent with the physical dimensions of the sample; the same is done for image pairs with a shift in the Y-direction.
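The threshold-and-blob-detection step described above can be sketched as follows; this uses SciPy's connected-component labeling rather than the authors' own implementation, and the threshold value, eccentricity limit, and image source are our assumptions.

    import numpy as np
    from scipy import ndimage

    def detect_blobs(image, threshold, min_px=4, max_px=30, max_eccentricity=0.95):
        """Threshold an image, cluster connected pixels, and keep compact blobs."""
        binary = image > threshold                  # isolate potential defect pixels
        labels, n_clusters = ndimage.label(binary)  # connected-component clustering
        blobs = []
        for i in range(1, n_clusters + 1):
            ys, xs = np.nonzero(labels == i)
            size = ys.size
            if not (min_px <= size <= max_px):      # trim by size (4-30 pixels)
                continue
            # Eccentricity from the cluster's second moments; values near 1
            # indicate line-like clusters, which are discarded.
            ecc = 0.0
            if size > 1:
                evals = np.sort(np.linalg.eigvalsh(np.cov(np.vstack([xs, ys]))))
                if evals[1] > 0:
                    ecc = float(np.sqrt(1.0 - evals[0] / evals[1]))
            if ecc > max_eccentricity:
                continue
            blobs.append({"centroid": (float(xs.mean()), float(ys.mean())),
                          "size": int(size)})
        return blobs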


Figure 8 process steps:
1. Multiple shifted images are acquired.
2. Images are thresholded (i.e., turned into binary images).
3. The blob detector is run.
4. Blobs within a certain size range are kept (4-30 pixels).
5. Blobs below a certain eccentricity are kept (eliminates lines).
6. Blobs along the X-shifted axis are potentially correlated.
7. Blobs along the Y-shifted axis are potentially correlated.
8. Candidates that show up in both X- and Y-shifted images are identified.
9. Depth is calculated by stereo.
10. Defect locations are reported in the die-cutter coordinate system.

Figure 8. Steps in the algorithm for mapping of defects.

Figure 9. Simplified diagram of inclusion projection to two camera locations for two shifted source locations; the figure gives the resulting image-plane shift as Sx = D(h - d)/d.
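A sketch of the correspondence-and-depth step for one X-shifted image pair follows: blob centroids are matched along rows, and the disparity is converted to depth with the standard parallel-camera relation Z = f x B / disparity, used here as a stand-in for the geometry of Figure 9. The tolerances and constants are illustrative assumptions (3800 pixels roughly corresponds to a 50 mm lens with 13-micron pixels).

    def stereo_depths(blobs_a, blobs_b, baseline_mm, focal_px,
                      z_min_mm, z_max_mm, row_tol_px=2.0):
        """Match blob centroids between two X-shifted views and triangulate depth.

        blobs_a, blobs_b: lists of (x, y) centroids in pixels from the two views.
        baseline_mm: lateral shift between the views; focal_px: focal length in
        pixels. Matches implying depths outside [z_min_mm, z_max_mm] are rejected,
        enforcing consistency with the physical dimensions of the sample.
        """
        matches = []
        for xa, ya in blobs_a:
            for xb, yb in blobs_b:
                if abs(ya - yb) > row_tol_px:   # an X shift must preserve the row
                    continue
                disparity_px = xa - xb
                if disparity_px <= 0:
                    continue
                z_mm = focal_px * baseline_mm / disparity_px
                if z_min_mm <= z_mm <= z_max_mm:
                    matches.append(((xa, ya), (xb, yb), z_mm))
        return matches

    # Illustrative use: 13 mm shift, 3800 px focal length, depth near a 150 mm standoff
    pair = stereo_depths([(512.0, 300.0)], [(183.0, 300.5)], 13.0, 3800.0,
                         z_min_mm=140.0, z_max_mm=160.0)
    print(pair)  # -> one match at a depth of ~150 mm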

3. EXPERIMENTAL RESULTS

We built the system described above, using a 16-megapixel camera. A diagram of the system is shown in Figure 10, and a photograph in Figure 11. A high-quality 50 mm focal length imaging lens with a flexible focus ring was specified. The lens was selected in combination with the entire system, i.e., with the condenser lens and the illumination position and specification, so that the brightfield illuminator forms an image on the entrance pupil of the camera lens, eliminating traces of the illuminator image on the camera sensor. A computer-driven microscope stage enables scanning and depth control. The illuminators are LED-array projectors with wide-angle lenses that fill the entire condenser lens pair and the sample area. The darkfield projectors are similar to the brightfield projector except that they are off-axis, so that their beams hit the inclusions but miss the sensor.


Figure 10. Diagram of the combined brightfield, darkfield, stereo defect inspection system. Labeled components: brightfield source, darkfield lights, brightfield condenser, microscope XYZ stage, camera, folding mirror, optional folding mirror, and optional Z stage.

Figure 11. Photograph of the 3D volume defect mapping system, showing the brightfield light, darkfield lights, condenser lenses, camera, and XYZ stage.


Figure 12. Top: 3D defect map overlaid on a sample image, showing defects at different depths, constructed from 3 images (the second and third source positions were shifted by 5 mm up and 5 mm right, respectively, relative to the first image). Bottom: a side view of a 3D map. (Scale bars in the original figure: 1 mm and 10 mm.)

We used this system to take a series of 3 images, moving the sample at fixed intervals. The resulting 3D mapping is shown in Figure 12. The system was able to detect defects as small as 10 microns in size. Not all defects had the same visibility: some, such as small particle inclusions, showed up better in darkfield, while others showed up better in brightfield lighting. Many of the features that were highlighted were in fact on the top or bottom surface of the sample. Translations between the images of up to 1/3 of the sample size were tested successfully, with only a minor reduction in detectability relative to the smaller 5 mm translations; this will be addressed through evaluation and modification (as needed) of the image analysis procedures and the capture system. It was possible to isolate surface defects using the depth mapping, but in some cases surface scratches made it difficult to see small internal inclusions consistently in all images. Better polishing of the samples would mitigate much of this problem, even though the surfaces already had an optical-quality finish.

4. CONCLUSIONS

This paper describes the approach and tradeoffs in the development of a system able to map small defects within a semi-transparent object. The approach uses multi-view stereo methods with lateral shifts to create a 3D map of the point defects that might be found in a window-like sample. Spatial detectability, localization resolution, and the speed of mapping defects are traded off to get the best overall performance. The system was built and demonstrated the ability to detect features less than 10 microns in size and localize them to better than 1 millimeter within a 10-millimeter-thick sample, at speeds much better than confocal-type scanning methods. Further image analysis work is needed to increase the area over which precise 3D positioning is feasible.



ACKNOWLEDGMENTS

This work was funded by the US Dept. of Homeland Security, Domestic Nuclear Detection Office, through contract number HSHQDC-08-C-00174.

REFERENCES

[1] Malacara, D., [Optical Shop Testing], Wiley Interscience (1978).
[2] Sinha, M., Rangarajan, P., Watkins, V., Gardner, M., and Harding, K., "Scratch Visibility: What People See When They Look At Scratches," Proc. International Coatings for Plastics Symposium (2004).
[3] Harding, K., "Optical flaw enhancement methods for glass and specular parts," Proc. Robots and Vision '96 (1997).
[4] Harding, K., "Machine Vision Lighting," [The Encyclopedia of Optical Engineering], Marcel Dekker (2000).
[5] Hobrough, G. and Hobrough, T., "Stereopsis for robots by iterative stereo image matching," Proc. SPIE 449, 94 (1983).
[6] Boyer, K. L. and Sotak, G. E., Jr., "Structural Stereo Matching of Laplacian-of-Gaussian Contour Segments for 3-D Perception," Proc. SPIE 1005, 219 (1988).
[7] Wang, Z. F. and Ohnishi, N., "Intensity-based stereo vision: from 3D to 3D," Proc. SPIE 2354, 434 (1994).
[8] Parent, A. J. and Wang, P. S. P., "Distance recovery of 3D objects from stereo images using Hough transform," Proc. SPIE 2354, 348 (1994).

