Republic of Iraq
Ministry of Higher Education and Scientific Research
University of Technology
Department of Production Engineering and Metallurgy
SYSTEM DEVELOPMENT OF IMAGE BASED 3D SURFACE RECONSTRUCTION

A Dissertation Submitted to the Department of Production Engineering and Metallurgy, University of Technology, in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in Production Engineering

By
Ahmed Abdullah Ebraheem Al-Jaf
(B.Sc. 1984, M.Sc. in Production Engineering 1997)
Supervised by Asst. Prof. Dr. Tahseen F. Abbas
AD 2013
1433 AH
Dedication
To the spirits of my Parents
To my Wife
To my Daughter Fotton
To my Sons Aram and Yousif
Supervisor Certification

I certify that this dissertation entitled "System Development of Image Based 3D Surface Reconstruction" was prepared by (Ahmed Abdullah Ebraheem Al-Jaf) under my supervision at the Department of Production Engineering and Metallurgy, University of Technology, Baghdad, Iraq, in partial fulfillment of the requirements for the Degree of Doctor of Philosophy in Production Engineering.
Signature:
Name: Asst. Professor Dr. Tahseen F. Abbas
Date: / / 2013

In view of the available recommendation, I forward this dissertation for debate by the examining committee.

Signature:
Name: Asst. Professor Dr. Sami A. Ajeel
Deputy Head of Department for Scientific and Postgraduate Affairs
Date: / / 2013
Examining Committee Certification

We, the examining committee, certify that we have read this dissertation entitled "System Development of Image Based 3D Surface Reconstruction" and have examined the student (Ahmed Abdullah Ebraheem Al-Jaf) in its content and in what is related to it, and in our opinion it meets the standard of a dissertation for the Degree of Doctor of Philosophy in Production Engineering.

Signature:
Name: Dr. Imad Hussain Merza Al-Hussaini
Title: Professor (Chairman)
Date: / / 2013

Signature:
Name: Dr. Ali Y. Fattah
Title: Asst. Prof. (Member)
Date: / / 2013

Signature:
Name: Dr. Wissam Kadhim Hamdan
Title: Asst. Prof. (Member)
Date: / / 2013

Signature:
Name: Dr. Ali Abbar Khlef
Title: Asst. Prof. (Member)
Date: / / 2013

Signature:
Name: Dr. Ahmed Abdul Hussain Ali
Title: Asst. Prof. (Member)
Date: / / 2013

Signature:
Name: Dr. Tahseen F. Abbas
Title: Asst. Prof. (Supervisor)
Date: / / 2013

Approved by the Production Engineering and Metallurgy Department

Signature:
Name: Dr. Ahmed Ali Akber
Title: Asst. Prof. (Head of Department)
Date: / / 2013
ACKNOWLEDGEMENTS

I am greatly indebted to my supervisor Dr. Tahseen F. Abbas for his valuable guidance, encouragement and help throughout this work, and I thank him for his efforts. My appreciation goes to the Head and staff of the Department of Production Engineering and Metallurgy. I would also like to thank Mr. Abdulwahab Al-Ani of Alanbar Technical Institute for his help. My thanks are due to my friends Hussain Bahhon, Emad Ali, Mustafa Khalil and Aqeel Sabree for their support. Finally, I express my heartfelt gratitude to my family members for their patience and support during my study.
Ahmed Al-Jaf
Abstract
Reconstructing three-dimensional engineering surfaces from two-dimensional images of the original surface is difficult. The objective of this work is to facilitate the reconstruction of three-dimensional sculptured surfaces by processing two-dimensional images. An image-based slicing technique and system for 3D sculptured surface reconstruction of a real object is proposed, developed and implemented. The developed system is relatively low cost and easy to implement, and has been tested on five different sculptured surfaces. The test surface is placed in a cubical liquid container filled with a dark liquid (dark blue soluble ink), and the intersections between the liquid level and the real sample are generated by raising the movable base step by step. At each upward step an image is captured using a single digital camera positioned at a convenient height, perpendicular to the center of the movable base, and the depth of each cross section is obtained directly from the captured image. The images were processed using the MATLAB version 7.12.0.635 (R2011a) package, where they were converted into binary data to find the edges; these edges were then redrawn and assembled in AutoCAD 2013 after assigning an elevation value to each slice, and the connected slices were rendered to reconstruct the sculptured surface.
Through reconstructing the 3D engineering sculptured surfaces, it is found that the general trend corresponds largely to the shapes of the original models for all five tested samples. To increase the accuracy of the correspondence between the reconstructed surfaces and the original surfaces of the samples, three methods have been used: spline interpolation, nearest-neighbor interpolation and least-squares fitting, applied after dividing the surface into a 50x50 grid; the surfaces were then reconstructed from the enhanced data. It has been found that all the mathematical interpolation and fitting methods greatly enhance the final shape, but least-squares fitting gives greater accuracy than the other two methods. Finally, the data of one model enhanced by least-squares fitting has been manipulated to generate the toolpath, and one sculptured surface has been machined using a CNC milling machine.
Contents

Title ... Page No.

Abstract ... I
Contents ... III
List of Figures ... X
List of Tables ... XIX
List of Symbols ... XX

CHAPTER ONE  GENERAL INTRODUCTION
1-1 Overview ... 1
1-2 Surfaces ... 1
1-2-1 Sculptured Surface ... 2
1-3 The Aim of the Dissertation ... 4
1-4 Matlab Package ... 5
1-5 Dissertation Organization ... 5

CHAPTER TWO  LITERATURE REVIEW
2-1 Introduction ... 7
2-2 Summary ... 11

CHAPTER THREE  THEORETICAL CONSIDERATIONS OF 3D MODEL RECONSTRUCTION
3-1 Introduction ... 13
3-2 Least Squares Fitting Method ... 13
3-2-1 Surface Fitting by Using 2D Least Squares ... 14
3-2-2 Linear 2D Least Squares ... 14
3-2-3 Quadratic 2D Least Squares ... 16
3-2-4 Cubic 2D Least Squares ... 17
3-3 Spline Interpolation Method ... 18
3-3-1 First Order Splines ... 19
3-3-2 Quadratic Spline ... 20
3-3-3 Cubic Spline ... 21
3-3-4 1D Spline Interpolation Function ... 22
3-3-5 2D Spline Interpolation Function ... 23
3-4 Nearest-neighbor Interpolation ... 24
3-4-1 Nearest-neighbor Interpolation Function ... 25
3-5 Statistical Formulation ... 25
3-5-1 Error ... 25
3-5-2 Maximum Error ... 26
3-5-3 Average ... 26
3-5-4 Standard Deviation ... 26
3-5-5 Percentage of Error ... 26
3-6 Digital Image Processing ... 27
3-6-1 Colored Image ... 27
3-6-2 Greyscale Image ... 28
3-6-3 Binary Image ... 29
3-6-4 Thresholding ... 29
3-6-4-1 Thresholding Methods ... 32
3-6-5 Edge Detection ... 32
3-6-5-1 Edge Detection Techniques (Operators or Filters) ... 33
3-6-5-2 Canny Edge Detection ... 34
3-6-6 Noise Reduction (Denoising) ... 39
3-6-6-1 Median Filter ... 39
3-7 Toolpath Generation ... 40

CHAPTER FOUR  EXPERIMENTAL WORK
4-1 System Fabrication ... 42
4-2 Steps for Achieving the Work ... 47
4-2-1 Sample 1 ... 50
4-2-1-1 Step 1 ... 50
4-2-1-2 Step 2 ... 53
4-2-1-3 Step 3 ... 58
4-2-1-4 Step 4 ... 61
4-2-1-5 Step 5 ... 62
4-2-1-6 Step 6 ... 66
4-2-1-7 Step 7 ... 68
4-2-1-8 Step 8 ... 69
4-2-2 Sample 2 ... 72
4-2-3 Sample 3 ... 84
4-2-4 Sample 4 ... 94
4-2-5 Sample 5 ... 104
4-3 The Implementation ... 112

CHAPTER FIVE  RESULTS AND DISCUSSION
5-1 Introduction ... 115
5-2 Statistical Comparison ... 115
5-3 Error Distribution ... 120
5-4 The Matching between the Surfaces Constructed Using Original Data and the Samples Reconstructed Using Least Square Fitting Data for Four Samples ... 128
5-5 Discussion ... 132

CHAPTER SIX  CONCLUSIONS AND SUGGESTIONS FOR FUTURE WORKS
6-1 Conclusions ... 137
6-2 Suggestions for Future Works ... 138
6-3 Contribution of this Dissertation ... 139

REFERENCES ... 140
APPENDICES ... 146
List of Figures

Figure ... Page

Figure (1-1) Sculptured surface [15] ... 3
Figure (3-1) Natural spline [33] ... 18
Figure (3-2) First order spline [33] ... 20
Figure (3-3) Second order spline [33] ... 20
Figure (3-4) Cubic spline [33] ... 22
Figure (3-5) Curve of sine function by using 1st spline interpolation function ... 23
Figure (3-6) Nearest-neighbor method ... 25
Figure (3-7) A color image ... 28
Figure (3-8) The left is a greyscale image and the right is its different shades ... 29
Figure (3-9) The left is a binary image and the right is its zero-one representation ... 29
Figure (3-10) Thresholding [47] ... 31
Figure (3-11) Right is the original image and left is the edge image [50] ... 34
Figure (3-12) 3x3 masks to compute gradient magnitude and angle [61] ... 36
Figure (3-13) Original image [61] ... 37
Figure (3-14) Sobel edge detected [61] ... 37
Figure (3-15) Roberts edge detected [61] ... 38
Figure (3-16) Prewitt edge detected [61] ... 38
Figure (3-17) Canny edge detected [61] ... 39
Figure (3-18) Spatial filter of 3x3 mask [48] ... 40
Figure (3-19) Zig-zag toolpath of free-form surfaces [67] ... 41
Figure (4-1) The system used in the work ... 43
Figure (4-2) Section of the system that has been used in the work ... 43
Figure (4-3) The main parts of the system ... 44
Figure (4-4) Part 2 ... 45
Figure (4-5) Parts 7 and 8 ... 45
Figure (4-6) The collection of parts 5, 7 and 8 ... 45
Figure (4-7) Assembly of parts 2, 3, 4, 5, 7 and 8 ... 46
Figure (4-8) System section drawing ... 46
Figure (4-9) The system connected online to the PC ... 47
Figure (4-10) A flowchart of the processing steps ... 49
Figure (4-11) Real image of sample1 ... 50
Figure (4-12) Captured images 1 to 4 ... 50
Figure (4-13) Captured images 5 to 8 ... 51
Figure (4-14) Captured images 9 to 12 ... 51
Figure (4-15) Captured images 13 to 16 ... 51
Figure (4-16) Captured images 17 to 20 ... 51
Figure (4-17) Captured images 21 to 24 ... 52
Figure (4-18) Captured images 25 to 28 ... 52
Figure (4-19) Captured images 29 to 32 ... 52
Figure (4-20) Captured image 33 ... 52
Figure (4-21) Block diagram showing steps for edge detection of captured images ... 53
Figure (4-22) Color image representing one slice of sample1 ... 54
Figure (4-23) Gray scale and pixel region of the slice in figure (4-22) ... 54
Figure (4-24) Binary image (left) and the histogram after thresholding (right) ... 55
Figure (4-25) Edge detected and pixel region ... 55
Figure (4-26) Edge detected for captured images 1 to 4 ... 56
Figure (4-27) Edge detected for captured images 5 to 8 ... 56
Figure (4-28) Edge detected for captured images 9 to 12 ... 56
Figure (4-29) Edge detected for captured images 13 to 16 ... 56
Figure (4-30) Edge detected for captured images 17 to 20 ... 57
Figure (4-31) Edge detected for captured images 21 to 24 ... 57
Figure (4-32) Edge detected for captured images 25 to 28 ... 57
Figure (4-33) Edge detected for captured images 29 to 32 ... 57
Figure (4-34) Edge detected for captured image 33 ... 58
Figure (4-35) Edges reconstructed 1 to 4 ... 58
Figure (4-36) Edges reconstructed 5 to 8 ... 58
Figure (4-37) Edges reconstructed 9 to 12 ... 59
Figure (4-38) Edges reconstructed 13 to 16 ... 59
Figure (4-39) Edges reconstructed 17 to 20 ... 59
Figure (4-40) Edges reconstructed 21 to 24 ... 59
Figure (4-41) Edges reconstructed 25 to 28 ... 60
Figure (4-42) Edges reconstructed 29 to 32 ... 60
Figure (4-43) Edge reconstructed 33 ... 60
Figure (4-44) Slices have been assembled and elevated ... 61
Figure (4-45) Surface divided in order to apply interpolating and fitting methods ... 62
Figure (4-46) Block diagram showing steps for enhancing and reconstructing the 3D surface ... 63
Figure (4-47) Sample1 as captured data ... 64
Figure (4-48) Sample1 after spline interpolating ... 64
Figure (4-49) Sample1 after nearest interpolating ... 65
Figure (4-50) Sample1 after least square fitting ... 65
Figure (4-51) Sample1 - original data - meshed ... 66
Figure (4-52) Sample1 - as captured data - meshed ... 66
Figure (4-53) Sample1 - spline interpolating data - meshed ... 67
Figure (4-54) Sample1 - nearest interpolating data - meshed ... 67
Figure (4-55) Sample1 - least square fitting data - meshed ... 68
Figure (4-56) Toolpath for sample1 after least square fitting ... 68
Figure (4-57) Sample1 - original data - surface model ... 69
Figure (4-58) Sample1 - as captured data - surface model ... 70
Figure (4-59) Sample1 - spline interpolating data - surface model ... 70
Figure (4-60) Sample1 - nearest interpolating data - surface model ... 71
Figure (4-61) Sample1 - least square data - surface model ... 71
Figure (4-62) Real image of sample2 ... 72
Figure (4-63) Captured images and their edges ... 72
Figure (4-64) Captured images and their edges ... 73
Figure (4-65) Captured images and their edges ... 73
Figure (4-66) Captured images and their edges ... 74
Figure (4-67) Captured images and their edges ... 74
Figure (4-68) Slices of sample2 have been collected and elevated ... 75
Figure (4-69) Sample2 as captured data ... 76
Figure (4-70) Sample2 after spline interpolating ... 76
Figure (4-71) Sample2 after nearest interpolating ... 77
Figure (4-72) Sample2 after least square fitting ... 77
Figure (4-73) Sample2 - original data - meshed ... 78
Figure (4-74) Sample2 - as captured data - meshed ... 78
Figure (4-75) Sample2 - spline interpolating data - meshed ... 79
Figure (4-76) Sample2 - nearest interpolating data - meshed ... 79
Figure (4-77) Sample2 - least square fitting data - meshed ... 80
Figure (4-78) Toolpath for sample2 after least square fitting ... 80
Figure (4-79) Sample2 - original data - surface model ... 81
Figure (4-80) Sample2 - as captured data - surface model ... 81
Figure (4-81) Sample2 - spline interpolating data - surface model ... 82
Figure (4-82) Sample2 - nearest interpolating data - surface model ... 82
Figure (4-83) Sample2 - least squared data - surface model ... 83
Figure (4-84) Actual model of sample3 ... 84
Figure (4-85) Random captured images for sample3 ... 84
Figure (4-86) Edges detected for images in figure (4-85) ... 84
Figure (4-87) Slices of sample3 have been assembled and elevated ... 85
Figure (4-88) Sample3 as captured data ... 86
Figure (4-89) Sample3 after spline interpolating ... 86
Figure (4-90) Sample3 after nearest interpolating ... 87
Figure (4-91) Sample3 after least square fitting ... 87
Figure (4-92) Sample3 - original data - meshed ... 88
Figure (4-93) Sample3 - as captured data - meshed ... 88
Figure (4-94) Sample3 - spline interpolating data - meshed ... 89
Figure (4-95) Sample3 - nearest interpolating data - meshed ... 89
Figure (4-96) Sample3 - least square fitting - meshed ... 90
Figure (4-97) Toolpath for sample3 after least square fitting ... 90
Figure (4-98) Sample3 - original data - surface model ... 91
Figure (4-99) Sample3 - captured data - surface model ... 91
Figure (4-100) Sample3 - spline interpolating data - surface model ... 92
Figure (4-101) Sample3 - nearest interpolating data - surface model ... 92
Figure (4-102) Sample3 - least square fitting data - surface model ... 93
Figure (4-103) Real image of sample4 ... 94
Figure (4-104) Random captured images for sample4 ... 94
Figure (4-105) Edges detected for images in figure (4-104) ... 94
Figure (4-106) Slices of sample4 have been assembled and elevated ... 95
Figure (4-107) Sample4 as captured data ... 96
Figure (4-108) Sample4 after spline interpolating ... 96
Figure (4-109) Sample4 after nearest interpolating ... 97
Figure (4-110) Sample4 after least square fitting ... 97
Figure (4-111) Sample4 - original data - meshed ... 98
Figure (4-112) Sample4 - as captured data - meshed ... 98
Figure (4-113) Sample4 - spline interpolating data - meshed ... 99
Figure (4-114) Sample4 - nearest interpolating data - meshed ... 99
Figure (4-115) Sample4 - least square fitting data - meshed ... 100
Figure (4-116) Toolpath for sample4 after least square fitting ... 100
Figure (4-117) Sample4 - original data - surface model ... 101
Figure (4-118) Sample4 - captured data - surface model ... 101
Figure (4-119) Sample4 - spline interpolating data - surface model ... 102
Figure (4-120) Sample4 - nearest interpolating data - surface model ... 102
Figure (4-121) Sample4 - least square fitting data - surface model ... 103
Figure (4-122) Real shape of sample 5 ... 104
Figure (4-123) Captured images 1 to 4 and their edges ... 104
Figure (4-124) Captured images 5 to 8 and their edges ... 105
Figure (4-125) Captured images 9 to 12 and their edges ... 105
Figure (4-126) Captured images 13 to 15 and their edges ... 106
Figure (4-127) Slices of sample 5 assembled and elevated ... 106
Figure (4-128) Sample 5 as captured data ... 107
Figure (4-129) Sample 5 after spline interpolating ... 107
Figure (4-130) Sample5 after nearest interpolating ... 108
Figure (4-131) Sample5 after least square fitting ... 108
Figure (4-132) Sample5 - spline interpolating data - meshed ... 109
Figure (4-133) Sample5 - nearest interpolating data - meshed ... 109
Figure (4-134) Sample5 - least square fitting data - meshed ... 110
Figure (4-135) Zig-zag toolpath for sample5 after least square fitting ... 110
Figure (4-136) Sample 5 as captured data - surface model ... 111
Figure (4-137) Sample 5 - spline interpolating data - surface model ... 111
Figure (4-138) Sample 5 - nearest interpolating data - surface model ... 111
Figure (4-139) Sample 5 - least square fitting data - surface model ... 111
Figure (4-140) The raw material as a block ... 112
Figure (4-141) Toolpath for machining sample3 ... 113
Figure (4-142) One stage of machining sample3 ... 113
Figure (4-143) Advanced stage of machining sample3 ... 114
Figure (4-144) Sample3 has been fully machined ... 114
Figure (5-1) Error distribution for sample 1 when captured ... 120
Figure (5-2) Error distribution for sample 1 when spline interpolated ... 120
Figure (5-3) Error distribution for sample 1 when nearest interpolated ... 121
Figure (5-4) Error distribution for sample 1 when least square fitted ... 121
Figure (5-5) Error distribution for sample 2 when captured ... 122
Figure (5-6) Error distribution for sample 2 when spline interpolated ... 122
Figure (5-7) Error distribution for sample 2 when nearest interpolated ... 123
Figure (5-8) Error distribution for sample 2 when least square fitted ... 123
Figure (5-9) Error distribution for sample 3 when captured ... 124
Figure (5-10) Error distribution for sample 3 when spline interpolated ... 124
Figure (5-11) Error distribution for sample 3 when nearest interpolated ... 125
Figure (5-12) Error distribution for sample 3 when least square fitted ... 125
Figure (5-13) Error distribution for sample 4 when captured ... 126
Figure (5-14) Error distribution for sample 4 when spline interpolated ... 126
Figure (5-15) Error distribution for sample 4 when nearest interpolated ... 127
Figure (5-16) Error distribution for sample 4 when least square fitted ... 127
Figure (5-17) Matching between the surfaces of original data and least square data of sample1 ... 128
Figure (5-18) Matching between the surfaces with original data and least square data of sample2 ... 129
Figure (5-19) Matching between the surfaces with original data and least square data of sample3 ... 130
Figure (5-20) Matching between the surfaces with original data and least square data of sample4 ... 131
List of Tables

Table (5-1) Statistics comparison of different analyses for sample1 ... 116
Table (5-2) Statistics comparison of different analyses for sample2 ... 117
Table (5-3) Statistics comparison of different analyses for sample3 ... 118
Table (5-4) Statistics comparison of different analyses for sample4 ... 119
List of Symbols and Abbreviations

Symbol: Description
Ea: The average
Efi: Absolute difference error
Epercentage: Percentage error
Zfi: The value of the original elevation
Zi: The value of the captured, spline, nearest and least square elevation
1st: First degree
2nd: Second degree
3rd: Third degree
2D: Two dimensional
3D: Three dimensional
CAD: Computer Aided Design
CAM: Computer Aided Manufacturing
CC: Cutter Contact
CFD: Computational Fluid Dynamics
CNC: Computer Numerical Control
CPU: Central Processing Unit
CT: Computed Tomography
DoGs: Difference of Gaussians
GHz: Gigahertz
GPU: Graphics Processing Unit
interp1: 1D interpolation (MATLAB function)
interp2: 2D interpolation (MATLAB function)
interp3: 3D interpolation (MATLAB function)
LoG: Laplacian of Gaussian
MATLAB: Matrix Laboratory
Max Zi: Maximum value of the captured, spline, nearest and least square elevation
Max Zo: Maximum value of the original elevation of the samples
MRI: Magnetic Resonance Imaging
NC: Numerical Control
NURBS: Non-Uniform Rational B-Splines
PDE: Partial Differential Equation
RAM: Random Access Memory
RGB: Red, Green, Blue
S.D: Standard Deviation
SCT: Spiral Computed Tomography
Chapter (1)
General Introduction

1-1 Overview
3D reconstruction techniques are widely used in diversified applications, such as vision-based navigation systems, vision-based industrial inspection systems, and so on. Many approaches have been proposed, ranging from structured-pattern-projection-based reconstruction to stereo vision and structure from motion [1]. Three-dimensional model acquisition remains a very active research area in computer vision, and one of the key questions is how to reconstruct accurate models from a set of 2D images. 3D reconstruction is the process of regenerating the 3D information of an object from its 2D images; it has been an important part of computer vision studies. Computer vision deals with the automatic extraction of various kinds of information from images, and the main aim of machine vision is to let machines visualize the world as humans do and let them interact with it [2]. 3D surface reconstruction refers to the procedure of 3D reconstruction based on 2D image contours, using computer graphics and image processing technology [3].
1-2 Surfaces
The intuitive notion of a surface is a continuous set of points approximating a plane in the neighborhood of each of the points. Another conception of a surface is the locus of a moving point, line or curve. Surface generation on a CAD system usually requires wireframe entities: lines, curves and points [4]. There are different types of surfaces, starting from the plane surface, which is generated by interpolation of four end points [4]; a surface of revolution, which is a surface in Euclidean space created by rotating a curve (the generatrix) around a straight line in its plane (the axis) [5]; and a parametric surface, which is defined explicitly by the range of values of a parametric function [6]. A large complex surface can be defined by a composite collection of simpler patches while preserving a certain level of continuity, as in a Hermite surface [7]; a Bezier surface requires a mesh of 4x4 control points [8]; and B-spline surfaces are popular and successful methods in Computer Aided Geometric Design. Beyond creating these types of surfaces, the modification of existing ones is also of great importance [9].
1-2-1 Sculptured Surface
Sculptured surfaces are common in a wide variety of products such as automobiles, household appliances, and watercraft and aircraft components [10]. These surfaces are characterized by their smooth shape and may include features such as valleys, mounts, or blends to fulfill requirements such as an optimized airflow or a desirable ergonomic shape [11]. They are used in the design of car bodies, ship hulls, molds, dies and other applications where smooth surfaces are required. From the standpoint of manufacturing, sculptured surfaces are generally produced in three stages: roughing, finishing and benchwork (benchwork, consisting of grinding and polishing, is used to remove the scallops left by machining). Roughing cuts are used to remove most of the material from a workpiece while leaving the part slightly oversized. Finish machining of a sculptured surface removes as much as possible of the remaining material from the roughed-out workpiece and attempts to machine the part to its final dimensions [10]. A sculptured surface is a surface that can only be represented as the image of a sufficiently regular mapping of a set of points in a domain into 3D space; it can be represented by a set of curves that connect the design points of the surface [12]. Two main approaches are commonly used for obtaining curved surfaces: the first exploits the parametric curve representation, while the second uses contouring planes (frequently geometrically equally spaced parallel planes) to intersect the surface [13]. Sculptured surfaces in CAD/CAM are typically defined by a vector-valued parametric equation of the form r(u,w) = [x(u,w), y(u,w), z(u,w)] for 0 ≤ u, w ≤ 1. This definition can be used to generate the sequence of points and normals that are used to define numerically controlled (NC) tool cutter paths to drive an NC milling machine [14]. Figure (1-1) represents an example of a sculptured surface.

Figure (1-1) Sculptured surface [15]

In sculptured surface modeling, complex contours are represented as a network of patches, each expressed in terms of known points, vectors, and curves. The contour of each patch conforms to that of the small surface section it is intended to represent [15].
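The parametric definition r(u,w) above can be made concrete with a short sketch. The following Python/NumPy code evaluates a Bezier patch from a 4x4 control net over a grid of (u, w) values in [0, 1]^2, the kind of point sequence from which CC points for a toolpath could be taken. This is purely illustrative (the dissertation's own programs were written in MATLAB, and the control net here is an assumed example, not one of the thesis samples).

```python
# Illustrative sketch: sampling a parametric surface r(u, w) on a grid.
# Assumed example data; not the dissertation's actual surfaces or code.
import numpy as np
from math import comb

def bernstein(i, n, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bezier_surface(ctrl, nu=10, nw=10):
    """Evaluate a Bezier patch defined by an (m+1)x(n+1)x3 control net
    on an nu x nw grid of (u, w) parameter values in [0, 1]^2."""
    ctrl = np.asarray(ctrl, dtype=float)
    m, n = ctrl.shape[0] - 1, ctrl.shape[1] - 1
    us = np.linspace(0.0, 1.0, nu)
    ws = np.linspace(0.0, 1.0, nw)
    pts = np.zeros((nu, nw, 3))
    for a, u in enumerate(us):
        for b, w in enumerate(ws):
            for i in range(m + 1):
                for j in range(n + 1):
                    pts[a, b] += bernstein(i, m, u) * bernstein(j, n, w) * ctrl[i, j]
    return pts

# A 4x4 control net of a gently curved patch (assumed demo data).
ctrl = [[[i, j, 0.2 * (i - 1.5)**2 + 0.2 * (j - 1.5)**2] for j in range(4)]
        for i in range(4)]
surf = bezier_surface(ctrl, nu=21, nw=21)
print(surf.shape)  # (21, 21, 3)
```

A Bezier patch interpolates its corner control points, so `surf[0, 0]` equals the first control point; the interior points blend the whole net.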
1-3 The Aim of the Dissertation
The main objective of this dissertation is to develop and improve methods for the reconstruction of sculptured surfaces based on an image-slicing technique using a single digital camera, applicable in CAD/CAM and reverse engineering. To accomplish this aim it was necessary to:
- Propose, design and implement a mechanical system using a single digital camera, with the possibility of connection to a PC, suitable for capturing images.
- Prepare an appropriate Matlab program to process the digital images.
- Reconstruct and assemble the slices using AutoCAD.
- Develop an appropriate program to reconstruct three-dimensional surfaces.
- Develop appropriate Matlab programs for enhancing the reconstructed surfaces using mathematical methods.
- Develop a program for generating the toolpath of the reconstructed surfaces.
- Machine samples of the reconstructed surfaces.
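The per-slice processing step at the heart of this pipeline (binarize a captured slice image, extract the boundary between sample and dark liquid, and tag the boundary with the slice elevation) can be sketched as follows. This is a minimal illustrative version in Python/NumPy; the dissertation's actual implementation used MATLAB's image processing functions and AutoCAD, and the threshold value, image size and function name here are assumptions for the demo.

```python
# Hedged sketch of one slice-processing step: threshold, boundary
# extraction, and elevation tagging. Illustrative only; the thesis
# work used MATLAB (binarization + edge detection) and AutoCAD.
import numpy as np

def slice_contour(gray, threshold, z):
    """Binarize a 2D grayscale array (0-255) and return the boundary
    pixels of the foreground region as (x, y, z) points."""
    binary = (gray > threshold).astype(np.uint8)
    # A pixel is on the boundary if it is foreground and at least one
    # 4-connected neighbour is background (morphological boundary).
    padded = np.pad(binary, 1)  # zero border so edges of the image count
    neigh_min = np.minimum.reduce([
        padded[:-2, 1:-1], padded[2:, 1:-1],
        padded[1:-1, :-2], padded[1:-1, 2:]])
    edge = (binary == 1) & (neigh_min == 0)
    ys, xs = np.nonzero(edge)
    return np.column_stack([xs, ys, np.full(xs.size, z, dtype=float)])

# Synthetic slice: a bright disc (sample) on a dark background (ink).
yy, xx = np.mgrid[0:64, 0:64]
gray = np.where((xx - 32)**2 + (yy - 32)**2 <= 20**2, 200, 30)
pts = slice_contour(gray, threshold=128, z=2.5)
print(pts.shape[1])  # 3 columns: x, y, z
```

Stacking the point sets returned for successive liquid levels, each with its own z, yields the cloud of contour slices that the later chapters assemble and fit into a surface.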
1-4 Matlab Package
MATLAB is a tool and custom software development environment for computational tasks. It provides many functions, including mathematical ones, that facilitate the solution of various kinds of formulae, and the MATLAB programming language also helps in writing functions and special programs, in addition to many other features. MATLAB version 7.12.0.635 (R2011a) has been used in this work for the following tasks:
- Computation.
- Algorithm development.
- Data acquisition.
- Modeling and simulation.
- Data analysis, exploration, and visualization.
1-5 Dissertation Organization
In order to achieve the aim of this work, the dissertation is divided into six chapters. A concise presentation of the scientific implications of research on 3D surface reconstruction is introduced in Chapter Two. Chapter Three deals with theoretical considerations, including the mathematical relationships used in the interpolation and fitting methods for enhancing the final surfaces; image processing is also briefed in this chapter. Chapter Four describes the proposed mechanical system used for capturing images of the models tested in this work; in addition, the experimental work and the procedures of image processing, reconstruction of the 3D surfaces and implementation of one sample are included in this chapter. The results are discussed in Chapter Five. Finally, the conclusions and suggestions for future works are presented in Chapter Six.
Chapter (2)
Literature Review 2-1 Introduction Here the published literature that are related to the scope of this work has been reviewed . The literature survey is presented in ascending order to the publishing date of each article. M. Celenk (1995) [16] has demonstrated that 3D object surfaces can be recognized from their cross-sectional images. He has described a method for recognizing 3D objects from their serial cross sections. Object regions of interest in cross-sectional binary images of successive slices are aligned with those of the models. Cross sectional differences between the object and the models are measured in the direction of the gradient of the cross section boundary. C. L. Bajaj… et al. (1996) [17] presented a powerful algorithm for reconstructing surfaces from a set of planar contours or image slices. The theoretical derivation of the correspondence and tiling rules allowed their algorithm, given any input data, to generate a unique topology satisfying the desired surface criteria. This unified approach has led to reconstructed surfaces which correspond well with the surface of the physical objects that were imaged. J. Chai… et al. (1998) [18] They have described a new method to interpolate sub-contours and reconstruct terrain surface from a contour map. The advantage of this method is that the surfaces generated are more accurate and smooth. The main contribution of their work is that a new type 7
of PDE surface (Partial Differential Equation) , the gradient controlled PDE surface, is set up to express rational terrain surfaces. Differing from conventional PDE surfaces, it can satisfy both the height and gradient boundary conditions. The effectiveness and usefulness of the proposed method have been confirmed in several examples. They show the method is suitable to capture complex terrain shapes. J. HE… et al. (2001) [19] they proposed a method of reconstructing triangular surfaces from given medical slices. The method tries to solve the problem by using contours interactively extracted from the slices. After that the problem changes to connecting the contours into a triangular mesh. For the purpose, the algorithm has to deal with two difficult problems: contour correspondence and branching. The correspondence problem is solved here by exploiting information both of the extracted contours and the original slices. Then, they triangulate the contours by a piecewise-linear interpolation scheme with ability to handle degenerate portions in the mesh. The most significant advantage in integrating the grey-level slice information is that we can match the contours more correctly to find reliable contour correspondence for modeling complex anatomical structures. C. Gold & M. Dakowicz (2002) [20] derived surfaces from contours and proposed a general approach to generate a skeleton points ignoring skeleton points between contours and assign elevations to these skeleton points and eliminate flat triangles , estimate slope information at each data point and perform weighted- average interpolation (Sibson interpolation). N. Pfeifer (2002) [21] reconstructed a surfaces over a triangulation the so-called subdivision. In this approach the given triangulation is refined in steps, and in each step new vertices and edges are inserted into the triangulation. This is performed in a way that the smoothness of the 8
triangulation is increased at each level: the angles between adjacent triangles converge towards 180°.

Y. Li et al. (2005) [22] proposed a new method for 3D reconstruction based on optimal image contour mapping. They also proposed a novel data structure to represent the corresponding 3D objects. In their approach, all object contours in the same slice as well as in adjacent slices are automatically segmented and combined in a hierarchical tree data structure. This data structure allows fast 3D object retrieval and 3D component analysis.

J. Marker et al. (2006) [23] presented a volumetric approach to reconstructing a smooth surface from a sparse set of parallel contours. It creates a volume dataset by interpolating 2D filtered distance fields; the zero isosurface embedded in the computed volume provides the desired result.

L. Gan & Z. Qu (2006) [24] focused on reconstructing the 3D shape of an object from a series of cross-sectional images. Their 3D reconstruction method is derived from industrial computed tomography images; the cross-section image resulting from the 2D convolution projection reconstruction is the original image for the 3D reconstruction.

K. S. Shreedhara & S. P. Indira (2006) [25] used their own editing software to reconstruct a realistic 3D object on the screen for regular shapes. The initial contours are constructed by two methods (a Bezier technique and an edge-detection technique); the contours are stacked one above the other in a regular order and are joined using a mesh.

M. Liang Wang et al. (2007) [1] proposed a simple method, inspired by layer-scanning correlative methods, for performing 3D model acquisition based on scanning the cross-section of the object surface. The idea is to use multiple 2D projections of parallel 3D curves to recover the level curves of the object. The test object is placed in a cylindrical water container and the level curves are generated by raising the water level. The depth of each level curve can be acquired by calculating the geometry of the recorded image.

W. Ki Jeong (2008) [26] proposed a probabilistic framework with a user-assisted backward-tracking approach that enables the user to robustly extract 3D elongated objects. Since a 3D vector field V is created from slice-to-slice correspondences, the initial curve can be traced explicitly through the 3D vector field in the forward direction.

P. P. Sun et al. (2008) [27] introduced a 3D surface reconstruction method from serial parallel slices of cross-sections. Depending on the contour shape, they obtain a series of regular points; the density of the points reflects the change of the contour shape, and corresponding knot vectors and control points are then generated from these points. A flexible space is defined to compute the knot vector of the surface. For the multi-connectivity problem, image division is used to translate it into several single-connectivity problems. The centroid of each slice and of each closed contour is generated to establish the relationship between the contours.

S. Prakoonwit & R. Benjamin (2009) [28] described a method for reconstructing frontier points, contour-generator networks and surfaces of 3D objects from a small number (e.g. 10) of photographic images taken at equally distributed projection directions with full prior knowledge of the camera configurations. The method has been tested and has shown that it is capable of optimally reconstructing networks of contour generators and surfaces to represent 3D objects.
W. J. Wang et al. (2009) [29] studied methods for 3D bone reconstruction and obtained an appropriate method based on bone SCT (Spiral Computed Tomography) images. The SCT image is preprocessed, binarized, and its contour extracted by an SCT image abstraction system. This method can process outline data collection, reverse the bone's 3D surface and reconstruct a 3D bionic model. The example in their work shows how the 3D model of a bone is obtained successfully through plane reconstruction. An image-processing algorithm is concluded, thereby providing the necessary information for 3D reconstruction from bone SCT images.

B. S. Deng et al. (2011) [30] proposed a novel three-dimensional reconstruction framework from wide-baseline images based on point and line features. After detecting and matching features, the relations between discrete images are computed and refined according to multi-view geometric constraints, and both the structure of the scene and the motion of the cameras are retrieved, employing a procedure of Euclidean reconstruction based on approximate camera internal parameters and bundle adjustment. Based on the retrieved motion and the correspondence of line features, a 3D line reconstruction scheme was put forward to help obtain the regular structure and topology of the scene.
2-2 Summary

In the published works reviewed above on three-dimensional surface reconstruction, most researchers programmed cameras to capture images from different viewpoints and used mathematical methods to determine the elevation of each section, while some dealt with images from magnetic resonance imaging (MRI), laser scanning, or computed tomography (CT) scans. Many relied on triangulation to recreate three-dimensional objects, and others used their own specially adapted programs. Furthermore, the captured data have been processed using spline interpolation, nearest interpolation and least-squares fitting to enhance the reconstructed surfaces.

This work relies primarily on capturing two-dimensional images in the shape of parallel slices of the model. It differs from the work of other researchers in the field of reconstructing three-dimensional surfaces from two-dimensional images in that it focuses on objects with sculptured surfaces in the field of production engineering, and on CAD through image processing of two-dimensional images captured with a single digital camera from a single viewpoint. A proposed mechanical system has been prepared, implemented and used in this work.
Chapter (3)
Theoretical Considerations of 3D Model Reconstruction

3-1 Introduction

The data have been obtained by converting the two-dimensional images captured from the models under discussion into digital data in the form of multiple points with (x, y) coordinates. During reconstruction of the three-dimensional body, differences were found between these data and the data of the original shapes, especially towards the edges of the model, where the surface is subdivided into curves. Three mathematical interpolation and fitting methods were therefore investigated to improve the shapes of the reconstructed three-dimensional bodies. The three general methods investigated as the core of the developed procedure are:
Least-squares fitting.
Spline interpolation.
Nearest interpolation.
3-2 Least Squares Fitting Method The method of least squares is an alternative to interpolation for fitting a function to a set of points. Unlike interpolation, it does not require the fitted function to intersect each point. [31] The method of least squares is probably best known for its use in statistical regression, but it is used in many contexts unrelated to statistics.
The term least squares describes a frequently used approach to solving overdetermined or inexactly specified systems of equations in an approximate sense. Instead of solving the equations exactly, the sum of the squares of the residuals is minimized.

3-2-1 Surface Fitting by Using 2D Least Squares

Multiple regression estimates the dependent variable, which may be affected by more than one control parameter (independent variable), or there may be more than one control parameter being changed at the same time [32].

3-2-2 Linear 2D Least Squares

The assumption that the dependent z component of the data depends linearly on the x and y components is represented as:
\[ z = a_0 + a_1 x_i + a_2 y_i \]  (3-1)

For a given data set (x_1, y_1, z_1), (x_2, y_2, z_2), ..., (x_n, y_n, z_n), where n ≥ 3, the best-fitting surface f(x, y) has the least squared error, i.e.

\[ S = \sum_{i=1}^{n} [z_i - f(x_i, y_i)]^2 = \sum_{i=1}^{n} [z_i - (a_0 + a_1 x_i + a_2 y_i)]^2 = \min \]  (3-2)

where a_0, a_1 and a_2 are unknown coefficients while all x_i, y_i and z_i are given. To obtain the least-squares error, the unknown coefficients a_0, a_1 and a_2 must yield zero first derivatives:

\[ \frac{\partial S}{\partial a_0} = -2 \sum_{i=1}^{n} [z_i - (a_0 + a_1 x_i + a_2 y_i)] = 0 \]
\[ \frac{\partial S}{\partial a_1} = -2 \sum_{i=1}^{n} x_i [z_i - (a_0 + a_1 x_i + a_2 y_i)] = 0 \]  (3-3)
\[ \frac{\partial S}{\partial a_2} = -2 \sum_{i=1}^{n} y_i [z_i - (a_0 + a_1 x_i + a_2 y_i)] = 0 \]

Expanding the above equations, we have
\[ \sum_{i=1}^{n} z_i = a_0\, n + a_1 \sum_{i=1}^{n} x_i + a_2 \sum_{i=1}^{n} y_i \]
\[ \sum_{i=1}^{n} x_i z_i = a_0 \sum_{i=1}^{n} x_i + a_1 \sum_{i=1}^{n} x_i^2 + a_2 \sum_{i=1}^{n} x_i y_i \]  (3-4)
\[ \sum_{i=1}^{n} y_i z_i = a_0 \sum_{i=1}^{n} y_i + a_1 \sum_{i=1}^{n} x_i y_i + a_2 \sum_{i=1}^{n} y_i^2 \]

The unknown coefficients a_0, a_1 and a_2 can be obtained by solving the above equations. In matrix form, equation (3-4) can be represented as: [33]

\[ \begin{bmatrix} n & \sum x_i & \sum y_i \\ \sum x_i & \sum x_i^2 & \sum x_i y_i \\ \sum y_i & \sum x_i y_i & \sum y_i^2 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} \sum z_i \\ \sum x_i z_i \\ \sum y_i z_i \end{bmatrix} \]  (3-5)
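As an illustration of eqs. (3-4)/(3-5), the normal equations for the best-fit plane can be built and solved in a few lines. The following is a hedged pure-Python sketch, not the dissertation's own implementation (which uses MATLAB); the function names are illustrative.

```python
# Illustrative sketch: build the 3x3 normal equations of eq. (3-5) for the
# plane z = a0 + a1*x + a2*y and solve them by Gaussian elimination.

def fit_plane(pts):
    """pts: list of (x, y, z). Returns (a0, a1, a2) minimising eq. (3-2)."""
    n = len(pts)
    Sx = sum(x for x, y, z in pts)
    Sy = sum(y for x, y, z in pts)
    Sz = sum(z for x, y, z in pts)
    Sxx = sum(x * x for x, y, z in pts)
    Syy = sum(y * y for x, y, z in pts)
    Sxy = sum(x * y for x, y, z in pts)
    Sxz = sum(x * z for x, y, z in pts)
    Syz = sum(y * z for x, y, z in pts)
    A = [[n,  Sx,  Sy ],          # coefficient matrix of eq. (3-5)
         [Sx, Sxx, Sxy],
         [Sy, Sxy, Syy]]
    b = [Sz, Sxz, Syz]
    return solve3(A, b)

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

# Noise-free samples of the plane z = 1 + 2x + 3y are recovered exactly.
pts = [(x, y, 1 + 2 * x + 3 * y) for x in range(4) for y in range(4)]
a0, a1, a2 = fit_plane(pts)
```

With exact plane data, the recovered coefficients equal (1, 2, 3) up to floating-point rounding.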
The general polynomial equation can be represented as:

\[ z_i \approx S(x_i, y_i) = \sum_{j=0}^{m} a_j\, p_j(x_i, y_i) \]  (3-6)

where a_j = a_0, a_1, ..., a_m are coefficients to be determined by adjustment using the 2D least-squares method, and p_j are appropriately chosen functions of x and y called the basis functions. At each data point, the difference between the surface elevation Z_f(x_i, y_i) and the fitted value S(x_i, y_i) gives the residual R(x_i, y_i):

\[ R(x_i, y_i) = Z_f(x_i, y_i) - S(x_i, y_i) = Z_f(x_i, y_i) - \sum_{j=0}^{m} a_j\, p_j(x_i, y_i) \]  (3-7)

In general form, the 2D least-squares condition for surface fitting can be represented as: [34]

\[ \sum_i R_i^2 = \sum_i (Z_{fi} - Z_i)^2 = \text{minimum} \]  (3-8)
3-2-3 Quadratic 2D Least Squares

The two independent variables x and y with one dependent variable z in the quadratic case can be represented as:

\[ Z = f(x, y) = a_0 + a_1 x_i + a_2 y_i + a_3 x_i^2 + a_4 x_i y_i + a_5 y_i^2 \]  (3-9)

The best values of (a_0, a_1, a_2, a_3, a_4, a_5) are determined by minimizing the sum of the squared residual errors:

\[ s_r = \sum_{i=1}^{n} (z_i - a_0 - a_1 x_i - a_2 y_i - a_3 x_i^2 - a_4 x_i y_i - a_5 y_i^2)^2 \]  (3-10)

Differentiating equation (3-10) with respect to each coefficient (a_0, ..., a_5), dividing by −2 and setting the result equal to zero for minimum error gives:

\[ \frac{\partial s_r}{\partial a_0} = -2 \sum_{i=1}^{n} (z_i - a_0 - a_1 x_i - a_2 y_i - a_3 x_i^2 - a_4 x_i y_i - a_5 y_i^2) = 0 \]
\[ \vdots \]
\[ \frac{\partial s_r}{\partial a_5} = -2 \sum_{i=1}^{n} y_i^2 (z_i - a_0 - a_1 x_i - a_2 y_i - a_3 x_i^2 - a_4 x_i y_i - a_5 y_i^2) = 0 \]  (3-11)
The quadratic 2D least-squares equations for surface fitting can be compacted in matrix form as: [35]

\[
\begin{bmatrix}
n & \sum x_i & \sum y_i & \sum x_i^2 & \sum x_i y_i & \sum y_i^2 \\
\sum x_i & \sum x_i^2 & \sum x_i y_i & \sum x_i^3 & \sum x_i^2 y_i & \sum x_i y_i^2 \\
\sum y_i & \sum x_i y_i & \sum y_i^2 & \sum x_i^2 y_i & \sum x_i y_i^2 & \sum y_i^3 \\
\sum x_i^2 & \sum x_i^3 & \sum x_i^2 y_i & \sum x_i^4 & \sum x_i^3 y_i & \sum x_i^2 y_i^2 \\
\sum x_i y_i & \sum x_i^2 y_i & \sum x_i y_i^2 & \sum x_i^3 y_i & \sum x_i^2 y_i^2 & \sum x_i y_i^3 \\
\sum y_i^2 & \sum x_i y_i^2 & \sum y_i^3 & \sum x_i^2 y_i^2 & \sum x_i y_i^3 & \sum y_i^4
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \end{bmatrix}
=
\begin{bmatrix} \sum z_i \\ \sum x_i z_i \\ \sum y_i z_i \\ \sum x_i^2 z_i \\ \sum x_i y_i z_i \\ \sum y_i^2 z_i \end{bmatrix}
\]  (3-12)
3-2-4 Cubic 2D Least Squares

In the same sequence, two independent variables x, y and one dependent variable z in the cubic case can be represented as: [36]

\[ z = f(x, y) = a_0 + a_1 x_i + a_2 y_i + a_3 x_i^2 + a_4 x_i y_i + a_5 y_i^2 + a_6 x_i^2 y_i + a_7 x_i y_i^2 + a_8 x_i^3 + a_9 y_i^3 \]  (3-13)

The best values of (a_0, ..., a_9) are determined by minimizing the sum of the squared residual errors:

\[ s_r = \sum_{i=1}^{n} \big( z_i - a_0 - a_1 x_i - a_2 y_i - a_3 x_i^2 - a_4 x_i y_i - a_5 y_i^2 - a_6 x_i^2 y_i - a_7 x_i y_i^2 - a_8 x_i^3 - a_9 y_i^3 \big)^2 \]  (3-14)

Writing r_i for the residual inside the parentheses of eq. (3-14), setting the partial derivative with respect to each coefficient to zero gives ten equations:

\[ \frac{\partial s_r}{\partial a_0} = -2 \sum_{i=1}^{n} r_i = 0, \quad \ldots, \quad \frac{\partial s_r}{\partial a_9} = -2 \sum_{i=1}^{n} y_i^3\, r_i = 0 \]  (3-15)

The cubic 2D least-squares equations for surface fitting can be compacted in matrix form as: [35]

\[ M\,a = b, \qquad M_{jk} = \sum_{i=1}^{n} p_j(x_i, y_i)\, p_k(x_i, y_i), \qquad b_j = \sum_{i=1}^{n} p_j(x_i, y_i)\, z_i \]  (3-16)

where a = (a_0, ..., a_9)ᵀ and the basis functions are p = (1, x, y, x², xy, y², x²y, xy², x³, y³). The entries of the symmetric 10×10 matrix M are therefore sums of monomials up to degree six, e.g. M_{00} = n, M_{01} = Σx_i, and M_{99} = Σy_i⁶, and the right-hand side is (Σz_i, Σx_i z_i, Σy_i z_i, Σx_i² z_i, Σx_i y_i z_i, Σy_i² z_i, Σx_i² y_i z_i, Σx_i y_i² z_i, Σx_i³ z_i, Σy_i³ z_i)ᵀ.
This formulation of 2D least squares is manipulated and exploited to generate the adopted algorithm for reconstructing 3D surfaces in this work.
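The same normal-equation pattern extends from the linear case to the quadratic and cubic fits of eqs. (3-12) and (3-16): the coefficient matrix is always the Gram matrix of the chosen basis functions. The following pure-Python sketch is illustrative only (hypothetical names, not the dissertation's code); it uses the quadratic basis of eq. (3-9).

```python
# Illustrative sketch: least-squares surface fit z ~ sum_j a_j * p_j(x, y)
# for an arbitrary basis, via the Gram ("normal equation") matrix of (3-16).

def lstsq_surface(pts, basis):
    m = len(basis)
    # Gram matrix M_jk = sum_i p_j(x_i, y_i) * p_k(x_i, y_i)
    A = [[sum(basis[j](x, y) * basis[k](x, y) for x, y, z in pts)
          for k in range(m)] for j in range(m)]
    # right-hand side b_j = sum_i p_j(x_i, y_i) * z_i
    b = [sum(basis[j](x, y) * z for x, y, z in pts) for j in range(m)]
    return gauss_solve(A, b)

def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting for an n x n system."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

quadratic_basis = [
    lambda x, y: 1.0, lambda x, y: x, lambda x, y: y,
    lambda x, y: x * x, lambda x, y: x * y, lambda x, y: y * y,
]

# Samples of z = 2 + x^2 - y^2 are reproduced exactly by the quadratic fit.
pts = [(x, y, 2 + x * x - y * y) for x in range(5) for y in range(5)]
coeffs = lstsq_surface(pts, quadratic_basis)
```

Swapping in the ten cubic basis functions of eq. (3-13) yields the 10×10 system of eq. (3-16) with no other changes.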
3-3 Spline Interpolation Method

Spline methods interpolate between the given data points by joining together several low-order polynomials. Various continuity requirements can be specified at the data points to impose various degrees of smoothness on the resulting curve. The order of continuity becomes important when a complex curve is modeled by several curve segments pieced together end to end [37]. The concept of the spline originated from the drafting technique of using a thin, flexible strip (called a spline) to draw smooth curves through a set of points. At the end points, the spline straightens out; this is called a natural spline. An alternative approach is to apply lower-order polynomials in piecewise fashion to subsets of the data points. Such connecting polynomials are called spline functions. For example, third-order curves employed to connect each pair of data points are called cubic splines. These functions can be constructed so that the connections between adjacent cubic equations are visually smooth. [33]
Figure (3-1) Natural spline [33]
In a spline interpolation, each interval between data points is represented on the graph by a simple function. The simplest spline is obtained by connecting the data with straight lines; since a linear function is the simplest of all, linear interpolation is the simplest form of spline. The next simplest function is quadratic: placing a quadratic function on each interval makes the graph considerably smoother, and cubic or 4th-degree functions are smoother still. There is an almost universal consensus that cubic is the optimal degree for splines [38].

3-3-1 First Order Splines

For n data points (i = 1, 2, ..., n) there are n − 1 intervals. Each interval i has its own spline function s_i(x). For linear splines, each function is merely the straight line connecting the two points at each end of the interval, which is formulated as:

\[ s_i(x) = a_i + b_i (x - x_i) \]  (3-17)

where a_i is the intercept, defined as:

\[ a_i = f_i \]  (3-18)

and b_i is the slope of the straight line connecting the points:

\[ b_i = \frac{f_{i+1} - f_i}{x_{i+1} - x_i} \]  (3-19)

where f_i is shorthand for f(x_i). Substituting Eqs. (3-18) and (3-19) into Eq. (3-17) gives:

\[ s_i(x) = f_i + \frac{f_{i+1} - f_i}{x_{i+1} - x_i} (x - x_i) \]  (3-20)
Figure (3-2) First order spline [33]
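The first-order spline of eqs. (3-17) to (3-20) can be evaluated directly; the following pure-Python example is illustrative, with a hypothetical function name.

```python
# Minimal sketch of linear-spline evaluation, eq. (3-20): on each interval
# [x_i, x_{i+1}] the interpolant is the straight line through the endpoints.

def linear_spline(xs, fs, x):
    """xs: sorted knots, fs: values f(x_i); evaluates eq. (3-20) at x."""
    i = 0
    while i < len(xs) - 2 and x > xs[i + 1]:   # locate the interval
        i += 1
    slope = (fs[i + 1] - fs[i]) / (xs[i + 1] - xs[i])   # b_i of eq. (3-19)
    return fs[i] + slope * (x - xs[i])                  # a_i + b_i (x - x_i)

xs = [0.0, 1.0, 2.0, 4.0]
fs = [0.0, 2.0, 2.0, 8.0]
v = linear_spline(xs, fs, 3.0)   # halfway along the line from 2 to 8
```

On the interval [2, 4] the line from 2 to 8 evaluated at x = 3 gives 5.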
3-3-2 Quadratic Spline A quadratic spline is a differentiable piecewise quadratic function. Many problems in numerical analysis and optimization literature can be reformulated as unconstrained minimizations of quadratic splines.[39] To ensure that the nth derivatives are continuous at the knots, a spline of at least n + 1 order must be used. Third-order polynomials or cubic splines that ensure continuous first and second derivatives are most frequently used in practice. Although third and higher derivatives can be discontinuous when using cubic splines, they usually cannot be detected visually and consequently are ignored.
Figure (3-3) Second order spline [33]
3-3-3 Cubic Spline

The most common piecewise-polynomial approximation uses cubic polynomials between each successive pair of nodes and is called cubic spline interpolation. A general cubic polynomial involves four constants, so there is sufficient flexibility in the cubic spline procedure to ensure that the interpolant is not only continuously differentiable on the interval, but also has a continuous second derivative. [40] Cubic spline interpolation is a useful technique for interpolating between known data points due to its stable and smooth characteristics. [41] While data of a particular size presents many options for the order of the spline functions, cubic splines are preferred because they provide the simplest representation that exhibits the desired appearance of smoothness. In general, the ith spline function for a cubic spline can be written as:

Si(x) = ai + bi(x − xi) + ci(x − xi)² + di(x − xi)³ ................(3-21)

For n data points, there are n − 1 intervals and thus 4(n − 1) unknowns to evaluate to solve all the spline-function coefficients. One condition requires that the spline function go through the first and last point of each interval, yielding 2(n − 1) equations of the form:

Si(xi) = fi → ai = fi ................(3-22)
Si(xi+1) = fi+1 → ai + bi(xi+1 − xi) + ci(xi+1 − xi)² + di(xi+1 − xi)³ = fi+1 ................(3-23)

Another requires that the first derivative be continuous at each interior point, yielding n − 2 equations of the form:

S′i(xi+1) = S′i+1(xi+1) → bi + 2ci(xi+1 − xi) + 3di(xi+1 − xi)² = bi+1 ................(3-24)

A third requires that the second derivative be continuous at each interior point, yielding n − 2 equations of the form:

S″i(xi+1) = S″i+1(xi+1) → 2ci + 6di(xi+1 − xi) = 2ci+1 ................(3-25)
Figure (3-4) Cubic spline [33]
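A natural cubic spline satisfying conditions (3-22) to (3-25), together with the natural end conditions S″ = 0 at both ends, can be computed by solving a tridiagonal system for the second derivatives at the knots. The following pure-Python sketch is illustrative only; it is not the dissertation's own routine (which relies on MATLAB's spline), and the function names are hypothetical.

```python
# Sketch of natural cubic-spline interpolation: solve the standard
# tridiagonal system for the second derivatives M_i (Thomas algorithm),
# then evaluate the piecewise cubic on the interval containing x.

def natural_cubic_spline(xs, ys):
    """Return the second derivatives M_i of the natural cubic spline."""
    n = len(xs)
    h = [xs[i + 1] - xs[i] for i in range(n - 1)]
    a = [0.0] * n; b = [1.0] * n; c = [0.0] * n; d = [0.0] * n
    for i in range(1, n - 1):          # interior continuity conditions
        a[i] = h[i - 1]
        b[i] = 2.0 * (h[i - 1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    for i in range(1, n):              # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    M = [0.0] * n
    M[n - 1] = d[n - 1] / b[n - 1]
    for i in range(n - 2, -1, -1):     # back substitution
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]
    return M

def spline_eval(xs, ys, M, x):
    i = 0
    while i < len(xs) - 2 and x > xs[i + 1]:
        i += 1
    h = xs[i + 1] - xs[i]
    t0, t1 = xs[i + 1] - x, x - xs[i]
    return ((M[i] * t0 ** 3 + M[i + 1] * t1 ** 3) / (6 * h)
            + (ys[i] / h - M[i] * h / 6) * t0
            + (ys[i + 1] / h - M[i + 1] * h / 6) * t1)

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 0.0, 1.0]
M = natural_cubic_spline(xs, ys)   # M[0] = M[3] = 0 (natural conditions)
```

By construction the spline passes through every knot, and the first two end conditions make the curve straighten out at the ends, exactly the "natural spline" described in Section 3-3.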
3-3-4 1D Spline Interpolation Function

The format of the function used to represent the 1D spline interpolation method is taken from the MATLAB software, in the form:

yi = interp1(x, y, xi, 'spline')

where yi is the interpolated value, x is a vector with the horizontal coordinates of the input data points (independent variable), y is a vector with the vertical coordinates of the input data points (dependent variable), xi is the horizontal coordinate of the interpolation point, and 'spline' is the method of interpolation. [42] By applying the 1D spline interpolation function to the sine equation y = sin(x), figure (3-5) is obtained using the MATLAB software.
Figure (3-5) Curve of the sine function using the 1D spline interpolation function (cubic spline, 11 control points).
3-3-5 2D Spline Interpolation Function

The standard MATLAB environment contains a function, spline, that works with irregularly spaced data. The MATLAB function interp1 performs 1D interpolation using various methods, including linear and cubic interpolation, while the MATLAB function interp2 performs 2D interpolation. [43] In MATLAB the general form of the 2D interpolation function is:

ZI = interp2(X, Y, Z, XI, YI, 'spline')

which returns a matrix ZI containing elements corresponding to the elements of XI and YI, determined by interpolation within the two-dimensional function specified by matrices X, Y, and Z. X and Y must be monotonic and have the same format; matrices X and Y specify the points at which the data Z is given. MATLAB also has a built-in function for three-dimensional piecewise interpolation:

vi = interp3(x, y, z, v, xi, yi, zi, 'method')

where 'method' is a string containing the desired method: 'nearest', 'linear', 'spline', or 'cubic'. For 2D interpolation the inputs must be either vectors or same-size matrices, and for 3D interpolation the inputs must be either vectors or same-size 3D arrays. [33]
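For readers without MATLAB, the behaviour of interp2 on a regular grid can be approximated in pure Python. The sketch below uses bilinear rather than spline interpolation to stay short; all names are illustrative, not MATLAB's or the dissertation's.

```python
# Illustrative bilinear analogue of MATLAB's interp2 on a monotonic grid:
# blend the four surrounding grid values by their fractional distances.

def interp2_bilinear(xs, ys, Z, xi, yi):
    """xs, ys: monotonic grid axes; Z[j][i] = value at (xs[i], ys[j])."""
    i = 0
    while i < len(xs) - 2 and xi > xs[i + 1]:
        i += 1
    j = 0
    while j < len(ys) - 2 and yi > ys[j + 1]:
        j += 1
    tx = (xi - xs[i]) / (xs[i + 1] - xs[i])
    ty = (yi - ys[j]) / (ys[j + 1] - ys[j])
    z00, z10 = Z[j][i], Z[j][i + 1]
    z01, z11 = Z[j + 1][i], Z[j + 1][i + 1]
    return (z00 * (1 - tx) * (1 - ty) + z10 * tx * (1 - ty)
            + z01 * (1 - tx) * ty + z11 * tx * ty)

xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0]
Z = [[0.0, 1.0, 2.0],      # z = x + 10*y sampled on the grid
     [10.0, 11.0, 12.0]]
z = interp2_bilinear(xs, ys, Z, 1.5, 0.5)   # exact for a bilinear field
```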
3-4 Nearest-neighbor Interpolation

The simplest interpolation method is the nearest-neighbor method, whose kernel has a rectangular (box) shape in the space domain, as shown in Fig. (3-6). It can be expressed as: [44]

\[ h_0(x) = \begin{cases} 1 & |x| < 0.5 \\ 0 & \text{elsewhere} \end{cases} \]  (3-26)

Nearest-neighbor interpolation (also known as proximal interpolation or, in some contexts, point sampling) is a simple method of multivariate interpolation in one or more dimensions. Nearest-neighbor interpolations are not as accurate as interpolations that use the same amount of derivative information. The fact that nearest-neighbor interpolation is essentially an extrapolation is shown by its relatively high error near the center of each interval. [44]
Figure (3-6) Nearest-neighbor Method
3-4-1 Nearest-neighbor Interpolation Function

The following MATLAB commands are used to produce a nearest-neighbor interpolation:

yi = interp1(x, y, xi, 'nearest')          for 1D interpolation
ZI = interp2(X, Y, Z, XI, YI, 'nearest')   for 2D interpolation
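An equivalent nearest-neighbour lookup is easy to sketch in pure Python (illustrative, not the dissertation's code): each query point simply takes the value of the closest sample.

```python
# Minimal sketch of 1D nearest-neighbour interpolation (the box kernel h0
# of eq. (3-26)): return the value at the sample point closest to x.

def interp_nearest(xs, ys, x):
    best = min(range(len(xs)), key=lambda i: abs(xs[i] - x))
    return ys[best]

xs = [0.0, 1.0, 2.0, 3.0]
ys = [5.0, 7.0, 4.0, 9.0]
v = interp_nearest(xs, ys, 1.8)   # the closest knot is x = 2
```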
3-5 Statistical Formulation

3-5-1 Error: For each point of a surface, the error function E_f is defined as the absolute difference between the original elevation Z_fi of the function and the value Z_i obtained for the captured model by the spline interpolation, nearest interpolation or least-squares fitting method. The error function is defined as: [39]

\[ E_{fi} = |Z_{fi} - Z_i| \]  (3-27)
3-5-2 Maximum Error: represents the maximum value of the function E_fi.

3-5-3 Average: this measure of location, called the mean, can be used with both discrete and continuous data. The average is equal to the sum of all observed values divided by the total number of observations; its value will usually not equal any one of the individual observed values. The average formula is: [45]

\[ E_a = \frac{1}{n} \sum_{i=1}^{n} E_{fi} \]  (3-28)

where E_a represents the average and n is the number of surface points.

3-5-4 Standard Deviation: the standard deviation is the most common measure of variability, measuring the spread of the data set and the relationship of the mean to the rest of the data. If the data points are close to the mean, indicating fairly uniform responses, the standard deviation will be small. Conversely, if many data points are far from the mean, indicating a wide variance in the responses, the standard deviation will be large. If all data values are equal, the standard deviation is zero. The standard deviation is calculated using the following formula:

\[ \sigma = \sqrt{\frac{\sum_{i=1}^{n} (E_{fi} - E_a)^2}{n - 1}} \]  (3-29)

where σ is the standard deviation, E_fi is the absolute value of the error function, E_a is the average, and n is the number of surface points.

3-5-5 Percentage of Error: the error percentage gives further information for the statistical analysis of the surface, and is determined by the formula: [40]

\[ E_{\text{percentage}} = \frac{1}{n - 1} \sum_{i=1}^{n} \frac{|Z_{fi} - Z_i|}{Z_i} \times 100 \]  (3-30)
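The four measures of this section, eqs. (3-27) to (3-30), can be computed together. The following pure-Python sketch is illustrative, and it assumes, as eq. (3-30) is written, that the percentage error is normalised by the elevations Z_i and divided by n − 1:

```python
# Illustrative sketch of Section 3-5: error (3-27), maximum error, average
# (3-28), standard deviation (3-29) and percentage of error (3-30).
import math

def surface_stats(Zf, Z):
    n = len(Zf)
    E = [abs(zf - z) for zf, z in zip(Zf, Z)]                 # eq. (3-27)
    avg = sum(E) / n                                          # eq. (3-28)
    sd = math.sqrt(sum((e - avg) ** 2 for e in E) / (n - 1))  # eq. (3-29)
    pct = sum(abs(zf - z) / z * 100
              for zf, z in zip(Zf, Z)) / (n - 1)              # eq. (3-30)
    return max(E), avg, sd, pct

Zf = [10.0, 20.0, 30.0, 40.0]   # reference elevations
Z  = [10.5, 19.0, 30.0, 41.0]   # reconstructed elevations
emax, ea, sd, pct = surface_stats(Zf, Z)
```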
3-6 Digital Image Processing

Digital image processing is an ever-expanding and dynamic area with applications reaching into everyday life, such as medicine, space exploration, surveillance, authentication and automated industrial inspection, among many other areas. Applications such as these involve different processes like image enhancement and object detection. [46]

An image may be defined as a two-dimensional function f(x, y), where x and y are spatial coordinates and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, or pixels; pixel is the term used most widely to denote the elements of a digital image. [47]

3-6-1 Colored Image

A color image is made up of pixels, each of which holds three numbers corresponding to the red, green, and blue levels of the image at a particular location. Red, green, and blue (sometimes referred to as RGB) are the primary colors for mixing light; these so-called additive primary colors are different from the subtractive primary colors used for mixing paints (cyan, magenta, and yellow). Any color can be created by mixing the correct
amounts of red, green, and blue light. Assuming 256 levels for each primary, each color pixel can be stored in three bytes (24 bits) of memory. This corresponds to roughly 16.7 million different possible colors. Figure (3-7) shows a color image. [48]
Figure ( 3-7 ) A color image
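The 24-bit storage claim above is easy to verify: packing three 8-bit channels into one integer gives 256³ distinct colours. A small illustrative sketch (the packing order chosen here is an assumption, not part of the text):

```python
# Illustrative 24-bit RGB packing: one byte per channel, red in the
# most-significant byte.

def pack_rgb(r, g, b):
    return (r << 16) | (g << 8) | b

n_colors = 256 ** 3            # 16,777,216 -- "roughly 16.7 million"
white = pack_rgb(255, 255, 255)
```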
3-6-2 Greyscale Image

A greyscale image is a two-dimensional array of values indicating the brightness at each point. The brightness values are generally stored as a value between 0 (black) and 255 (white); values in between are different shades of gray [49]. A greyscale image measures light intensity only: each pixel is a scalar proportional to the brightness. The minimum brightness is called black, and the maximum brightness is called white [50]. Figure (3-8) shows a greyscale image and its gradient shades.
Figure (3-8) Left: a greyscale image; right: the different shades of gray
3-6-3 Binary Image

A binary image is a two-dimensional array of binary pixels. If the value is 0, the pixel is black; if the value is 1, the pixel is white [50]. Figure (3-9) shows a binary image and its zero–one representation.
Figure (3-9) Right: the zero–one representation; left: the binary image
3-6-4 Thresholding

Thresholding is a non-linear operation that converts a gray-scale image into a binary image, where the two levels are assigned to pixels that are below or above the specified threshold value.
In many vision applications it is useful to separate the regions of the image corresponding to objects of interest from the regions corresponding to the background. Thresholding often provides an easy and convenient way to perform this segmentation on the basis of the different intensities or colors in the foreground and background regions of an image. In addition, it is often useful to see which areas of an image consist of pixels whose values lie within a specified range, or band, of intensities (or colors); thresholding can be used for this as well [51]. A gradient image may contain true edges, but some edges may be caused by noise or by color variations due to uneven surfaces. The simplest way to discern between these is to use a threshold, so that only edges stronger than a certain value are preserved [52]. Creating a black-and-white image from a grayscale image consists of setting exactly those pixels to white whose value is above a given threshold and setting the other pixels to black. Thresholding thus converts a grayscale input image to a bi-level image by using an optimal threshold. Its purpose is to extract those pixels from an image which represent an object (either text or other line-image data such as graphs or maps). Though the information is binary, the pixels represent a range of intensities; the objective of binarization is to mark pixels that belong to true foreground regions with a single intensity and background regions with different intensities [53].
Thresholding is a technique used for segmentation, which separates an image into two meaningful regions, foreground and background, through a selected threshold value T. If the image is a grey image, T is a positive real number in the range [0, ..., K], where K ∈ ℝ≥0. Thresholding may therefore be viewed as an operation that involves tests against a value T [54]. Figure (3-10) shows an image and the effect of the threshold value.
Figure (3-10) Thresholding [47]
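Thresholding as described above is a one-line operation per pixel. A minimal pure-Python sketch (illustrative names; a real implementation would operate on image arrays):

```python
# Illustrative thresholding: every grey value above T maps to white (1),
# everything else to black (0), producing a bi-level image.

def threshold(image, T):
    return [[1 if px > T else 0 for px in row] for row in image]

grey = [[ 12, 200,  90],
        [255,  30, 140]]
binary = threshold(grey, 128)
```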
3-6-4-1 Thresholding Methods

We categorize the thresholding methods in six groups according to the information they exploit. These categories are:
1. Histogram shape-based methods, where, for example, the peaks, valleys and curvatures of the smoothed histogram are analyzed.
2. Clustering-based methods, where the gray-level samples are clustered in two parts as background and foreground object, or alternately are modeled as a mixture of two Gaussians.
3. Entropy-based methods, resulting in algorithms that use the entropy of the foreground and background regions, the cross-entropy between the original and binarized image, etc.
4. Object attribute-based methods, which search for a measure of similarity between the gray-level and the binarized images, such as fuzzy shape similarity, edge coincidence, etc.
5. Spatial methods, which use higher-order probability distributions and/or correlation between pixels.
6. Local methods, which adapt the threshold value at each pixel to the local image characteristics [55].

3-6-5 Edge Detection

Generally, an edge is defined as the boundary pixels that connect two separate regions with changing image amplitude attributes, such as different constant luminance and tristimulus values, in an image [56, 57, 58]. Edges often occur at image locations representing object boundaries; edge detection is extensively used in image segmentation when we want to divide the image into areas corresponding to different objects. Representing an
image by its edges has the further advantage that the amount of data is reduced significantly while retaining most of the image information.[46] Classical methods of edge detection involve convolving the image with an operator (a 2-D filter), which is constructed to be sensitive to large gradients in the image while returning values of zero in uniform regions.[59] There are an extremely large number of edge detection operators available, each designed to be sensitive to certain types of edges. Variables involved in the selection of an edge detection operator include edge orientation, noise environment and edge structure. The geometry of the operator determines a characteristic direction in which it is most sensitive to edges. Operators can be optimized to look for horizontal, vertical, or diagonal edges. Edge detection is difficult in noisy images, since both the noise and the edges contain high frequency content. Attempts to reduce the noise result in blurred and distorted edges.[59]
3-6-5-1 Edge Detection Techniques (Operators or Filters)

The goal is to find edges in the image. An edge is a place in the image with a strong intensity contrast. Detecting edges significantly reduces the amount of data (from millions of pixels to only hundreds of edge pixels). This filtering out of unimportant information, while at the same time preserving important structural properties, is generally a very helpful first step in the analysis of an image. Edges are often used in segmentation because they generally occur at natural object boundaries. Edge detection is generally performed by creating a mask (kernel or filter) which outputs the desired information (usually something about the gradient). The constant in front of the mask simply serves to normalize
the value to [0, 1] [50]. There are many common types of operators that one can choose to detect the boundaries in images, such as the Sobel operator, the Roberts Cross operator, the Prewitt operator, and the Canny edge detector. Figure (3-11) shows an original image and its edge image.
Figure (3-11) Right is Original Image and left is Edge Image[50]
3-6-5-2 Canny Edge Detection
The Canny edge detector is one of the most commonly used image processing tools, detecting edges in a very robust manner. It is a multi-step process, which can be implemented on the Graphics Processing Unit (GPU) as a sequence of filters, and it is the current standard in edge detection that is widely used around the world. Canny treated edge detection as a signal processing problem and aimed to design the 'optimal' edge detector: he formally specified an objective function to be optimized and used it to design the operator [50]. The objective function was designed to achieve the following optimization constraints:
1. Maximise the signal to noise ratio to give good detection. This favours the marking of true positives.
2. Achieve good localisation to accurately mark edges.
3. Minimise the number of responses to a single edge. This favours the identification of true negatives, that is, non-edges are not marked.
In the simplest form of the edge detection technique the procedure is as follows:
1. Convolve the image with a two-dimensional Gaussian filter to smooth it.
2. Differentiate the image in two orthogonal directions.
3. Calculate the gradient amplitude and direction.
4. Perform non-maximal suppression. Any gradient value that is not a local peak is set to zero. The gradient direction is used in this process.
5. Threshold these edges to eliminate 'insignificant' edges [60].
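Step 4 is the least obvious of the five. The following is a minimal numpy sketch of non-maximal suppression, assuming the gradient magnitude and direction (in degrees) have already been computed; the function name `nonmax_suppress`, the four-direction quantization, and the zero-padded border handling are illustrative choices, not the MATLAB implementation used in this work:

```python
import numpy as np

def nonmax_suppress(mag, ang_deg):
    """Canny step 4: keep a pixel only if its gradient magnitude is a
    local peak along the gradient direction (quantized to 4 bins)."""
    h, w = mag.shape
    p = np.pad(mag, 1)            # zero padding so border pixels compare to 0
    out = np.zeros_like(mag)
    # the two neighbours to compare against, per quantized direction
    offs = {0: ((0, 1), (0, -1)),       # horizontal gradient: left/right
            45: ((-1, 1), (1, -1)),
            90: ((1, 0), (-1, 0)),      # vertical gradient: up/down
            135: ((-1, -1), (1, 1))}
    for r in range(h):
        for c in range(w):
            a = ang_deg[r, c] % 180.0
            # nearest quantized direction (angles wrap at 180 degrees)
            d = min(offs, key=lambda k: min(abs(a - k), 180 - abs(a - k)))
            (dr1, dc1), (dr2, dc2) = offs[d]
            n1 = p[r + 1 + dr1, c + 1 + dc1]
            n2 = p[r + 1 + dr2, c + 1 + dc2]
            if mag[r, c] >= n1 and mag[r, c] >= n2:
                out[r, c] = mag[r, c]
    return out
```

On a row of rising magnitudes only the local peak survives, which is exactly the thinning effect the step is meant to achieve.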
The 5 × 5 Gaussian filter used for noise reduction is [61]:

            | 2   4   5   4   2 |
            | 4   9  12   9   4 |
B = (1/159) | 5  12  15  12   5 |  ------------------------ (3-31)
            | 4   9  12   9   4 |
            | 2   4   5   4   2 |
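As a rough illustration of how this kernel is applied (a numpy sketch, not the routine used in this work; a loop-based 'valid' convolution is chosen for clarity, and since the kernel is symmetric, correlation equals convolution):

```python
import numpy as np

# The 5x5 Gaussian approximation of equation (3-31); the entries sum to 159,
# so dividing by 159 makes the kernel integrate to 1.
GAUSS_5X5 = np.array([[2,  4,  5,  4, 2],
                      [4,  9, 12,  9, 4],
                      [5, 12, 15, 12, 5],
                      [4,  9, 12,  9, 4],
                      [2,  4,  5,  4, 2]], dtype=float) / 159.0

def smooth(img):
    """Convolve a grayscale image with the 5x5 Gaussian kernel over the
    'valid' region only, so the output shrinks by 4 pixels per axis."""
    h, w = img.shape
    out = np.empty((h - 4, w - 4))
    for r in range(h - 4):
        for c in range(w - 4):
            out[r, c] = np.sum(img[r:r + 5, c:c + 5] * GAUSS_5X5)
    return out
```

Because the kernel is normalized, a uniform region passes through unchanged, which is the "values of zero in uniform regions" property required of the gradient operators that follow this smoothing step.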
Gx:                 Gy:
| -1   0  +1 |      | +1  +1  +1 |
| -1   0  +1 |      |  0   0   0 |
| -1   0  +1 |      | -1  -1  -1 |
Figure (3-12) 3x3 Masks compute gradient magnitude and angle[61]
The following equations are used to compute the gradient magnitude and angle [61]:
|G| = √(Gx² + Gy²) ……………………………………………………..…..(3-32)
|G| = |Gx| + |Gy| ………………………….……………………..….….(3-33)
θ = arctan(|Gy| / |Gx|) ……………………………………….….(3-34)
The edge detection techniques were implemented and tested with an image (Bharathiar University). The objective is to produce a clean edge map by extracting the principal edge features of the image. The original image and the images obtained by using the different edge detection techniques are given in figures (3-13), (3-14), (3-15), (3-16), and (3-17) [61].
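Equations (3-32) and (3-34) can be sketched in numpy with the 3 × 3 masks of figure (3-12); the function name `gradient` and the 'valid'-region looping are illustrative choices:

```python
import numpy as np

# 3x3 masks of figure (3-12): GX responds to vertical edges, GY to horizontal.
GX = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
GY = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], dtype=float)

def gradient(img):
    """Return |G| (eq. 3-32) and theta in degrees (eq. 3-34) on the
    interior where the full 3x3 window fits."""
    h, w = img.shape
    gx = np.empty((h - 2, w - 2))
    gy = np.empty_like(gx)
    for r in range(h - 2):
        for c in range(w - 2):
            win = img[r:r + 3, c:c + 3]
            gx[r, c] = np.sum(win * GX)
            gy[r, c] = np.sum(win * GY)
    mag = np.hypot(gx, gy)                                   # eq. (3-32)
    theta = np.degrees(np.arctan2(np.abs(gy), np.abs(gx)))   # eq. (3-34)
    return mag, theta
```

On a vertical step edge, Gy vanishes and θ is 0°, i.e. the gradient points across the edge, which is what the non-maximal suppression step relies on.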
Figure (3-13 ) Original image[61]
Figure (3-14 ) Sobel edge detected[61]
Figure (3-15 ) Roberts edge detected[61]
Figure (3-16) Prewitt edge detected[61]
Figure (3-17) Canny edge detected[61]
3-6-6 Noise Reduction (Denoising)
Image denoising is a vital image processing task, both as a process in itself and as a component in other processes, and many methods exist to denoise an image or a set of data. The important property of a good image denoising model is that it should remove noise as completely as possible while preserving edges. Traditionally there are two types of models, linear and non-linear. The benefit of linear noise-removal models is speed; their limitation is that they are not able to preserve the edges of the image efficiently, i.e. the edges, which are recognized as discontinuities in the image, are smeared out. Non-linear models, on the other hand, can handle edges much better than linear models [62].
3-6-6-1 Median Filter
Both the Prewitt and Sobel edge detectors start to break down if there is noise present in the image. A median filter, which is a non-linear filter, can be used to effectively filter out pixels that are much brighter or darker than their neighbors. As such, a median filter is a good method to remove noise
from an image [62]. Median filtering seems almost tailor-made for removal of salt-and-pepper noise. Recall that the median of a set is the middle value when the values are sorted; if there is an even number of values, the median is the mean of the middle two. A median filter is an example of a non-linear spatial filter: using a 3 × 3 mask, the output value is the median of the values in the mask. For example [48]:

| 50   65  52 |
| 63  255  58 |   sorted: 50 52 57 58 60 61 63 65 255   →   median = 60
| 61   60  57 |

Figure (3-18) Spatial filter of 3 × 3 mask [48]
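The filtering described above can be sketched as follows (a numpy sketch; the border pixels are simply left unchanged here, one of several possible border policies):

```python
import numpy as np

def median3x3(img):
    """Replace each interior pixel by the median of its 3x3 neighbourhood;
    border pixels are left unchanged in this sketch."""
    out = img.astype(float).copy()
    h, w = img.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r, c] = np.median(img[r - 1:r + 2, c - 1:c + 2])
    return out
```

Applied to the 3 × 3 window of figure (3-18), the outlier 255 at the centre is replaced by the neighbourhood median 60, while a mean filter would instead smear the outlier into its neighbours.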
3-7 Toolpath Generation
The conventional tool path generation methods can be roughly categorized into iso-parametric, iso-planar and iso-scallop. The iso-parametric method is the earliest of these three methods [63]. A design surface is a three-dimensional parametric surface expressed by two variables, u and v. This method generates a cutter contact (CC) path on the design surface by choosing an initial parametric variable u or v and keeping it constant while increasing the value of the other parametric variable that describes the surface. The generated curves are known as iso-curves. The advantage of using iso-curves is that these curves do not intersect each other and recreate the contour of the parametric surface. The next parametric value is chosen so that the next iso-parametric curve has no points with a scallop height higher than the one specified. This method is mathematically convenient and it ensures that the entire surface is covered
by machining passes. However, depending on the surface, the tool paths generated by this method may be very dense in some zones [64]. As an improvement of the iso-parametric method, [64] proposes the use of iso-curves as CC tool paths with discontinuities in regions where the iso-curves would be too dense. Later methods also use the iso-parametric method along with triangular meshes [65] or adaptive grids [66] to obtain larger step-sizes between adjacent tool paths. Figure (3-19) shows a zig-zag toolpath generated through the use of parametric curves and surfaces [67].
Figure (3-19) Zig-Zag Toolpath of free-form surfaces [67].
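The zig-zag pattern of such a toolpath can be sketched as follows (a numpy sketch over a height grid; the function name `zigzag_path` and the unit step sizes are illustrative, not the CAM routine used in this work):

```python
import numpy as np

def zigzag_path(z, dx=1.0, dy=1.0):
    """Generate zig-zag cutter-location points over a height grid z[j, i]:
    one parametric variable (the row j) is held constant along each pass
    while the other (the column i) sweeps, and the sweep direction
    alternates on every pass to avoid retract moves."""
    pts = []
    for j in range(z.shape[0]):
        cols = range(z.shape[1]) if j % 2 == 0 else range(z.shape[1] - 1, -1, -1)
        for i in cols:
            pts.append((i * dx, j * dy, z[j, i]))
    return pts
```

Each pass is an iso-curve of the gridded surface; the alternation is what makes the path "zig-zag" rather than unidirectional.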
Chapter (4)
Experimental Work
4-1 System Fabrication
Depth represents one of the most important problems in producing a three-dimensional model from 2D images, so a system has been designed and implemented to obtain different depths depending on the elevations of the captured images. A single 3-megapixel webcam was used, connected online to a PC running Windows 7 Home Premium 64-bit with an Intel Core (TM) i5-2410M CPU @ 2.3 GHz and 4096 MB of RAM. The system shown in Fig. (4-1) consists of a transparent plastic cube container with a black vinaigrette base installed on a sleeve to offset the liquid during ascent and descent, with a 2 mm pitch screw for raising and lowering the model. The system was installed on a wooden base with a stand around the container to fix the camera, which is connected directly to the PC to capture multiple images from a single view perpendicular to the model placed inside the container. The container is filled with a liquid of suitable density, and the liquid is compensated at each offset. A ruler with 0.5 mm accuracy is mounted on one face of the transparent container to indicate the elevation of the captured image. An image of the 3D shape system is shown in Fig. (4-1); specifically, the cube is designed as a fluid container in which the object is placed, so that the object elevation can be increased or decreased stably by turning the screw to the left or to the right respectively.
As illustrated in Figures (4-2) and (4-3), part 1 is a stand, made of aluminum section, to install the camera perpendicularly at an appropriate height; part 2 is a box-shaped container with four sides and a base and an open upper face; part 3 is a black vinaigrette base, installed on a sleeve, on which the model is placed for imaging; its hollows are made to offset the liquid during the movement of the base.
Figure (4-1) The system used in the work
[Figure labels: Stand, Camera, Container, Sleeve, Vinaigrette base, Oil seal, Base, Hand screw, Wooden base, Screw]
Figure (4-2) Section of the system used in the work
Figure (4-3) The main parts of the system
Part 4 is a secondary base inserted below the main base, helping to install an oil seal between it and the main base of the box for the purpose of preventing fluid leakage. Part 5 is a sleeve, serrated from the inside, fixed to the vinaigrette base, while part 6 is an oil seal preventing any liquid leak. Part 7 is the screw for raising and lowering the model, part 8 is a handle used to turn the screw, and part 9 is the wooden base on which the system is mounted. Figure (4-4) shows the container fixed on the wooden base and figure (4-5) the handle screw, while figure (4-6) shows the assembly of the handle screw with the sleeve and figure (4-7) illustrates the assembly of the container with the mechanical handle connected to the main base. Figure (4-8) is a detailed engineering drawing of the system and figure (4-9) is an image of the system connected online to the PC.
Figure (4-4) Part 2
Figure (4-5) Parts 7 and 8
Figure (4-6) Assembly of parts 5, 7 and 8
Figure (4-7) Assembly of parts 2, 3, 4, 5, 7 and 8
Figure (4-8) System section drawing
Figure (4-9)The system connected online to the PC
4-2 Steps for Achieving the Work
The steps for achieving the work are shown here in the logical sequence in which it was carried out. The work starts by slicing the model into a series of parallel sections, depending on the required accuracy of the slices and the maximum elevation of the model, and each section is captured as an image. Each image is processed using the MATLAB package to find its edge: the image is converted to grayscale, the Canny edge detector is applied, and the image is converted to a binary image (1 and 0) in which a one represents an edge. The (x, y) coordinates are then taken for every one-valued pixel on all the edges of each slice, and the data of all slices are exported to AutoCAD software to reconstruct, elevate and collect them. The collected slices are then returned to the MATLAB package to perform the subsequent operations and reshape them as mesh and solid types using three mathematical methods, in order to choose the one with the minimum deviation from the original model. In the last step of the developed procedure the model is machined on a CNC milling machine after creating the appropriate toolpath. The following steps fully describe the procedure for sample 1 from beginning to end, starting with capturing the slices of the model and processing them, then reconstructing the detected edges and collecting them after giving an elevation to each slice, and finally reconstructing the three-dimensional surface. The effect of the applied interpolating and fitting methods on the improvement of the reconstructed three-dimensional surface is also shown. All these steps are repeated for samples 2, 3, 4 and 5. Figure (4-10) is a flowchart of the developed procedure.
[Flowchart: Start → physical object covered with liquid → image capturing → depth measuring → image processing (converting to gray scale, thresholding, converting to binary, filtering (median), edge detection) → raising the moveable base one step upward, repeated until the object depth is recovered → object reconstruction using AutoCAD → End]
Figure (4-10) A flowchart of the processing steps
Samples 1 to 4 are different sculptured surfaces whose original data are available (appendices B, C, D, E); they were chosen to be tested in order to compare the original data with the data that came out of the capturing system and with the data obtained using the interpolating and fitting methods.
4-2-1 Sample 1
Figure (4-11) A real model of sample1
4-2-1-1 Step 1
Step one shows all the slices of the model obtained from the developed system by capturing a number of images representing the actual elevations of sample 1; figure (4-11) is a real model of sample 1.
Figure (4-12) Captured images 1to 4
Figure (4-13) Captured images 5to 8
Figure (4-14) Captured images 9 to 12
Figure (4-15) Captured images 13 to 16
Figure (4-16) Captured images 17 to 20
Figure (4-17) Captured images 21 to 24
Figure (4-18) Captured images 25 to 28
Figure (4-19) Captured images 29 to 32
Figure (4-20) Captured image 33
4-2-1-2 Step 2
This step shows the processing of all the images that were captured as slices in step 1 to find their edges using an appropriate MATLAB program. The block diagram in figure (4-21) shows the processing steps applied to the captured images to find the contours of the slices, which will be used to reconstruct the 3D surface of sample 1:
Read the image → convert the color image to gray scale → denoise using a median filter → threshold → convert to a binary image → detect the edge → save the image edge as a slice.
Figure (4-21) Block diagram showing the steps of edge detection for the captured images
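The gray-scale conversion and binarization steps of the block diagram can be sketched as follows (a numpy sketch of what MATLAB's rgb2gray/im2bw pair does; the ITU-R 601 luminance weights are the standard ones, while the function name `to_binary` and the 0.5 threshold are illustrative assumptions):

```python
import numpy as np

def to_binary(rgb, thresh=0.5):
    """Convert an RGB image (floats in [0, 1]) to grayscale using the
    standard luminance weights, then to a binary image (1 and 0) by a
    global threshold, mirroring the rgb2gray / im2bw steps."""
    gray = rgb[..., 0] * 0.2989 + rgb[..., 1] * 0.5870 + rgb[..., 2] * 0.1140
    return (gray > thresh).astype(np.uint8)
```

The resulting ones mark the bright region of the slice; the edge detector then reduces this region to its boundary pixels, whose (x, y) coordinates are exported to AutoCAD.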
The images shown in figures (4-22) to (4-25) describe how the edge of each slice is obtained from the captured image through the gray-scale and binary images.
Figure (4-22 ) Color image represents one slice of sample1
Figure (4-23) Gray scale and pixel region of the slice shown in figure (4-22)
`
Figure (4-24) Binary image( left) and the histogram (right)
Figure (4-25 ) Edge detected and pixel region
Figures (4-26) to (4-34) show all the edges detected using MATLAB for the slices captured using the developed mechanical system.
Figure (4-26) Edge detected for captured images 1 to 4
Figure (4-27) Edge detected for captured images 5 to 8
Figure (4-28) Edge detected for captured images 9 to 12
Figure (4-29) Edge detected for captured images 13 to 16
Figure (4-30) Edge detected for captured images 17 to 20
Figure (4-31) Edge detected for captured images 21 to 24
Figure (4-32) Edge detected for captured images 25 to 28
Figure (4-33) Edge detected for captured images 29 to 32
Figure (4-34) Edge detected for captured image 33
4-2-1-3 Step 3
In this step all the edges of the slices detected in the previous step have been reconstructed using AutoCAD software, as shown in figures (4-35) to (4-43).
Figure (4-35) Edge reconstructed 1 to 4
Figure (4-36) Edge reconstructed 5 to 8
Figure (4-37) Edge reconstructed 9 to 12
Figure (4-38) Edge reconstructed 13 to 16
Figure (4-39) Edge reconstructed 17 to 20
Figure (4-40) Edge reconstructed 21 to 24
Figure (4-41) Edge reconstructed 25 to 28
Figure (4-42) Edge reconstructed 29 to 32
Figure (4-43) Edge reconstructed 33
4-2-1-4 Step 4
This step illustrates the assembly of all the slices reconstructed in step 3, after giving each slice its elevation (measured directly from the mechanical system during image capture), as shown in figure (4-44).
Figure (4-44) Slices have been assembled and elevated
4-2-1-5 Step 5
This step illustrates the application of spline interpolation, nearest interpolation and least square fitting to the data obtained from the captured model of sample 1, after dividing the planar surface of the model into a 50 × 50 mesh grid. Figure (4-45) gives an idea of the way the surface of sample 1 has been divided into the mesh grid. This step has been repeated for the other surfaces.
Figure (4-45 ) Surface divided in order to apply interpolating and fitting methods
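Two of the three methods can be sketched with numpy alone (illustrative stand-ins, not the MATLAB routines used here: the brute-force nearest-neighbour search and the quadratic polynomial basis for the least-squares surface are assumptions, since the exact basis used in the dissertation is not reproduced in this excerpt):

```python
import numpy as np

def nearest_interp(xs, ys, zs, xq, yq):
    """Nearest-neighbour interpolation: each query point takes the z of the
    closest captured point (brute-force search, fine for small point sets)."""
    d2 = (xq[:, None] - xs[None, :])**2 + (yq[:, None] - ys[None, :])**2
    return zs[np.argmin(d2, axis=1)]

def lstsq_quadratic(xs, ys, zs):
    """Fit z ~ a + b*x + c*y + d*x^2 + e*x*y + f*y^2 by linear least
    squares; a low-order stand-in for the least-square fitted surface."""
    A = np.column_stack([np.ones_like(xs), xs, ys, xs**2, xs * ys, ys**2])
    coef, *_ = np.linalg.lstsq(A, zs, rcond=None)
    return coef
```

The fitted coefficients can then be evaluated on the 50 × 50 mesh grid; on exact polynomial data the fit recovers the surface to machine precision, while on noisy captured data it smooths the boundaries, which is the behaviour exploited in chapter 5.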
The block diagram shown in figure (4-46) gives the steps of enhancing the surface using the interpolating and fitting methods and then reconstructing the surface as a mesh grid and as a surface model:
Read the data points → interpolate and fit the data (spline interpolation, least square fitting, nearest interpolation) → reconstruct the surface (as a surface model and as a mesh grid).
Figure (4-46) Block diagram showing the steps for reconstructing the 3D surface
Figures (4-47), (4-48), (4-49) and (4-50) describe the relationship between the x coordinates coming out of figure (4-45) and the elevation z for the data of the surface as captured and for the data obtained using spline interpolation, nearest interpolation and least square fitting.
Figure (4-47) Sample1 as captured data
Figure (4-48) Sample1 after spline interpolating
Figure (4-49) Sample1 after nearest interpolating
Figure (4-50) Sample1 after least square fitting
4-2-1-6 Step 6
This step compares the meshed original data with the captured data drawn as a mesh grid and with the data obtained using spline interpolation, nearest interpolation and least square fitting after mesh-gridding them, figures (4-51) to (4-55).
Figure (4-51) Sample1- original data- meshed
Figure (4-52) Sample1-as captured data- meshed
Figure (4-53) Sample1-spline interpolating data- meshed
Figure (4-54) Sample1-nearest interpolating data- meshed
Figure (4-55) Sample1-least square fitting data- meshed
4-2-1-7 Step 7
This step shows the zig-zag toolpath that can be used to machine the model of sample 1 reconstructed from the least square fitted data. The parametric variable y is kept constant along each pass while the value of the parametric variable x is increased. Figure (4-56) shows the toolpath of sample 1.
Figure (4-56) Toolpath for sample1 after least square fitting
4-2-1-8 Step 8
This step shows the surface models of sample 1, starting from the model built from the original data and ending with the model plotted using the data from the least square fitting method applied to the captured data, passing through the captured surface model, the spline interpolated surface model and the nearest interpolated surface model.
Figure (4-57) Sample1-original data- surface model
Figure (4-58) Sample1-as captured data- surface model
Figure (4-59) Sample1-spline interpolating data- surface model
Figure (4-60) Sample1-nearest interpolating data- surface model
Figure (4-61) Sample1-least square data- surface model
4-2-2 Sample 2
The real model and all the slices of sample 2 with their edges are shown in figures (4-62) to (4-67), and all the steps involved in the implementation of the first model have been repeated here.
Figure (4-62) Real model of Sample2
Figure (4-63) Captured images 1 to 4 and their edges
Figure (4-64) Captured images 5 to 8 and their edges
Figure (4-65) Captured images 9 to 12 and their edges
Figure (4-66) Captured images 13 to 16 and their edges
Figure (4-67) Captured images 17 to 20 and their edges
Figure (4-68) shows all the slices of sample 2 reconstructed using AutoCAD and assembled after giving each slice its elevation, which was measured directly from the mechanical system during image capture.
Figure (4-68) Slices have been assembled and elevated
Figures (4-69) to (4-72) show the relationship between the x coordinates and the elevation z for the data of the captured model and for the data after applying spline interpolation, nearest interpolation and least square fitting. Figures (4-73) to (4-77) illustrate the mesh grids of the captured data and of the data obtained using the interpolating and fitting methods, and figure (4-78) represents the toolpath generated for the least square fitted data. Figures (4-79) to (4-83) are the surface models, giving a visual comparison of the original model with the captured model and with the models reconstructed
using the data from spline interpolation, nearest interpolation and least square fitting.
Figure (4-69) Sample2 as captured data
Figure (4-70) Sample2 after spline interpolating
Figure (4-71) Sample2 after nearest interpolating
Figure (4-72) Sample2 after least square fitting
Figure (4-73) Sample2-original data- meshed
Figure (4-74) Sample2-as captured data- meshed
Figure (4-75) Sample2-spline interpolating data- meshed
Figure (4-76) Sample2-nearest interpolating data- meshed
Figure (4-77) Sample2-least square fitting data- meshed
Figure (4-78) Toolpath for sample2 after least square fitting
Figure (4-79) Sample2-original data- surface model
Figure (4-80) Sample2-as captured data- surface model
Figure (4-81) Sample2-spline interpolating data- surface model
Figure (4-82) Sample2-nearest interpolating data- surface model
Figure (4-83) Sample2-least square fitting data- surface model
4-2-3 Sample 3
Figure (4-84) is a real model of sample 3, and figures (4-85) and (4-86) show some randomly chosen captured images; all the steps involved in the implementation of the first model have been repeated here.
Figure (4-84) A real model of Sample3
Figure (4-85) Random captured images for sample3
Figure (4-86) Edges detected for images shown in figure (4-85)
Figure (4-87) is the assembly of all the slices of sample 3 after giving each slice its elevation, which was measured directly from the mechanical system during image capture.
Figure (4-87) Slices of sample3 assembled and elevated
Figures (4-88) to (4-91) describe the relationship between the x coordinates and the elevation z for the data of the captured model and for the data after applying spline interpolation, nearest interpolation and least square fitting. Figures (4-92) to (4-96) illustrate the mesh grids of the captured data and of the data obtained using the interpolating and fitting methods, and figure (4-97) represents the toolpath generated for the least square fitted data. Figures (4-98) to (4-102) are the surface models, giving a visual comparison of the original model with the captured model and with the models reconstructed using the data from spline interpolation, nearest interpolation and least square fitting.
Figure (4-88) Sample3 as captured data
Figure (4-89) Sample3 after spline interpolating
Figure (4-90) Sample3 after nearest interpolating
Figure (4-91) Sample3 after least square fitting
Figure (4-92) Sample3-original data- meshed
Figure (4-93) Sample3-as captured data- meshed
Figure (4-94) Sample3-spline interpolating data- meshed
Figure (4-95) Sample3-nearest interpolating data- meshed
Figure (4-96) Sample3-least square fitting- meshed
Figure (4-97) Zig-Zag toolpath for sample3 after least square fitting
Figure (4-98) Sample3-original data- surface model
Figure (4-99) Sample3-captured data- surface model
Figure (4-100) Sample3-spline interpolating data- surface model
Figure (4-101) Sample3-nearest interpolating data- surface model
Figure (4-102) Sample3-least square fitting data- surface model
4-2-4 Sample 4
Figures (4-103) to (4-105) show the real model and some randomly chosen images of sample 4, which is the most complex among all four samples tested in this work; all the steps involved in the implementation of the first model have been repeated.
Figure (4-103) A real model of Sample4
Figure (4-104) Random captured images for sample4
Figure (4-105) Edges detected for images shown in figure (4-104)
Figure (4-106) is the assembly of all the slices of sample 4 after giving each slice its elevation, which was measured directly from the mechanical system during image capture.
Figure (4-106) Slices of sample4 assembled and elevated
Figures (4-107) to (4-110) describe the relationship between the x coordinates and the elevation z for the data of the captured model and for the data after applying spline interpolation, nearest interpolation and least square fitting. Figures (4-111) to (4-115) illustrate the mesh grids of the captured data and of the data obtained using the interpolating and fitting methods, and figure (4-116) represents the toolpath generated for the least square fitted data. Figures (4-117) to (4-121) are the surface models, giving a visual comparison of the original model with the captured model
and with the models reconstructed using the data from spline interpolation, nearest interpolation and least square fitting.
Figure (4-107) Sample4 as captured
Figure (4-108) Sample4 after spline interpolating
Figure (4-109) Sample4 after nearest interpolating
Figure (4-110) Sample4 after least square fitting
Figure (4-111) Sample4-original data- meshed
Figure (4-112) Sample4-as captured data- meshed
Figure (4-113) Sample4-spline interpolating data- meshed
Figure (4-114) Sample4-nearest interpolating data- meshed
Figure (4-115) Sample4-least square fitting data- meshed
Figure (4-116) Zig-Zag toolpath for sample4 after least square fitting
Figure (4-117) Sample4-original data- surface model
Figure (4-118) Sample4-as captured data- surface model
Figure (4-119) Sample4-spline interpolating data- surface model
Figure (4-120) Sample4-nearest interpolating data- surface model
Figure (4-121) Sample4-least square fitting data- surface model
4-2-5 Sample 5
A mouse shape, shown in figure (4-122), is an object used frequently in our daily lives whose original data are not available. It has been tested: captured as slices, processed, reconstructed as captured, enhanced using the interpolating and fitting methods, and reconstructed as a mesh grid and as a surface model. Figures (4-123) to (4-126) are the slices of sample 5 with their edges, and figure (4-127) represents the slices assembled and elevated.
Figure (4-122) Real shape of sample 5
Figure (4-123) Captured images 1 to 4 and their edges
Figure (4-124) Captured images 5 to 8 and their edges
Figure (4-125) Captured images 9 to 12 and their edges
Figure (4-126) Captured images 13 to 15 and their edges
Figure (4-127) Slices of sample 5 assembled and elevated
Figures (4-128) to (4-131) show the relationship between the x coordinates and the elevation z for the data of the captured model and for the data after applying spline interpolation, nearest interpolation and least square fitting. Figures (4-132) to (4-134) illustrate the mesh grids, and figure (4-135) the toolpath generated for the least square fitted data. Figures (4-136) to (4-139) are the surface models, giving a visual comparison of the captured surface model with the models reconstructed using the data
from spline interpolation, nearest interpolation and least square fitting.
Figure (4-128) Sample 5 as captured data
Figure (4-129) Sample 5 after spline interpolating
Figure (4-130) Sample5 after nearest interpolating
Figure (4-131) Sample5 after least square fitting
Figure (4-132) Sample5-spline interpolating data- meshed
Figure (4-133) Sample5-nearest interpolating data- meshed
Figure (4-134) Sample5-least square fitting data- meshed
Figure (4-135) Zig-Zag toolpath for sample5 after least square fitting
Figure (4-136) Sample 5 -as captured data - surface model
Figure (4-137) Sample 5- spline interpolating data - surface model
Figure (4-138) Sample 5- nearest interpolating data – surface model
Figure (4-139) Sample 5- least square fitting data - surface model
4-3 The Implementation
The developed procedure based on the image slicing technique was evaluated by machining the obtained data for one sculptured surface (sample 3) using a Lab-Volt CNC mill system model 56002 in the Automation Laboratory of the Production Engineering and Metallurgy Dept. The machining steps of sample 3 are illustrated in figures (4-140) to (4-144).
Figure (4-140) The raw material as a block
Figure (4-141) Zig - Zag Toolpath to machine sample3
Figure (4-142) One stage of machining sample3
Figure (4-143) Advanced stage of machining sample3
Figure (4-144 ) Sample3 fully machined surface
Chapter (5)
Results and Discussion
5-1 Introduction
The developed system has been tested and evaluated; spline interpolation, nearest-neighbor interpolation and least square fitting were used to improve the models obtained by capturing the original models. A number of statistical formulas have been used to calculate the average error in elevation, the standard deviation and the percentage of error for the four models. The deviation distributions of the original surfaces against those reconstructed with the different interpolating and fitting methods have also been represented.
5-2 Statistical Comparison
Tables 5-1 to 5-4 present the statistical comparison between the different methods of interpolating and fitting and the captured reconstructed model. The comparison is made between the values of the average error (Ea), the standard deviation (σ) and the percentage of error (Epercentage) for the four samples, and it is based on the original data of the samples used in this work.
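The three statistics can be sketched as follows. The exact formulas are not reproduced in this excerpt, so the definitions below are assumptions: Ea as the mean absolute deviation in elevation, σ as the standard deviation of those deviations, and Epercentage as the mean relative error times 100.

```python
import numpy as np

def error_stats(z_ref, z_test):
    """Average error Ea, standard deviation, and percentage error between
    a reference surface and a reconstructed one. Definitions assumed:
    Ea = mean |dz|, sigma = std of |dz|, Ep = mean(|dz| / |z_ref|) * 100."""
    e = np.abs(z_test - z_ref)
    ea = e.mean()
    sd = e.std(ddof=0)
    ep = 100.0 * (e / np.abs(z_ref)).mean()
    return ea, sd, ep
```

Comparing each reconstructed elevation grid against the original data with such a function yields one (Ea, σ, Epercentage) triple per method, which is the form of the entries in tables 5-1 to 5-4.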
Table (5-1) Statistical comparison of different analyses for sample 1

| Statistical analysis                | As captured | Spline interpolation | Least square fitting | Nearest interpolation |
|-------------------------------------|-------------|----------------------|----------------------|-----------------------|
| Average Ea (mm)                     | 7.2137      | 6.9550               | 0.6573               | 6.8070                |
| Standard deviation S.D (σ)          | 3.4139      | 3.3760               | 7.3183e-015          | 3.3608                |
| Percentage of error Epercentage (%) | 22.2490     | 21.4510              | 2.0273               | 20.9946               |
Table (5-2) Statistical comparison of different analyses for sample 2

| Statistical analysis                | As captured | Spline interpolation | Least square fitting | Nearest interpolation |
|-------------------------------------|-------------|----------------------|----------------------|-----------------------|
| Average Ea (mm)                     | 3.7843      | 3.1483               | 0.6658               | 3.3748                |
| Standard deviation S.D (σ)          | 1.1701      | 1.1003               | 0.2201               | 1.1243                |
| Percentage of error Epercentage (%) | 10.9313     | 9.0939               | 1.9231               | 9.0939                |
Table (5-3) Statistical comparison of different analyses for sample 3

| Statistical analysis                | As captured | Spline interpolation | Least square fitting | Nearest interpolation |
|-------------------------------------|-------------|----------------------|----------------------|-----------------------|
| Average Ea (mm)                     | 3.7318      | 0.4913               | 0.5319               | 0.5290                |
| Standard deviation S.D (σ)          | 3.3894      | 0.6353               | 0.5393               | 0.6771                |
| Percentage of error Epercentage (%) | 7.4925      | 0.9864               | 1.0678               | 1.0621                |
Table (5-4) Statistical comparison of different analyses for sample 4

| Statistical analysis                | As captured | Spline interpolation | Least square fitting | Nearest interpolation |
|-------------------------------------|-------------|----------------------|----------------------|-----------------------|
| Average Ea (mm)                     | 2.4745      | 2.4745               | 2.4648               | 2.4746                |
| Standard deviation S.D (σ)          | 2.0145      | 2.0145               | 2.0353               | 2.0145                |
| Percentage of error Epercentage (%) | 8.7256      | 8.7256               | 8.6913               | 8.7258                |
5-3 Error Distribution
Figures 5-1 to 5-16 show the error distributions for all four samples.
Figure (5-1) Error distribution for sample 1 when captured
Figure (5-2) Error distribution for sample 1 when spline interpolated
Figure (5-3) Error distribution for sample 1 when nearest interpolated
Figure (5-4) Error distribution for sample 1 when least square fitted
Figure (5-5) Error distribution sample 2 when captured
Figure (5-6) Error distribution for sample 2 when spline interpolated
Figure (5-7) Error distribution for sample 2 when nearest interpolated
Figure (5-8) Error distribution for sample 2 when least square fitted
Figure (5-9) Error distribution sample 3 when captured
Figure (5-10) Error distribution for sample 3 when spline interpolated
Figure (5-11) Error distribution for sample 3 when nearest interpolated
Figure (5-12) Error distribution for sample 3 when least square fitted
Figure (5-13) Error distribution sample 4 when captured
Figure (5-14) Error distribution for sample 4 when spline interpolated
Figure (5-15) Error distribution for sample 4 when nearest interpolated
Figure (5-16) Error distribution for sample 4 when least square fitted
5-4 The Matching between the Surfaces Constructed Using Original Data and the Samples Reconstructed Using Least Square Fitted Data for the Four Samples
Here the matching is done between the surfaces constructed using the original data and the surfaces reconstructed using the least square fitted data for the four samples. The least square fitted data were chosen because this fitting method gives the best results in comparison with the other two interpolating methods. The matching is performed by drawing the data of the original surface and that of the least square method on the same axes, which gives a good visual basis for comparing them.
Solid – original data; marked mesh – least square fitted data
Figure (5-17) Matching between the surfaces with original data and least square fitted data of sample 1
Solid – original data; marked mesh – least square fitted data
Figure (5-18) Matching between the surfaces with original data and least square fitted data of sample 2
Solid – original data; marked mesh – least square fitted data
Figure (5-19) Matching between the surfaces with original data and least square fitted data of sample 3
Solid – original data; marked mesh – least square fitted data
Figure (5-20) Matching between the surfaces with original data and least square fitted data of sample 4
5-5 Discussion
In reconstructing a sculptured surface using the adopted procedure, the main shortcoming occurs around the surface boundaries; this problem is successfully solved using the 2D least square fitting method, as is clearly illustrated in the matching behavior between the reconstructed sculptured surface and the original one.

By tracking the results in Table (5-1) it can be noted that the average error for the reconstructed surface is the largest compared with the average error obtained when methods are used to improve the degree of correspondence between the reconstructed surface and the real shape of sample 1; the average error resulting from the use of least square fitting is the smallest. The standard deviation values converge between those of the object reconstructed with spline and with nearest interpolation, while the standard deviation is smallest when the least square method is used. The percentage of error ranges between (20.9946), for nearest interpolation, and (22.2490), while the percentage of error is (2.0273) when the least square fitting method is used for sample 1.

For sample 2, as in Table (5-2), the average error values are very close, ranging from (3.1483), when the spline method is used, to (3.3748) for the reconstructed model and when nearest neighbor interpolation is used; the average error is (0.6658) when least square fitting is used. The standard deviation values are consistent between the reconstructed surface and the two interpolation methods, spline and nearest, while the standard deviation when the least square method is used is (0.2201), which is the smallest. The percentage of error lies between (9.0939), when spline and nearest neighbor interpolation are used, and (10.9313) for the reconstructed surface, while the percentage of error when the least square method is used is (1.9231).

Table (5-3) for sample 3 shows that the average error for the reconstructed three-dimensional surface is the highest (3.7318), while the average error is lower when the interpolation and fitting methods are used to improve the shape of the reconstructed surface. The standard deviation is (3.3894) for the reconstructed surface, while it is (0.5393) when the least square fitting method is used. The percentage of error is (7.4925) for the reconstructed three-dimensional surface, while the error rate is close to the correct one when the interpolation and fitting methods are used to improve the degree of matching between the reconstructed surface and the real shape.

Sample 4, which is the most complex among the five samples selected for testing in this work, is summarized in Table (5-4). The values of the average error are close between the reconstructed surface and the methods used to improve its shape, ranging between (2.4648) and (2.4745); likewise the maximum error in height lies between (11.5547) and (12.0511), the standard deviation values converge, ranging from (2.0145) to (2.0353), and the percentage of error is similarly convergent, oscillating between (8.6913) and (8.7258).

The differences between the statistics of the mathematical methods used to enhance the final shape of the reconstructed surfaces are due to the nature of the methods tested: the least square fitting method gives better results because the fitted curve does not pass through all the exact points but makes a good fit among them, so it bypasses the abnormal points; in contrast, the nearest and spline interpolations pass through all the points of the real curve, so their results stay close to the captured data.

Figures 5-1 to 5-4 show the distribution of error for sample 1 as reconstructed. The distribution of error generally takes the form of the surface and is similar in each case, except for the error distribution of the surface to which the least square fitting method was applied: it takes the form of a straight line because the percentage of error is close to zero. Figures 5-5 to 5-8 represent the distribution of the error over the surface of sample 2. It also takes the form of the surface, except for the error distribution of the surface to which the least square fitting method was applied, which appears smoother compared with the reconstructed surface and with the other methods used to improve the degree of matching between the three-dimensional model and the true shape of the sample; the difference is due to the nature of least square fitting. Figures 5-9 to 5-12 represent the distribution of the error over the surface of sample 3. The distribution of error takes the form of the reconstructed three-dimensional surface, except in Figure 5-12, where the distribution of error is similar to the real form of the sample; the reason is the use of the fitting method for enhancing the final surface. Figures 5-13 to 5-16 represent the distribution of the error over the surface of sample 4. The error is approximately distributed like the reconstructed three-dimensional surface. Figure 5-16 represents the distribution of the error over the surface after the application of the least square fitting method, which takes the shape of the real surface; this surface is the most complex among the four samples tested.

Figure 5-17 shows the degree of matching between the surface of the original data for sample 1 and the surface reconstructed using the data obtained after applying the least square fitting method. Figure 5-18 represents the matching between the original surface for sample 2 and the surface reconstructed using the enhanced data resulting from least square fitting. Figure 5-19 shows the matching between the surface of the original data of sample 3 and the surface reconstructed using the enhanced data obtained by least square fitting. Figure 5-20 shows the degree of matching of the original surface for sample 4 with the reconstructed model using the enhanced data obtained by least square fitting.

The original data of all samples 1 to 4 that were chosen and tested are available; therefore the comparison was made between them and the reconstructed data as captured, and the data obtained using the interpolation and fitting methods. Sample 5, a real object (a computer mouse model), was reconstructed using the adopted developed system with the same procedure as the earlier reconstructed samples. Figures 4-128 to 4-134 show the same behavior as for the four samples mentioned before. The reconstructed surface differs from the original surface, but with the fitting method the output surface is enhanced and appears very close to the original surface, as can be seen in Figure 4-139. In order to assess the work, the surface of sample 3, which was reconstructed using the enhanced data resulting from the least square fitting method, was machined using a CNC milling machine, as shown in Figures 4-140 to 4-144.
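The contrast drawn in the discussion, where least square fitting bypasses abnormal points while spline and nearest interpolation reproduce every captured point, can be demonstrated numerically. This is a 1D sketch with hypothetical data; SciPy's interpolators and NumPy's polynomial fit stand in for the 2D programs developed in the dissertation.

```python
import numpy as np
from scipy.interpolate import CubicSpline, interp1d

# a smooth quadratic profile sampled at 11 points, with one abnormal point
x = np.linspace(0.0, 10.0, 11)
z = 0.5 * x ** 2 - 2.0 * x + 1.0
z_noisy = z.copy()
z_noisy[5] += 5.0                              # simulated capture error at x = 5

xq = np.array([5.0])                           # query at the corrupted sample
true_val = 0.5 * 25.0 - 10.0 + 1.0             # uncorrupted value, 3.5

nearest = interp1d(x, z_noisy, kind="nearest")(xq)  # reproduces the outlier
spline = CubicSpline(x, z_noisy)(xq)                # also passes through it
coeff = np.polyfit(x, z_noisy, 2)                   # least squares parabola
lsq = np.polyval(coeff, xq)                         # smooths the outlier away

print(abs(nearest[0] - true_val),   # ~5: nearest keeps the bad point
      abs(spline[0] - true_val),    # ~5: the spline interpolates through it
      abs(lsq[0] - true_val))       # much smaller: the fit averages it out
```

Both interpolators reproduce the 5 mm outlier exactly, while the least squares parabola spreads its influence over the whole profile and stays close to the true value, which is the behavior the statistics in Tables 5-1 to 5-4 reflect.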
Chapter (6)
Conclusions and Suggestions for Future Work
6-1 Conclusions
The present dissertation aims to implement a mechanical system capable of capturing images of an object as slices in order to reconstruct its 3D surface. Based on the acquired results, the following conclusions can be drawn:
1. Using the 3D reconstruction system implemented in this work, a good theoretical accuracy of the reconstructed surfaces is obtained directly, while the theoretical results are generally enhanced when the captured data are fitted and interpolated.
2. The experiments have proved the success of the designed system, which has been implemented locally and yields a three-dimensional model using a single camera, bypassing the issue of camera calibration.
3. The general trend of the reconstructed models largely corresponds to the true shapes of the five samples used in the work.
4. The variations and differences between the reconstructed surfaces are located at the edges of the surfaces, over areas far from the centers of the samples.
5. The least square fitting method gives better results than the spline and nearest interpolation methods used to improve the degree of matching between the reconstructed surfaces and the true shapes.
6. With the adopted developed system and the developed fitting and interpolation programs, a powerful and flexible tool is available for good, semi-automatic 3D surface processing that can be used in several CAD/CAM applications.
7. The developed system provides an effective solution to a significant challenge in reconstructing 3D models by means of image processing, namely the degree of matching of the reconstructed surfaces; the matching of the tested surfaces in this work varies between (91.3087) for sample 4 and (98.9322) for sample 3.
6-2 Suggestions for Future Work
1. Design a more accurate system by reducing the pitch of the lifting screw, as well as addressing the problems of light reflection on the liquid and on the model while capturing images.
2. Use other mathematical methods, such as Lagrange interpolation, in the event of disagreement in the match between the reconstructed model and the real model.
3. Compare the three-dimensional surfaces reconstructed from two-dimensional images using the system proposed in this work with three-dimensional surfaces resulting from the integration of images captured using two cameras positioned at two different corners.
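Suggestion 2 proposes Lagrange interpolation as an alternative. As a brief sketch of the idea (a generic textbook form, not code from the developed system), the Lagrange interpolant through n points is a weighted sum of basis polynomials:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i in range(len(xs)):
        # basis polynomial L_i(x): equals 1 at xs[i] and 0 at every other node
        li = 1.0
        for j in range(len(xs)):
            if j != i:
                li *= (x - xs[j]) / (xs[i] - xs[j])
        total += ys[i] * li
    return total

# the parabola x**2 + x + 1 passes through (0, 1), (1, 3), (2, 7)
print(lagrange_eval([0.0, 1.0, 2.0], [1.0, 3.0, 7.0], 3.0))  # 13.0
```

Like spline and nearest interpolation, the Lagrange polynomial passes through every data point, so it would serve as another interpolation benchmark rather than a smoothing fit.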
6-3 Contribution of this Dissertation
The main contribution of this work is that the system which has been proposed, established, and implemented locally leads to good performance in reconstructing 3D sculptured surfaces in comparison with existing systems.
Appendix A
Parts of the part program to implement sample 3

00001 G90 G21
00002 M03 S1500
00003 G01 X0.00 Y0.0003 Z-7.7152 F1500
00004 X0.00 Y2.0032 Z-7.5154
00005 X0.00 Y4.006 Z-7.2205
00006 X0.00 Y6.0089 Z-6.9721
00007 X0.00 Y8.0117 Z-6.7447
00008 X0.00 Y10.0146 Z-6.5128
00009 X0.00 Y12.0174 Z-6.3384
00010 X0.00 Y14.0203 Z-6.1535
.
.
.
00090 X2.0408 Y24.0346 Z-6.366
00091 X2.0408 Y22.0317 Z-6.4937
00092 X2.0408 Y20.0289 Z-6.6248
00093 X2.0408 Y18.026 Z-6.7826
00094 X2.0408 Y16.0232 Z-6.9488
00095 X2.0408 Y14.0203 Z-7.127
00096 X2.0408 Y12.0174 Z-7.3318
00097 X2.0408 Y10.0146 Z-7.5305
00098 X2.0408 Y8.0117 Z-7.7764
00099 X2.0408 Y6.0089 Z-8.0211
00100 X2.0408 Y4.006 Z-8.2847
.
.
.
00930 X36.7347 Y54.0775 Z-9.1311
00931 X36.7347 Y56.0803 Z-9.0207
00932 X36.7347 Y58.0832 Z-8.9048
00933 X36.7347 Y60.086 Z-8.7904
00934 X36.7347 Y62.0889 Z-8.6762
00935 X36.7347 Y64.0917 Z-8.5579
00936 X36.7347 Y66.0946 Z-8.4489
00937 X36.7347 Y68.0975 Z-8.3343
00938 X36.7347 Y70.1003 Z-8.2265
00939 X36.7347 Y72.1032 Z-8.1192
00940 X36.7347 Y74.106 Z-8.0113
.
.
.
01001 X38.7755 Y2.0032 Z-9.8686
01002 X38.7755 Y0.0003 Z-9.7524
01003 X40.8163 Y0.0003 Z-9.1601
01004 X40.8163 Y2.0032 Z-9.3183
01005 X40.8163 Y4.006 Z-9.4697
01006 X40.8163 Y6.0089 Z-9.6036
01007 X40.8163 Y8.0117 Z-9.7218
01008 X40.8163 Y10.0146 Z-9.8337
01009 X40.8163 Y12.0174 Z-9.918
01010 X40.8163 Y14.0203 Z-9.9984
.
.
.
01750 X69.3877 Y94.1346 Z-10.053
01751 X69.3877 Y96.1375 Z-9.8056
01752 X69.3877 Y98.1403 Z-9.5524
01753 X71.4285 Y98.1403 Z-9.6753
01754 X71.4285 Y96.1375 Z-9.9448
01755 X71.4285 Y94.1346 Z-10.2131
01756 X71.4285 Y92.1317 Z-10.4738
01757 X71.4285 Y90.1289 Z-10.7269
01758 X71.4285 Y88.126 Z-10.9682
01759 X71.4285 Y86.1232 Z-11.208
01760 X71.4285 Y84.1203 Z-11.4337
01761 X71.4285 Y82.1175 Z-11.6513
.
.
.
01801 X71.4285 Y2.0032 Z-1.4806
01802 X71.4285 Y0.0003 Z-0.6003
01803 X73.4693 Y0.0003 Z-0.3336
01804 X73.4693 Y2.0032 Z-1.2614
01805 X73.4693 Y4.006 Z-2.2988
01806 X73.4693 Y6.0089 Z-3.2442
01807 X73.4693 Y8.0117 Z-4.1369
01808 X73.4693 Y10.0146 Z-4.9763
01809 X73.4693 Y12.0174 Z-5.7751
01810 X73.4693 Y14.0203 Z-6.5237
.
.
.
02475 X100.00 Y54.0775 Z-21.5698
02476 X100.00 Y52.0746 Z-21.7948
02477 X100.00 Y50.0717 Z-21.98003
02478 X100.00 Y48.0689 Z-22.116
02479 X100.00 Y46.066 Z-22.19964
02480 X100.00 Y44.0632 Z-22.23845
02481 X100.00 Y42.0603 Z-22.
.
.
.
02490 X100.00 Y24.0346 Z-19.269
02491 X100.00 Y22.0317 Z-18.5862
02492 X100.00 Y20.0289 Z-17.8475
.
.
.
02500 X100.00 Y4.006 Z-8.8211
02501 X100.00 Y2.0032 Z-7.2068
02503 M30
Appendix B
Parts of the original data for sample 1

X          Y          Z
0          0          -32.4225
2.040857   0          -32.2794
4.081714   0          -32.1207
6.122571   0          -31.9512
8.163429   0          -31.7614
10.20429   0          -31.5692
12.24514   0          -31.3494
14.286     0          -31.1242
16.32686   0          -30.8933
. . .
97.96114   2.040857   -32.133
100.002    2.040857   -32.2794
0          4.081714   -32.1207
2.040857   4.081714   -31.9463
4.081714   4.081714   -31.7335
6.122571   4.081714   -31.507
8.163429   4.081714   -31.2619
10.20429   4.081714   -30.9974
12.24514   4.081714   -30.7157
14.286     4.081714   -30.4172
16.32686   4.081714   -30.1039
. . .
97.96114   6.122571   -31.7487
100.002    6.122571   -31.9512
0          8.163429   -31.7619
2.040857   8.163429   -31.5346
4.081714   8.163429   -31.2619
6.122571   8.163429   -30.9685
8.163429   8.163429   -30.6508
10.20429   8.163429   -30.3094
12.24514   8.163429   -29.944
14.286     8.163429   -29.5579
16.32686   8.163429   -29.1528
. . .
97.96114   10.20429   -31.3043
100.002    10.20429   -31.5692
0          12.24514   -31.3504
2.040857   12.24514   -31.059
4.081714   12.24514   -30.7157
6.122571   12.24514   -30.3455
8.163429   12.24514   -29.944
10.20429   12.24514   -29.5128
12.24514   12.24514   -29.0512
. . .
14.286     12.24514   -28.5634
16.32686   12.24514   -28.0517
97.96114   14.286     -30.7986
100.002    14.286     -31.1242
0          16.32686   -30.8939
2.040857   16.32686   -30.5265
4.081714   16.32686   -30.1039
6.122571   16.32686   -29.6483
8.163429   16.32686   -29.1528
10.20429   16.32686   -28.6216
12.24514   16.32686   -28.0517
. . .
14.286     16.32686   -27.4501
16.32686   16.32686   -26.8196
97.96114   18.36771   -30.2424
100.002    18.36771   -30.6395
0          20.40857   -30.3951
2.040857   20.40857   -29.9506
4.081714   20.40857   -29.4442
6.122571   20.40857   -28.8955
8.163429   20.40857   -28.2992
10.20429   20.40857   -27.659
12.24514   20.40857   -26.9733
. . .
97.96114   38.77629   -27.5246
100.002    38.77629   -28.2861
0          40.81714   -28.1368
2.040857   40.81714   -27.3475
4.081714   40.81714   -26.4582
6.122571   40.81714   -25.4881
8.163429   40.81714   -24.4354
10.20429   40.81714   -23.3028
12.24514   40.81714   -22.092
. . .
14.286     40.81714   -20.8116
16.32686   40.81714   -19.4676
97.96114   59.18486   -27.3475
100.002    59.18486   -28.1368
0          61.22571   -28.2861
2.040857   61.22571   -27.5246
4.081714   61.22571   -26.6613
6.122571   61.22571   -25.7194
8.163429   61.22571   -24.698
10.20429   61.22571   -23.5987
12.24514   61.22571   -22.4239
. . .
14.286     61.22571   -21.1809
16.32686   61.22571   -19.877
97.96114   79.59343   -29.9506
100.002    79.59343   -30.3951
0          81.63429   -30.6395
2.040857   81.63429   -30.2424
4.081714   81.63429   -29.7789
6.122571   81.63429   -29.2764
8.163429   81.63429   -28.7316
10.20429   81.63429   -28.1461
12.24514   81.63429   -27.5197
Appendix C
Parts of the original data for sample 2

X          Y          Z
0          0          -17.31
2.040816   0          -17.31
4.081633   0          -17.31
6.122449   0          -17.31
8.163265   0          -17.31
10.20408   0          -17.31
12.2449    0          -17.31
14.28571   0          -17.31
16.32653   0          -17.31
. . .
97.95918   2.040816   -18.1728
100        2.040816   -18.228
0          4.081633   -17.31
2.040816   4.081633   -17.31
4.081633   4.081633   -17.31
6.122449   4.081633   -17.3102
8.163265   4.081633   -17.3107
10.20408   4.081633   -17.3116
12.2449    4.081633   -17.313
14.28571   4.081633   -17.3149
16.32653   4.081633   -17.3175
. . .
97.95918   6.122449   -19.8897
100        6.122449   -20.0534
0          8.163265   -17.31
2.040816   8.163265   -17.3095
4.081633   8.163265   -17.3093
6.122449   8.163265   -17.3093
8.163265   8.163265   -17.31
10.20408   8.163265   -17.3114
12.2449    8.163265   -17.3137
14.28571   8.163265   -17.3172
. . .
16.32653   8.163265   -17.322
97.95918   38.77551   -31.1427
100        38.77551   -32.1261
0          40.81633   -17.3096
2.040816   40.81633   -17.2477
4.081633   40.81633   -17.1864
6.122449   40.81633   -17.1269
8.163265   40.81633   -17.0702
10.20408   40.81633   -17.0173
12.2449    40.81633   -16.969
14.28571   40.81633   -16.9264
. . .
16.32653   40.81633   -16.8903
97.95918   59.18367   -33.2073
100        59.18367   -34.6034
0          61.22449   -17.3085
2.040816   61.22449   -17.0995
4.081633   61.22449   -16.8904
6.122449   61.22449   -16.684
8.163265   61.22449   -16.482
10.20408   61.22449   -16.2855
12.2449    61.22449   -16.096
. . .
97.95918   79.59184   -28.7527
100        79.59184   -30.4178
0          81.63265   -17.3052
2.040816   81.63265   -16.8109
4.081633   81.63265   -16.3134
6.122449   81.63265   -15.8197
8.163265   81.63265   -15.3317
10.20408   81.63265   -14.8511
12.2449    81.63265   -14.3799
. . .
16.32653   81.63265   -13.4732
100        95.91837   -20.7555
0          97.95918   -17.3085
2.040816   97.95918   -16.4472
4.081633   97.95918   -15.5864
6.122449   97.95918   -14.7303
8.163265   97.95918   -13.8808
10.20408   97.95918   -13.0404
12.2449    97.95918   -12.2111
14.28571   97.95918   -11.3954
16.32653   97.95918   -10.5954
. . .
79.59184   100        -4.20504
81.63265   100        -5.0767
83.67347   100        -6.03979
85.71429   100        -7.09654
87.7551    100        -8.24923
89.79592   100        -9.50017
91.83673   100        -10.8517
93.87755   100        -12.306
95.91837   100        -13.8654
97.95918   100        -15.5326
Appendix D
Parts of the original data for sample 3

X          Y          Z
0          0          45
0.757602   0          44.2925
1.515358   0          43.62403
2.273423   0          42.99385
3.031952   0          42.4012
3.791099   0          41.84532
4.55102    0          41.32546
5.311868   0          40.84086
6.073798   0          40.39078
. . .
100        0          46
0          0.757602   45.0609
0.757602   0.757602   44.35516
1.515358   0.757602   43.68818
2.273423   0.757602   43.0592
3.031952   0.757602   42.46749
3.791099   0.757602   41.91228
4.55102    0.757602   41.39282
5.311868   0.757602   40.90837
6.073798   0.757602   40.45817
. . .
100        0.757602   45.36915
0          1.515358   45.12232
0.757602   1.515358   44.41847
1.515358   1.515358   43.7531
2.273423   1.515358   43.12545
3.031952   1.515358   42.53479
3.791099   1.515358   41.98036
4.55102    1.515358   41.46142
5.311868   1.515358   40.97721
6.073798   1.515358   40.52699
. . .
100        6.836965   40.72352
0          7.601523   45.62224
0.757602   7.601523   44.93831
1.515358   7.601523   44.29042
2.273423   7.601523   43.67786
3.031952   7.601523   43.0999
3.791099   7.601523   42.55584
4.55102    7.601523   42.04495
5.311868   7.601523   41.56652
. . .
100        14.57066   35.94049
0          15.35764   46.21913
0.757602   15.35764   45.57281
1.515358   15.35764   44.95905
2.273423   15.35764   44.37718
3.031952   15.35764   43.82654
3.791099   15.35764   43.30646
4.55102    15.35764   42.81626
5.311868   15.35764   42.3553
6.073798   15.35764   41.92288
. . .
100        22.59809   32.32988
0          23.42293   46.70409
0.757602   23.42293   46.11074
1.515358   23.42293   45.54595
2.273423   23.42293   45.00912
3.031952   23.42293   44.49965
3.791099   23.42293   44.01693
4.55102    23.42293   43.56037
5.311868   23.42293   43.12936
6.073798   23.42293   42.7233
. . .
100        40.15247   28.77433
0          41.09944   46.99196
0.757602   41.09944   46.55579
1.515358   41.09944   46.13859
2.273423   41.09944   45.73992
3.031952   41.09944   45.35937
3.791099   41.09944   44.99649
4.55102    41.09944   44.65088
5.311868   41.09944   44.32209
. . .
100        85.58795   37.3648
0          86.96469   42.41548
0.757602   86.96469   42.53516
1.515358   86.96469   42.64768
2.273423   86.96469   42.75315
3.031952   86.96469   42.85169
3.791099   86.96469   42.94341
4.55102    86.96469   43.02843
5.311868   86.96469   43.10688
6.073798   86.96469   43.17886
. . .
85.58795   100        41.4752
86.96469   100        41.57776
88.35534   100        41.69067
89.76006   100        41.81421
91.17901   100        41.94866
92.61233   100        42.09429
94.06018   100        42.25139
95.52272   100        42.42024
97.0001    100        42.6011
98.49247   100        42.79426
Appendix E
Parts of the original data for sample 4

X          Y          Z
0          0          2.25
0.204082   0          2.25
0.408163   0          2.25
0.612245   0          2.25
0.816327   0          2.25
1.020408   0          2.25
1.22449    0          2.25
1.428571   0          2.25
1.632653   0          2.25
. . .
10         0.204082   1.35036
0          0.408163   2.25
0.204082   0.408163   2.250043
0.408163   0.408163   2.25
0.612245   0.408163   2.249784
0.816327   0.408163   2.249309
1.020408   0.408163   2.248488
1.22449    0.408163   2.247235
1.428571   0.408163   2.24493
1.632653   0.408163   2.242389
. . .
10         0.612245   -0.44028
0          0.816327   2.25
0.204082   0.816327   2.250432
0.408163   0.816327   2.250691
0.612245   0.816327   2.250605
0.816327   0.816327   2.25
1.020408   0.816327   2.248704
1.22449    0.816327   2.246544
1.428571   0.816327   2.242366
1.632653   0.816327   2.23763
. . .
10         1.020408   -2.205
0          1.22449    2.25
0.204082   1.22449    2.251512
0.408163   1.22449    2.252765
0.612245   1.22449    2.253499
0.816327   1.22449    2.253456
1.020408   1.22449    2.252376
1.22449    1.22449    2.25
1.428571   1.22449    2.244813
. . .
10         1.836735   -5.79008
0          2.040816   2.25
0.204082   2.040816   2.25768
0.408163   2.040816   2.264917
0.612245   2.040816   2.271268
0.816327   2.040816   2.276291
1.020408   2.040816   2.279543
1.22449    2.040816   2.280581
1.428571   2.040816   2.27809
1.632653   2.040816   2.272527
. . .
10         3.877551   -12.6306
0          4.081633   2.25
0.204082   4.081633   2.311881
0.408163   4.081633   2.372877
0.612245   4.081633   2.432102
0.816327   4.081633   2.488669
1.020408   4.081633   2.541695
1.22449    4.081633   2.590292
1.428571   4.081633   2.643462
1.632653   4.081633   2.678859
. . .
10         7.959184   -10.9143
0          8.163265   2.25
0.204082   8.163265   2.736916
0.408163   8.163265   3.222071
0.612245   8.163265   3.703705
0.816327   8.163265   4.180059
1.020408   8.163265   4.64937
1.22449    8.163265   5.10988
1.428571   8.163265   5.670457
. . .
10         9.591837   -1.13688
0          9.795918   2.25
0.204082   9.795918   3.09672
0.408163   9.795918   3.941323
0.612245   9.795918   4.781693
0.816327   9.795918   5.615712
1.020408   9.795918   6.441264
1.22449    9.795918   7.256232
1.428571   9.795918   8.256833
1.632653   9.795918   9.040248
. . .
8.163265   10         14.56455
8.367347   10         13.62677
8.571429   10         12.59881
8.77551    10         11.18376
8.979592   10         9.945
9.183673   10         8.60904
9.387755   10         7.17372
9.591837   10         5.63688
9.795918   10         3.99636
10         10         2.25