Haptic Modeling for Virtual Manufacturing

Title: Haptic modeling for virtual manufacturing
Author(s): He, Xuejian; 何學儉
Issue Date: 2008
URL: http://hdl.handle.net/10722/51889
Rights: unrestricted

Haptic Modeling for Virtual Manufacturing

by
Xuejian He
Department of Mechanical Engineering
The University of Hong Kong

A thesis submitted in partial fulfillment of the requirements for the Degree of Doctor of Philosophy at The University of Hong Kong

July 2008


Abstract of thesis entitled

HAPTIC MODELING FOR VIRTUAL MANUFACTURING

Submitted by Xuejian He
for the degree of Doctor of Philosophy
at The University of Hong Kong
in July 2008

In this thesis, original haptic modeling methods for virtual manufacturing are presented. The proposed haptic-based virtual manufacturing applications include reverse engineering, virtual machining, and virtual robotic assembly based on haptic virtual tele-operation. Three kinds of haptic rendering methods for virtual manufacturing are presented, namely point-based, surface-based, and volume-based methods. Among the point-based methods, the weakness of proxy-based constraint methods is investigated, and a force-based snap algorithm is proposed to overcome it. For better fidelity and fast haptic rendering, a back-propagation (BP) neural network is proposed to model the cutting forces of the turning operation. In surface-based haptic rendering, both a distance-based method and a simulation-based method are proposed; in the simulation-based method, various dynamic interactions can be simulated through a haptic device. Six-degree-of-freedom (6-DOF) haptic rendering algorithms are proposed and extended into shared virtual environments to improve haptic interaction over a network. In volume-based haptic rendering, the influence of volumetric resolution and erosion on drilling forces is investigated, and the relations between drilling forces and tool parameters under given cutting conditions are also studied.

Based on these haptic rendering algorithms, various manufacturing processes are simulated. Such simulations normally start from a three-dimensional computer model, which can be acquired through reverse engineering. However, models obtained by reverse engineering often contain defects such as holes or gaps. Haptic techniques based on the proposed mesh-based and volume-based methods are introduced to repair such defects so that complete computer models can be reconstructed. Several examples are given to illustrate the effectiveness of the proposed methods, and their strengths and weaknesses are outlined.

Once a three-dimensional computer model is available, the part can be produced by machining processes such as turning, grinding, and drilling. Haptic simulation of such processes can significantly improve the efficiency of product development. The proposed haptic turning system provides an intuitive way not only to model three-dimensional revolved objects but also to train lathe machining and grinding operations. Haptic simulation of drilling based on the volumetric method can be used not only for machining training but also, potentially, for medical procedure training and preoperative planning.

When all the components of a product design are available, assembly processes follow. In this research, a methodology of haptic-based tele-operation for virtual robotic assembly and planning is proposed, providing a convenient tool for offline robot planning. Based on this method, a prototype system for robotic path following, virtual robotic assembly, and robotic path planning is implemented. In the robotic path-following case study, it is shown that users' performance improves significantly when haptics and virtual constraints are added. In the virtual robotic assembly task, the forces and torques provided by a haptic interface improve users' performance. In the robotic path planning system, human intuition and the computational power of computers are combined to develop a semi-automatic path planner that can generate a user-preferred path and improve planning efficiency.
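To illustrate the back-propagation approach summarized above, the sketch below trains a two-layer network with sigmoid hidden units to map turning parameters (feed rate, depth of cut, cutting speed) to a cutting force. This is a minimal illustration only: the training data are synthetic, and all coefficients, ranges, and the hidden-layer size are invented for the sketch rather than taken from the thesis's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data (invented for this sketch): cutting force grows
# roughly with feed rate f (mm/rev), depth of cut d (mm), and speed v (m/min).
X = rng.uniform([0.05, 0.5, 50.0], [0.4, 3.0, 250.0], size=(200, 3))
y = (900.0 * X[:, 0] * X[:, 1] + 0.5 * X[:, 2]).reshape(-1, 1)

# Normalize inputs and outputs to [0, 1] so the sigmoid layer trains well.
x_lo, x_hi = X.min(axis=0), X.max(axis=0)
y_lo, y_hi = y.min(), y.max()
Xn = (X - x_lo) / (x_hi - x_lo)
yn = (y - y_lo) / (y_hi - y_lo)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two-layer network: 3 inputs -> H sigmoid hidden units -> 1 linear output.
H = 8
W1 = rng.normal(scale=0.5, size=(3, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)

lr = 0.5
for epoch in range(5000):
    # Forward pass.
    h = sigmoid(Xn @ W1 + b1)
    out = h @ W2 + b2
    err = out - yn  # gradient of 0.5 * squared error w.r.t. the output
    # Backward pass (error back-propagation through both layers).
    dW2 = h.T @ err / len(Xn)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * h * (1.0 - h)   # sigmoid derivative: h * (1 - h)
    dW1 = Xn.T @ dh / len(Xn)
    db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

def predict_force(f, d, v):
    """Denormalized cutting-force estimate for one cutting condition."""
    xn = (np.array([f, d, v]) - x_lo) / (x_hi - x_lo)
    on = sigmoid(xn @ W1 + b1) @ W2 + b2
    return float(on[0] * (y_hi - y_lo) + y_lo)
```

Once trained, such a network evaluates in microseconds, which is what makes it attractive for the kilohertz update rates haptic rendering requires; the thesis's version is fit to measured cutting data rather than a synthetic formula.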



DECLARATION

I hereby declare that the Ph.D. thesis entitled “Haptic Modeling for Virtual Manufacturing” represents my own work, except where due acknowledgement is made, and that it has not been previously included in a thesis, dissertation or report submitted to this University or to any other institution for a degree, diploma or other qualification.

Signature: __________________ Xuejian He July 2008


ACKNOWLEDGEMENTS

I am deeply indebted to my supervisor, Dr. Y. H. Chen, whose help, stimulating suggestions, and encouragement have supported me throughout my PhD study. I am inspired by his wisdom and his attitude towards work and life, which will be life-long treasures to me.

I want to thank the Department of Mechanical Engineering for granting me permission to conduct this research work and to write this thesis.

I would like to give my sincere appreciation to Dr. Zhengyi Yang, Dr. Lili Lian, Mr. Libo Tang, Mr. Ruihua Ye, and Mr. Yongxiao Fu, who shared their valuable research experience with me without reservation.

I take this opportunity to express my special thanks to my wife, Sanna Chu. Without her support and understanding, this thesis would not have come into being. I am grateful for her love, encouragement, and patience. Great thanks also go to my parents for supporting my pursuit of a PhD.

Finally, I would like to thank all my friends who have helped and accompanied me through the years.


TABLE OF CONTENTS

ABSTRACT OF THESIS ENTITLED ..... I
DECLARATION ..... V
ACKNOWLEDGEMENTS ..... VI
TABLE OF CONTENTS ..... VII
LIST OF FIGURES ..... XII
LIST OF TABLES ..... XIX
LIST OF ABBREVIATIONS ..... XX

CHAPTER 1. INTRODUCTION ..... 1
1.1 BASIC CONCEPTS ..... 1
1.1.1 Definition of haptic, haptics, haptic interface, and haptic rendering ..... 1
1.1.2 Definition of virtual manufacturing ..... 2
1.1.3 The scope of VM ..... 2
1.1.4 The technology of VR in VM ..... 3
1.2 MOTIVATION ..... 5
1.2.1 Haptic technology for product design in reverse engineering (RE) ..... 6
1.2.2 Haptic-based virtual machining ..... 8
1.2.3 Haptic-based virtual tele-operation ..... 9
1.2.4 Haptic-aided robotic path planning ..... 10
1.3 RESEARCH ISSUES ..... 10
1.3.1 Haptic rendering methods for VM ..... 11
1.3.2 6-Degree-of-Freedom (6-DOF) haptic rendering ..... 11
1.3.3 Dynamic object modeling ..... 12
1.3.4 Virtual tele-operation mechanism for robotic off-line programming ..... 13
1.4 OBJECTIVES ..... 13
1.5 THESIS OUTLINE ..... 14

CHAPTER 2. LITERATURE REVIEW ..... 17
2.1 HAPTIC INTERFACES ..... 19
2.1.1 Major functions and features ..... 19
2.1.2 Types of haptic interfaces ..... 20
2.2 HAPTIC RENDERING METHODS ..... 27
2.2.1 Time-critical collision detection for haptic rendering ..... 29
2.2.2 3-DOF haptic rendering ..... 33
2.2.3 6-DOF haptic rendering ..... 35
2.3 HAPTIC-BASED VIRTUAL MANUFACTURING ..... 38
2.4 TACTILE-BASED VIRTUAL MANUFACTURING ..... 43

CHAPTER 3. HAPTIC RENDERING METHODS ..... 46
3.1 POINT-BASED HAPTIC RENDERING ..... 47
3.1.1 Constraints-based methods ..... 47
3.1.2 Spring-damper model ..... 53
3.1.3 Cutting force model for single-point tools ..... 53
3.2 SURFACE-BASED HAPTIC RENDERING ..... 66
3.2.1 Collision detection ..... 67
3.2.2 Distance-based haptic rendering ..... 75
3.2.3 Simulation-based haptic rendering ..... 77
3.3 VOLUME-BASED HAPTIC RENDERING ..... 91
3.3.1 Resolution of volumetric model ..... 92
3.3.2 Collision detection and volumetric interaction ..... 96
3.3.3 The erosion model ..... 97
3.3.4 Force modeling ..... 99
3.3.5 Torque modeling ..... 102
3.4 CONCLUSIONS AND DISCUSSIONS ..... 103

CHAPTER 4. HAPTIC-AIDED REVERSE ENGINEERING ..... 105
4.1 TRIANGULAR MESH-BASED HOLE FILLING ..... 109
4.1.1 The hole-filling process ..... 110
4.1.2 Hole identification ..... 111
4.1.3 Smoothing ..... 114
4.1.4 Stitching ..... 116
4.1.5 Polygon triangulation method ..... 117
4.1.6 Triangle subdivision method ..... 119
4.1.7 Surface sculpting ..... 120
4.1.8 Implementation and case studies ..... 121
4.2 VOLUME-BASED HOLE FILLING ..... 124
4.3 CONCLUSIONS ..... 131

CHAPTER 5. HAPTIC-AIDED VIRTUAL MACHINING ..... 133
5.1 HAPTIC-AIDED VIRTUAL TURNING ..... 134
5.1.1 Tool modeling ..... 137
5.1.2 Workpiece rendering ..... 141
5.1.3 System implementation ..... 145
5.1.4 Discussions and conclusions ..... 146
5.2 HAPTIC-AIDED VIRTUAL DRILLING ..... 148
5.2.1 Part modeling ..... 148
5.2.2 Graphical rendering ..... 152
5.2.3 Implementation ..... 157
5.2.4 A case study of bone drilling simulation ..... 159
5.2.5 Discussion and conclusions ..... 161

CHAPTER 6. HAPTIC-BASED VIRTUAL TELE-OPERATION ..... 162
6.1 MODELING A ROBOT ..... 164
6.1.1 Geometrical modeling ..... 164
6.1.2 Physical modeling ..... 166
6.1.3 Robot kinematics ..... 167
6.2 VIRTUAL TELE-OPERATION ..... 169
6.3 IMPLEMENTATION ..... 173
6.4 HAPTIC-AIDED PATH FOLLOWING WITH VIRTUAL CONSTRAINTS ..... 174
6.5 HAPTIC-AIDED VIRTUAL ASSEMBLY ..... 177
6.6 HAPTIC-AIDED VIRTUAL BONE FRACTURE REDUCTION ..... 181
6.6.1 System overview ..... 185
6.6.2 Fracture modeling ..... 186
6.6.3 Haptic modeling ..... 195
6.6.4 Implementation and results ..... 199
6.6.5 Conclusions and discussions ..... 201
6.7 HAPTIC-AIDED ROBOTIC PATH PLANNING ..... 202
6.7.1 Related work ..... 206
6.7.2 Path planning ..... 208
6.7.3 Conclusions and discussions ..... 220
6.8 CONCLUSIONS AND DISCUSSIONS ..... 221

CHAPTER 7. CONCLUSIONS AND FUTURE WORK ..... 223
7.1 SUMMARY OF CONTRIBUTIONS ..... 223
7.2 FUTURE WORK ..... 226
(1) Improvement of haptic-aided reverse engineering ..... 227
(2) Improvement of haptic-aided virtual machining ..... 227
(3) Improvement of haptic-aided virtual tele-operation ..... 228

APPENDIX A: FORCE AND TORQUE MODELING FOR DRILLING SIMULATION ..... 229
APPENDIX B: EXPLANATION OF PATH SAFETY INDEX ..... 231
LIST OF PUBLICATIONS ..... 233
REFERENCES ..... 235


LIST OF FIGURES

Figure 1-1 The technology of VR in VM. ..... 4
Figure 1-2 The haptic-aided VM. ..... 6
Figure 1-3 Thesis organization. ..... 16
Figure 2-1 Haptics and its related disciplines. ..... 17
Figure 2-2 General structure of human-haptic interaction. ..... 18
Figure 2-3 Haptic applications. ..... 19
Figure 2-4 Force™ 3D Pro (Photograph courtesy of Logitech®). ..... 22
Figure 2-5 PHANTOM® (Photograph courtesy of SensAble Technologies®). ..... 23
Figure 2-6 Haptic devices of Force Dimension® (Photograph courtesy of Force Dimension®). ..... 24
Figure 2-7 Haptic devices of Haption® (Photograph courtesy of Haption®). ..... 24
Figure 2-8 5-DOF Haptic System of Quanser® (Photograph courtesy of Quanser®). ..... 25
Figure 2-9 Haptic devices of MPB® Technologies (Photograph courtesy of MPB® Technologies). ..... 25
Figure 2-10 Novint® Falcon™ (Photograph courtesy of Novint®). ..... 26
Figure 2-11 Exoskeleton haptic devices of Immersion® (Photograph courtesy of Immersion®). ..... 27
Figure 2-12 Local minimum distances. ..... 30
Figure 2-13 Voxmap colliding with point shell (Courtesy of McNeely et al.). ..... 31
Figure 2-14 Architecture of H-COLLIDE (Courtesy of Gregory et al.). ..... 32
Figure 2-15 Sensation-preserving multi-resolution collision detection (Courtesy of Otaduy and Lin). ..... 32
Figure 2-16 God object. ..... 33
Figure 2-17 Virtual proxy. ..... 34
Figure 2-18 6-DOF haptic rendering based on LMD (Courtesy of Johnson et al.). ..... 36
Figure 2-19 "Virtual coupling" for stable 6-DOF haptic rendering (Courtesy of Colgate et al.). ..... 37
Figure 2-20 Haptic sculpting of dynamic surfaces (Image courtesy of Frank Dachille, Qin et al. 1999). ..... 39
Figure 2-21 A virtual clay system (Photograph courtesy of McDonnell, Qin et al. 2001). ..... 40
Figure 2-22 Haptic volume-removing system for reverse engineering (Photograph courtesy of Yang and Chen). ..... 41
Figure 2-23 Haptic-aided virtual prototyping system (Photograph courtesy of Chen and Yang). ..... 41
Figure 2-24 HVCMM (Photograph courtesy of Chen, Wang et al.). ..... 42
Figure 2-25 Haptic-aided functional analysis system (Photograph courtesy of Yang, Lian et al.). ..... 43
Figure 2-26 VADE (Courtesy of Jayaram, Jayaram et al. 1999). ..... 45
Figure 3-1 The principle of point constraint. ..... 48
Figure 3-2 Constraint force models. ..... 49
Figure 3-3 An example of multiple constraints. ..... 49
Figure 3-4 Line and triangle constraints. ..... 50
Figure 3-5 Multiple constraints of force-based method. ..... 52
Figure 3-6 Force model for touchable surfaces. ..... 53
Figure 3-7 Cutting tool model and forces acting on cutting tool. ..... 54
Figure 3-8 Training an ANN. ..... 56
Figure 3-9 Diagram of a neuron. ..... 56
Figure 3-10 Sigmoid transfer function. ..... 56
Figure 3-11 Structure of multiple-layer ANN. ..... 57
Figure 3-12 Structure of the two-layer ANN for cutting force model. ..... 58
Figure 3-13 Mean squared error vs. epoch in ANN training. ..... 63
Figure 3-14 Cutting force vs. feed rate. ..... 63
Figure 3-15 Cutting force vs. cutting depth. ..... 64
Figure 3-16 Cutting force vs. cutting velocity. ..... 64
Figure 3-17 Block diagram of cutting force simulation using BP network. ..... 65
Figure 3-18 Data structure of collision detection. ..... 69
Figure 3-19 Distance computation. ..... 70
Figure 3-20 A virtual environment for testing collision detection. ..... 72
Figure 3-21 Collision response. ..... 74
Figure 3-22 Schematic of direct haptic rendering method. ..... 75
Figure 3-23 Plots of forces and torques recorded during robot manipulation. ..... 76
Figure 3-24 Flow chart of simulation-based haptic rendering. ..... 77
Figure 3-25 Joint coupling. ..... 79
Figure 3-26 Six joints of the haptic interface. ..... 79
Figure 3-27 Calculated motor DAC values for the 6 robot joints by joint coupling method. ..... 81
Figure 3-28 Simulation of time delay of communication. ..... 82
Figure 3-29 Remote virtual coupling. ..... 84
Figure 3-30 Block diagram of RVC. ..... 85
Figure 3-31 Direct proxy rendering. ..... 86
Figure 3-32 Experiments of haptic interactions. ..... 86
Figure 3-33 Plots of forces and torques calculated by direct proxy rendering method during two touches. ..... 88
Figure 3-34 Plots of the force calculated by local virtual coupling method. ..... 89
Figure 3-35 Experiments of haptic interactions through Local Area Network. ..... 89
Figure 3-36 Plots of forces calculated by LVC and RVC. ..... 90
Figure 3-37 Schematics of drilling tool. ..... 92
Figure 3-38 The influence of model resolution on force modeling. ..... 94
Figure 3-39 Schematics of collision detection and force calculation. ..... 96
Figure 3-40 Erosion rate vs. drilling force. ..... 99
Figure 3-41 Plots of force vs. tool radius and feed rate. ..... 101
Figure 3-42 Plots of torque vs. tool radius and feed rate. ..... 103
Figure 4-1 VIVID 700. ..... 106
Figure 4-2 The scanned models with holes and gaps. ..... 107
Figure 4-3 Comparison between the automatic and our haptic-guided hole-filling methods. ..... 108
Figure 4-4 The haptic-aided hole-filling system configuration. ..... 110
Figure 4-5 Flowchart of haptic hole filling. ..... 111
Figure 4-6 Hole boundaries: (a) hole boundaries marked in green colour; (b) "islands and peninsulas" in a complex hole. ..... 112
Figure 4-7 Point selecting method. ..... 116
Figure 4-8 Stitching operation. ..... 117
Figure 4-9 Boundary triangle subdivision. ..... 120
Figure 4-10 Sculpting operation. ..... 121
Figure 4-11 Hole filling for a spine model. ..... 122
Figure 4-12 Hole filling for a jawbone model. ..... 122
Figure 4-13 A fighter model repaired by our system. ..... 123
Figure 4-14 A Snoopy model repaired by our system. ..... 123
Figure 4-15 Flow chart of volume-based hole filling. ..... 124
Figure 4-16 Hole filling using triangulation in FreeForm™. ..... 125
Figure 4-17 Self-intersection in complex hole filling by triangulation. ..... 126
Figure 4-18 Volumetric models with different resolutions. ..... 127
Figure 4-19 A spine model with debris. ..... 127
Figure 4-20 Tools of FreeForm™. ..... 128
Figure 4-21 Models modified and re-designed in FreeForm™. ..... 130
Figure 5-1 The HVTOS system setup. ..... 137
Figure 5-2 Software interface of HVTOS. ..... 138
Figure 5-3 Grinding tool collision model. ..... 140
Figure 5-4 Grinding rendering. ..... 140
Figure 5-5 Revolution surface. ..... 143
Figure 5-6 Control point repositioning. ..... 144
Figure 5-7 Models machined by HVTOS: (a) wine glass; (b) ball; (c) gourd; (d) weight. ..... 146
Figure 5-8 An example of force curve for evaluation. ..... 147
Figure 5-9 Ray-casting for model voxelization. ..... 150
Figure 5-10 Model voxelization method. ..... 152
Figure 5-11 Volume rendering methods. ..... 153
Figure 5-12 Flow chart of volume rendering methods using local Marching Cubes algorithm. ..... 155
Figure 5-13 Schematic of local Marching Cubes algorithm. ..... 156
Figure 5-14 Local volumetric data are modified and the Marching Cubes algorithm is used for efficient graphic rendering of drilling. ..... 157
Figure 5-15 System software structure. ..... 158
Figure 5-16 Flow chart of program. ..... 158
Figure 5-17 Bone drilling simulation setup. ..... 159
Figure 6-1 Schematic of haptic tele-operation system. ..... 163
Figure 6-2 Block diagram of virtual tele-operation system structure. ..... 164
Figure 6-3 Robot geometric modeling. ..... 165
Figure 6-4 Schematic of six joints of robot. ..... 166
Figure 6-5 Schematic of the haptic interface. ..... 170
Figure 6-6 Two methods of virtual tele-operation. ..... 171
Figure 6-7 Software structure. ..... 173
Figure 6-8 Screen shot of haptic-aided path following. ..... 175
Figure 6-9 Path following without guidance of haptics and virtual constraints. ..... 176
Figure 6-10 Path following with haptic guidance. ..... 176
Figure 6-11 Path following with guidance of haptics and virtual constraints. ..... 177
Figure 6-12 Photograph of haptic-aided virtual assembly. ..... 178
Figure 6-13 Screen snapshots of the haptic virtual assembly system. ..... 180
Figure 6-14 Schematic of haptic-aided virtual bone fracture reduction system. ..... 185
Figure 6-15 Block diagram of virtual femoral fracture reduction system. ..... 186
Figure 6-16 3D femur and soft tissue reconstruction from CT data using Mimics®. ..... 188
Figure 6-17 Schematic of femoral fracture types. ..... 189
Figure 6-18 Femoral fracture geometric mimicking using FreeForm™. ..... 191
Figure 6-19 Spatial relationship between fracture fragments. ..... 191
Figure 6-20 Approximation of femoral axis. ..... 193
Figure 6-21 Manual selection of point-point pairs for fracture matching. ..... 194
Figure 6-22 Length-tension curve of muscle. ..... 197
Figure 6-23 Plots of reduction force. ..... 198
Figure 6-24 Photograph of system setup. ..... 200
Figure 6-25 System GUI. ..... 200
Figure 6-26 Robot manipulation based on mouse and keyboard in EASY-ROB™. ..... 204
Figure 6-27 Haptic-aided virtual tele-operation for robot path planning. ..... 205
Figure 6-28 Diagram of the haptic-aided path planning system. ..... 206
Figure 6-29 Schematic of model space and configuration space. ..... 209
Figure 6-30 Schematic of PRM and SBL. ..... 211
Figure 6-31 Screen snapshots of the path generated by SBL. ..... 213
Figure 6-32 The semi-automatic path planning method. ..... 216
Figure 6-33 Schematic of critical configuration selection in path modification. ..... 220
Figure 6-34 An ABB® robot for the haptic tele-operation system. ..... 222


LIST OF TABLES

Table 2-1 Comparison of commercial desktop haptic devices. ................................. 26
Table 3-1 Cutting forces with cutting conditions. Courtesy of (Lin, Lee et al. 2001). 62
Table 3-2 The number of triangles for each object in the virtual environment. ......... 72
Table 3-3 Performances of PQP in scenario #1. ....................................................... 73
Table 3-4 Performances of PQP in scenario #2. ....................................................... 73
Table 3-5 Performances of PQP in scenario #3. ....................................................... 73
Table 4-1 Voxel resolution and its triangle number of a fighter model. .................. 130
Table 4-2 Comparisons of two hole-filling methods............................................... 132
Table 5-1 Computation time with different resolutions. ......................................... 151
Table 6-1 The D-H frame parameters of the virtual robot links. ............................. 168
Table 6-2 A user’s performance in virtual fracture reduction. ................................ 201
Table 6-3 Comparison of SBL and semi-automatic path planning without optimization (P1, P2). ................................................................................... 218
Table 6-4 Comparison of SBL and semi-automatic path planning with optimization (P1). ............................................................................................................ 218


LIST OF ABBREVIATIONS

ANN       Artificial Neural Network
API       Application Programming Interface
BP        Back Propagation
B-Rep     Boundary Representation
BV        Bounding Volume
BVH       Bounding Volume Hierarchy
BVTT      Bounding Volume Test Tree
CAD       Computer Aided Design
CAID      Computer-Aided Industrial Design
CAM       Computer Aided Manufacturing
CAOS      Computer-Aided Orthopedic Surgery
CMM       Coordinate Measuring Machine
COM       Component Object Model
CPU       Central Processing Unit
CSG       Constructive Solid Geometry
CT        Computed Tomography
DAC       Digital to Analog Converter
D-H       Denavit-Hartenberg
DOF       Degree of Freedom
GPU       Graphics Processing Unit
GUI       Graphic User Interface
HD        Haptic Device
HIP       Haptic Interface Point
HL        Haptic Library
HVCMM     Haptic Virtual Coordinate Measuring Machine
HVTOS     Haptic Virtual Turning Operation System
HSVE      Haptic Shared Virtual Environment
Hz        Hertz
IK        Inverse Kinematics
LMD       Local Minimum Distance
LVC       Local Virtual Coupling
MRI       Magnetic Resonance Imaging
MSE       Mean Squared Error
NC        Numerical Control
NURBS     Non-Uniform Rational B-Spline
OBB       Oriented Bounding Box
ODE       Open Dynamics Engine
PC        Personal Computer
PRM       Probabilistic Roadmap
PQP       Proximity Query Package
RE        Reverse Engineering
RVC       Remote Virtual Coupling
RSS       Rectangle Swept Sphere
SBL       Single-query, Bi-directional, Lazy in collision checking
SCP       Surface Contact Point
SME       Small-to-Medium Enterprise
S-RLE     Spatial Run-Length Encoding
STL       Stereolithography
TCP       Tool Center Point
VM        Virtual Manufacturing
VPS       Voxelization and Point Sampling
VR        Virtual Reality
VTK       Visualization Toolkit
VTT       Virtual Technical Trainer

CHAPTER 1. INTRODUCTION

The rapid advancement of computing and virtual reality technologies has stimulated widespread interest in virtual manufacturing research. This thesis is intended to look into the emerging technologies in haptics and their potential applications in virtual manufacturing.

1.1 Basic concepts

1.1.1 Definition of Haptic, haptics, haptic interface, and haptic rendering

Haptic (from the Greek haptesthai, meaning “contact” or “touch”) is the adjective used to describe something relating to or based on the sense of touch (Otaduy and Lin 2006). Haptics refers to the modality of touch and the sensation of shape and texture an observer feels when exploring a virtual object, such as a 3D model of a tool, instrument, or art object (McLaughlin, Sukhatme et al. 2001). A haptic interface is defined as being concerned with the association of gesture to touch and kinesthesia to provide for communication between humans and machines (Hayward, Astley et al. 2004). Haptic rendering is defined as the process of computing and generating forces in response to user interactions with virtual objects (Zilles and Salisbury 1995). Haptics is gaining widespread acceptance as a key part of Virtual Reality (VR) systems, adding the sense of touch to previously visual-only solutions. The use of haptics in virtual manufacturing may improve the efficiency and user friendliness of existing Virtual Manufacturing (VM) systems. Some tasks that were impossible in previous VM systems can also be made feasible using haptics.
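As a minimal illustration of this definition, the sketch below computes a penalty-based contact force for a point probe touching a sphere; the stiffness value and geometry are illustrative assumptions, not parameters taken from this thesis.

```python
import math

def penalty_force(probe, center, radius, stiffness=800.0):
    """Minimal 3-DOF haptic rendering: when the haptic interface point
    penetrates a sphere, return a spring force along the outward surface
    normal, proportional to penetration depth (F = k * d * n)."""
    offset = [p - c for p, c in zip(probe, center)]
    dist = math.sqrt(sum(v * v for v in offset))
    depth = radius - dist
    if depth <= 0.0 or dist == 0.0:   # no contact, or degenerate hit at the center
        return (0.0, 0.0, 0.0)
    return tuple(stiffness * depth * (v / dist) for v in offset)

# Probe 2 mm inside a 50 mm sphere: a 1.6 N restoring force along +x.
f = penalty_force((0.048, 0.0, 0.0), (0.0, 0.0, 0.0), 0.05)
```

In a real system this computation would run inside the device's servo loop; here it only shows the position-in, force-out structure of haptic rendering.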


1.1.2 Definition of virtual manufacturing

Manufacturing systems and processes are being combined with simulation technologies to reduce costs by optimizing the key factors which directly affect a company’s profitability. VM, as one of these technologies, is defined as “an integrated, synthetic manufacturing environment exercised to enhance all levels of decision and control” (Hitchcock, Baker et al. 1994; Lin, Minis et al. 1994). Several important terms in this definition are clarified in the following:

Environment: supports the construction and provides tools, models, equipment, methodologies and organizational principles;

Exercising: constructing and executing specific manufacturing simulations using the environment, which can be composed of real and simulated objects, activities and processes;

Levels: from product concept to disposal, from factory equipment to the enterprise and beyond, from material transformation to knowledge transformation;

Decision: understanding the impact of change (visualize, organize, and identify alternatives).

1.1.3 The scope of VM

VM aims at integrating a number of isolated manufacturing technologies such as Computer Aided Design (CAD), Computer Aided Manufacturing (CAM), and Computer Aided Process Planning (CAPP) using various VR technologies, thereby allowing multiple users to concurrently carry out these functions without the need to be physically close to each other (Shukla, Vazquez et al. 1996). There are three paradigms of VM (Lin, Minis et al. 1994):

Design-centered VM: provides manufacturing information to designers during the design phase. In this case, VM is the use of manufacturing-based simulations to optimize the design of products and processes for a specific manufacturing goal, or the use of simulation processes to evaluate many production scenarios at many levels of fidelity and scope to inform design and production decisions.

Production-centered VM: uses simulation capability to design manufacturing processes with the purpose of allowing inexpensive, fast evaluation of many processing alternatives. From this point of view, VM optimizes manufacturing processes and adds analytical production simulation to other integration and analysis technologies to allow high-confidence validation of new processes and paradigms.

Control-centered VM: is the addition of simulation to control models and actual processes, allowing for seamless simulation for optimization during the actual production cycle.

1.1.4 The technology of VR in VM

VR is a technology that allows a human to interact with a virtual 3D environment and to understand its behavior through sensory feedback in real-time simulation (Sherman and Craig 2003). It holds great potential in manufacturing applications, e.g. product design, modeling, process simulation, manufacturing planning, training, and verification, as shown in Figure 1-1. The application of VR helps to solve problems before a design or process is employed in practical manufacturing, thereby preventing costly mistakes (Mujber, Szecsi et al. 2004).


Figure 1-1 The technology of VR in VM.

VR may play a very significant role in designing a new product. It has been applied to two different applications in design: virtual design and virtual prototyping (Mujber, Szecsi et al. 2004). Virtual design provides a virtual environment for designers in conceptual design. Virtual prototyping is the process of using virtual prototypes, instead of or in combination with physical prototypes, for innovating, testing and evaluating specific characteristics of a candidate design. VR is also a useful method to improve the understanding of plans and to support interdisciplinary discussions.

Virtual reality-based training is an effective method of teaching manufacturing skills and processes to employees. Using cutting-edge VR technologies, training takes place in a realistic, simulated version of the actual facility, complete with the actions, sights, and sounds of the plant floor.

VR is increasingly applied to manufacturing processes such as machining, assembly, and inspection. Virtual machining mainly deals with cutting processes such as turning, milling, drilling, and grinding. It is used to study the factors affecting the quality and machining time of the material removal process, as well as the relative motion between the tool and the work-piece.

Virtual assembly is a key component of virtual manufacturing and is defined as: “the use of computer tools to make or ‘assist with’ assembly-related engineering decisions through analysis, predictive models, visualization, and presentation of data without realization of the product or support processes” (Mujber, Szecsi et al. 2004). In assembly work, VR is mainly used to investigate the assembly processes, the mechanical and physical characteristics of the equipment and tooling, the interrelation among different parts, and the factors affecting quality based on modeling and simulation (Jayaram, Connacher et al. 1997).

Virtual inspection is another example of applying VR to manufacturing, which makes use of VM technology to model and simulate the inspection process. It aims at studying inspection methodologies, collision detection, inspection plans, factors affecting the accuracy of the inspection process, etc.

1.2 Motivation

Because VM has a wide range of applications, it is impossible to investigate every aspect of it in detail. In this thesis, several typical and important applications of VM using VR technologies are proposed, as shown in Figure 1-2. It should be noted that, among the various VR technologies, we mainly focus on haptic techniques, as such techniques are still hot research topics. The motivation of this thesis is explained in the following sections.


Figure 1-2 The haptic-aided VM.

1.2.1 Haptic technology for product design in reverse engineering (RE)

The integration of haptic techniques into conceptual design using CAD systems has been extensively studied, e.g. (Goertz and Thompson 1954; Zilles and Salisbury 1995; Frank Dachille, Qin et al. 1999). A typical commercial product for haptic shape design is FreeFormTM from SensAble Technologies. Nevertheless, the application of haptics to reverse engineering is scarcely reported in the literature. Pioneering work was conducted by Yang Z.Y. and Chen Y.H. (Yang and Chen 2005). In their research, a physical model is digitized by chipping away virtual clay using a haptic device. One of the drawbacks is that the size of the physical object is limited by the workspace of the haptic device. On the other hand, the process might take a longer time compared with the optical methods used in data acquisition for Reverse Engineering (RE).

The models output from RE using optical methods usually have geometric deficiencies, such as holes and gaps. Using traditional computer interfaces, e.g. a 2D mouse and keyboard, to fill holes in the models is a burdensome job. Therefore, many automatic hole-filling methods have been proposed (Davis, Marschner et al. 2002; Wang, Wang et al. 2002; Jun 2005). Due to the complexity of the regions where holes are generated, automatic hole-filling methods normally do not generate satisfactory results in patching the holes. It is also difficult for automatic methods to recover geometric features of models when most of the information about these features is missing. However, in the RE process, a user can easily find these geometric features because there is a real object being reconstructed which can be referred to. When handling complex situations, a user must apply initiative and intuition in the hole-filling process in order to obtain satisfactory models.

In the user-guided hole-filling process, a user needs to interact frequently with the computer model. However, 3D interactive editing and modeling remains a problem for current CAD systems, since the screen and the desktop are only 2D. It is difficult to handle 3D points, lines, polygons, etc. with traditional CAD interfaces. What is more, a user often finds it inherently difficult to understand and perform tasks in a 3D virtual environment, even though we live and act in a 3D world (Herndon, van Dam et al. 1994). Fortunately, haptics provides users with a hand-based mechanism for intuitive interaction with virtual environments towards realistic tactile exploration and manipulation. Using force-feedback controls, designers, artists, as well as non-expert users can feel the model representation and modify the object directly as if in real settings, thus enhancing the understanding of object properties and the overall design. Therefore, haptic techniques are investigated for hole-filling in the 3D model reconstruction process.


1.2.2 Haptic-based virtual machining

When a three-dimensional model is created, manufacturing operations may follow. The commonly used manufacturing operations are turning, milling, drilling, and grinding. These operations are also commonly practiced by surgeons using hand-held tools. Because these operations are labor-intensive and require high skills (Balijepalli and Kesavadas 2003), the training process for a qualified machinist, dentist, or orthopedist normally takes a long time, and the costs of training are very high. Hence, VR technologies are exploited to train users in order to facilitate the training process and reduce its costs. Haptic feedback can recreate the realistic sensations of manipulating tools in the training of machining processes (Crison, Lecuyer et al. 2005). In the training of machining metal parts, the trainees are required to gain knowledge of how to select proper materials, how to calculate cutting speeds and feed rates for various materials, and how to set up the machining sequence. In the training of drilling or milling bone in medical operations, the trainees are required to acquire the skills by which they can remove a tumor from the temporal bone or open a hole in the bone without damaging surrounding vital tissues. It should be noted that the various simulators for training machining based on current VR technologies are far from mature, and they are usually used to help trainees understand the basic principles of machining at the first stages of training courses. Though virtual machining has been studied over the years, previous research was mainly focused on studying the factors affecting the quality and machining time of the material removal process, as well as the relative motion between the tool and the work-piece (Zorriassatine, Wykes et al. 2003; Mujber, Szecsi et al. 2004). In this thesis, we focus on force modeling of haptic-based VM systems such as turning, grinding and drilling simulation.

1.2.3 Haptic-based virtual tele-operation

Off-line teaching has been increasingly adopted in industry for robot programming due to the availability of more powerful hardware, computing methods and high-quality CAD models. In this mode, an operator manipulates a virtual robot through a sequence of end-effector positions in a computer-simulated environment. Then the simulated robot controller, identical to the real one, interpolates these positions and drives the robot to move along the path. At the same time, collisions between the robot and the environment are detected automatically. According to the collision information, the path can be modified manually or semi-automatically to produce a collision-free trajectory.

The biggest advantage of off-line programming is that it does not occupy production equipment, thereby greatly reducing costs. However, in traditional off-line robot programming, manipulating a virtual robot using a keyboard and mouse is a non-trivial job for an operator; both of these input devices are unwieldy. Therefore, in this thesis a virtual tele-operation method based on haptic techniques is proposed in order to provide off-line robot programming with a convenient tool for intuitively manipulating a virtual robot arm. At the same time, the forces and torques generated by a haptic device help the operator to better settle robot configurations in a complex virtual environment. Based on this method, a haptic-aided virtual robotic assembly system and a haptic-aided robotic path planning system are developed, as will be explained in Chapter 6.


1.2.4 Haptic-aided robotic path planning

Robotic path planning has been extensively studied for several decades. It plays a key role in building autonomous or semi-autonomous systems. Though many solutions to automatic path planning have been proposed, very few planners have been applied in industry for robots with many Degrees Of Freedom (DOFs), due to the high complexity and dimensionality of the configuration space (C-space). A haptic-aided path planning system which combines manual and automatic path planning methods is proposed in this thesis. Besides the reasons for introducing haptic techniques into off-line robot programming mentioned in the previous section, another is the fact that it is difficult for an automatic path planning method, e.g. the probabilistic roadmap (PRM), to generate an optimized path; sometimes it even fails to find a collision-free path because it fails to discover critical robot configurations. However, these critical configurations might be plain to an operator. Therefore, the advantage of human intuition is exploited to facilitate the path planning process. The objective of this work is therefore to combine the advantages of human instinct and the powerful computation of computers, and to develop a semi-automatic path planner that can generate a user-preferred path and improve planning efficiency using a haptically-controlled virtual robot.
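For context, the automatic side of such a planner can be sketched as a basic PRM; the 2D point robot, circular obstacles, and all parameter values below are simplifying assumptions (an industrial planner operates in a high-dimensional C-space with full collision checking).

```python
import math, random

def prm_path(start, goal, obstacles, n_samples=200, k=8, seed=1):
    """Minimal probabilistic roadmap (PRM) for a 2D point robot in a unit
    square among circular obstacles: sample collision-free configurations,
    connect each to its k nearest neighbors with collision-checked straight
    segments, then search the roadmap from start to goal with BFS.
    Assumes start and goal are themselves collision-free."""
    rng = random.Random(seed)

    def free(p):
        return all(math.dist(p, c) > r for c, r in obstacles)

    def segment_free(a, b, step=0.02):
        n = max(1, int(math.dist(a, b) / step))
        return all(free((a[0] + (b[0] - a[0]) * t / n,
                         a[1] + (b[1] - a[1]) * t / n)) for t in range(n + 1))

    nodes = [start, goal] + [p for p in
             ((rng.random(), rng.random()) for _ in range(n_samples)) if free(p)]
    adj = {i: [] for i in range(len(nodes))}
    for i, p in enumerate(nodes):
        near = sorted(range(len(nodes)), key=lambda j: math.dist(p, nodes[j]))[1:k + 1]
        for j in near:
            if segment_free(p, nodes[j]):
                adj[i].append(j)
                adj[j].append(i)

    # breadth-first search over the roadmap (node 0 = start, node 1 = goal)
    prev, queue, seen = {0: None}, [0], {0}
    while queue:
        i = queue.pop(0)
        if i == 1:
            path = []
            while i is not None:
                path.append(nodes[i])
                i = prev[i]
            return path[::-1]
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                prev[j] = i
                queue.append(j)
    return None  # the sampled roadmap failed to connect start and goal

# One circular obstacle in the middle of a unit square workspace.
path = prm_path((0.05, 0.05), (0.95, 0.95), [((0.5, 0.5), 0.2)])
```

The failure mode motivating this thesis is visible in the last line of the function: when sampling misses a critical configuration (e.g. a narrow passage), the roadmap stays disconnected and the planner returns no path, whereas an operator could supply that configuration by hand.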

1.3 Research issues

In the course of the research on developing haptic-aided VM systems, several important research issues arise. We propose methods to solve these problems, while some of them need to be studied further. These issues are described in the following subsections.

1.3.1 Haptic rendering methods for VM

In order to achieve efficient haptic rendering, different force models are proposed for different VM operations. In the haptic-aided hole-filling process, forces assist users in stitching and sculpting operations and in 3D point or facet selection; the fidelity requirement is not stringent. However, in the simulation of turning and drilling for training purposes, high-fidelity haptic rendering becomes a critical issue. How to realistically model the forces of turning and drilling operations is still an open research topic because it involves complicated physics, such as fracture mechanics and thermodynamics. What is more, haptic rendering requires an update rate as high as 1000 Hz, meaning that efficient computation methods are indispensable in the simulation. In virtual tele-operation, forces and torques are used to help operators manipulate virtual robot arms. To calculate these collision forces and torques in a complex virtual environment, efficient collision detection and response methods must be utilized. Therefore, different strategies must be adopted to develop haptic rendering methods for the various applications of VM.

1.3.2 6-Degree-of-Freedom (6-DOF) haptic rendering

6-DOF haptic rendering is another challenging issue, which arises from object-object interaction in virtual environments, such as virtual assembly/disassembly in virtual prototyping, and tele-operation. Through a 6-DOF haptic device (e.g. the Phantom® Premium 1.5/6-DOF) a user can feel torques in addition to forces. The synthesis of force and torque is necessary in a behavior-rich virtual environment, from which many applications involving dexterous manipulation of virtual objects can benefit.

Early haptic rendering algorithms were mainly developed for 3-DOF haptic interfaces, which are based on point-object interaction. These methods cannot be directly applied to 6-DOF haptic rendering due to the complexity of collision detection and response. Though several methods of 6-DOF haptic rendering are reported in the literature (McNeely, Puterbaugh et al. 1999; Gregory, Mascarenhas et al. 2000; Nelson, Johnson et al. 2005; Otaduy and Lin 2005), they are usually tested on simple benchmarks. In fact, in the simulation of virtual assembly and planning in a virtual environment, it is common for a user to manipulate a tool probing a large number of dynamic and complex objects. Hence, developing an efficient 6-DOF haptic rendering algorithm for VM requires much more effort from researchers.
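The force-and-torque synthesis mentioned above can be sketched by summing per-contact penalty forces and their moments about the tool handle; the contact data, stiffness, and handle position below are illustrative assumptions, not this thesis's algorithm.

```python
def net_wrench(contacts, handle_pos, stiffness=500.0):
    """Sketch of 6-DOF force/torque synthesis: each contact (point, outward
    unit normal, penetration depth) contributes a penalty force f = k*d*n
    and a torque r x f about the tool handle, so a 6-DOF device can render
    both net force and net torque."""
    force = [0.0, 0.0, 0.0]
    torque = [0.0, 0.0, 0.0]
    for point, normal, depth in contacts:
        f = [stiffness * depth * n for n in normal]
        r = [p - h for p, h in zip(point, handle_pos)]  # lever arm from handle
        force = [a + b for a, b in zip(force, f)]
        torque = [torque[0] + r[1] * f[2] - r[2] * f[1],
                  torque[1] + r[2] * f[0] - r[0] * f[2],
                  torque[2] + r[0] * f[1] - r[1] * f[0]]
    return force, torque

# Single contact 0.1 m along +x from the handle, pushed up (+z) by 1 mm:
# the user feels an upward force plus a pitching torque about the handle.
f, t = net_wrench([((0.1, 0.0, 0.0), (0.0, 0.0, 1.0), 0.001)], (0.0, 0.0, 0.0))
```

The sketch shows why 6-DOF rendering is harder than 3-DOF: the contact list itself must come from object-object collision detection, which is the expensive part at a 1 kHz update rate.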

1.3.3 Dynamic object modeling

The shape of a part can be modified by users in real time through dragging points, or through turning and drilling operations. In the hole-filling process, this can be accomplished easily by allowing the manipulated geometric features to follow the positions of a haptic interface controlled by a user. However, for a triangular mesh-based model, this method seems inefficient, because a user needs to modify every point to form a desired shape, and the resulting surface smoothness cannot be guaranteed. In turning simulation, because the cutting tool is assumed to be a single-point tool and turning is an orthogonal cutting process, revolution models can be generated dynamically with ease. However, in drilling simulation a more general situation is taken into account, in which a user can freely move the drilling tool, resulting in the generation of irregular surfaces. How to display these dynamic and irregular surfaces in real time is a challenging issue.
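The point-dragging modification mentioned above can be sketched with a smooth falloff so that neighbouring vertices follow the dragged one; the Gaussian weight and radius are illustrative choices for the sketch, not the deformation method used in this thesis.

```python
import math

def drag_vertices(vertices, picked, target, radius=0.05):
    """Sketch of point-dragging mesh deformation: move the picked vertex to
    the haptic tool position and let nearby vertices follow with a Gaussian
    falloff, so the edited region stays smooth instead of forming a
    single-vertex spike."""
    px, py, pz = vertices[picked]
    dx, dy, dz = (target[0] - px, target[1] - py, target[2] - pz)
    moved = []
    for v in vertices:
        d = math.dist(v, vertices[picked])
        w = math.exp(-(d * d) / (2.0 * radius * radius))  # 1 at the picked vertex
        moved.append((v[0] + w * dx, v[1] + w * dy, v[2] + w * dz))
    return moved

# Drag the first vertex of a tiny patch up by 10 mm; a close neighbour
# follows partially, while a distant vertex barely moves.
verts = [(0.0, 0.0, 0.0), (0.03, 0.0, 0.0), (0.3, 0.0, 0.0)]
out = drag_vertices(verts, 0, (0.0, 0.0, 0.01))
```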


1.3.4 Virtual tele-operation mechanism for robotic off-line programming

To provide off-line robot programming with a convenient and intuitive tool, a virtual tele-operation mechanism based on haptic techniques, physical robot modeling and robot manipulation is proposed. In developing algorithms for this mechanism, 6-DOF haptic rendering, 3D robot modeling, real-time collision detection and response, real-time calculation of robot forward and inverse kinematics, and motion mapping from haptic space to robotic space are investigated. The difficulty is that these algorithms must meet the requirement of a high refresh rate of at least 30 Hz. Therefore, the efficiency of the algorithms must be considered.
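One building block of this mechanism, forward kinematics, can be computed by chaining Denavit-Hartenberg (D-H) link transforms; the two-link planar arm and its parameters below are illustrative assumptions, not the D-H values of the virtual robot in Table 6-1.

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one link."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(joints, dh_params):
    """Chain one D-H transform per link; returns the 4x4 end-effector pose."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for theta, (d, a, alpha) in zip(joints, dh_params):
        T = mat_mul(T, dh_matrix(theta, d, a, alpha))
    return T

# Planar 2-link arm (link lengths 0.5 m and 0.3 m, all d = alpha = 0),
# joints at +90 and -90 degrees: the elbow folds back toward +x.
T = forward_kinematics([math.pi / 2, -math.pi / 2],
                       [(0.0, 0.5, 0.0), (0.0, 0.3, 0.0)])
x, y = T[0][3], T[1][3]
```

At a 30 Hz (or 1 kHz haptic) refresh rate this chain of small matrix products is cheap; in the full system it is the collision queries around the resulting poses that dominate the time budget.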

1.4 Objectives

In traditional VM systems, forces are involved in the whole VM process, from virtual machining to virtual assembly, but they are not directly presented to a user. As a result, the user lacks a perceptional understanding of the VM processes. Therefore, one of the thesis objectives is to propose effective haptic rendering algorithms that can be used for interaction between a VM system and a user. In VM processes such as engineering design, virtual machining, planning, and assembly, the sense of contact provides extra information about an object or process apart from the normal visual information. In the design stage, constraint forces need to be modeled to help a designer capture points and triangles and deform triangular meshes. In the virtual machining stage, cutting forces including turning forces, grinding forces, and drilling forces need to be modeled, which requires haptic rendering with high fidelity. Hence, how to model these forces with high fidelity becomes an important task in this thesis. In the planning and assembly stage, collision forces can facilitate the correction of a wrong assembly plan, and constraint forces can assist the user in performing tasks. Because object-object interactions are involved in these processes, stable 6-DOF haptic rendering methods need to be proposed. In haptic collaboration over a network, two or more participants collaborate to complete a specific task; therefore, a stable 6-DOF haptic rendering method for networked collaboration is also needed.

In traditional VM systems, a user usually interacts with a VM system through a keyboard, mouse and screen monitor. For example, in off-line robot programming, the traditional interfaces are not intuitive and convenient for 3D interactions. Therefore, the second objective of the thesis is to develop intuitive interfaces for VM based on a haptic device, in order to improve the efficiency and user friendliness of existing VM systems. In this interface, a 6-DOF haptic device is utilized as a control tool for intuitive manipulation of a virtual robot arm for off-line robot path planning and programming. The problem is how to let an operator manipulate a virtual robot through a 6-DOF haptic device as if he/she were manipulating a real robot. To achieve this, physically-based simulation and efficient 6-DOF haptic rendering methods are needed, which can provide useful tools for enhancing simulation realism and improving users’ performance. Another problem arising from this objective is how to build a physical robot model, meaning that the virtual robot can interact not only with an operator but also with its surrounding virtual environment.

1.5 Thesis outline

The rest of the thesis is organized as follows (Figure 1-3):

Chapter 2 will present a comprehensive review of haptics and virtual manufacturing. We mainly focus on applications of haptic techniques in virtual manufacturing.

In Chapter 3, several algorithms for haptic rendering in virtual manufacturing will be explained in detail. These algorithms are categorized into three classes: point-based, surface-based, and volume-based methods.

In Chapter 4, we propose two methodologies for haptic-aided hole filling in RE: triangular-mesh-based and volumetric-mesh-based methods. The first one is presented in detail, including its algorithm, implementation, and examples. For the second one, we mainly show how to use it for hole filling by giving several examples. Comparisons between them are also given in the last part of the chapter.

In Chapter 5, we simulate machining processes such as turning, grinding, and drilling. In turning and grinding simulation, we assume they are orthogonal cutting processes. In contrast, drilling is assumed to be a process in which the drilling tool is freely controlled by an operator. Therefore, we use different approaches for dynamic part simulation and force modeling, as will be explained in the chapter.

In Chapter 6, we will present a methodology for haptic-based virtual tele-operation. Methods of physical robot modeling and robot manipulation will be explained. Based on this haptic-based virtual tele-operation mechanism, we will present its applications to haptic-aided virtual robotic path following, assembly, and path planning. As a special case of virtual assembly, a haptic-aided, robot-assisted virtual bone fracture reduction system will also be presented in detail in the chapter.

Finally, in Chapter 7 we will draw some conclusions and discuss future work.


Figure 1-3 Thesis organization.


CHAPTER 2. LITERATURE REVIEW

The field of haptics is multidisciplinary and borrows many techniques from various areas, including tele-operation, robotics, experimental psychology, computer science, and systems and control (Hayward, Astley et al. 2004), as shown in Figure 2-1. Haptics inherits much from tele-operation. Early tele-operator systems had mechanical linkages between the master and the slave. For example, Goertz and Thompson (Goertz and Thompson 1954) in 1954 developed an electrical servomechanism that received feedback signals from sensors mounted on the slave and applied forces to the master to generate haptic feedback. Haptics distinguishes itself from tele-operation by substituting the slave robot with a simulated system, where the forces are computed using physically-based simulation.

Figure 2-1 Haptics and its related disciplines.


A typical human-haptic interaction system is composed of two subsystems, the human sensorimotor system and a haptic interface (Basdogan, Ho et al. 1997), as shown in Figure 2-2. In the first subsystem, when a human user touches a virtual object in the computer, forces are fed back from the haptic interface. The associated sensory information is conveyed to the brain, leading to the perception of touch. The brain then activates the muscles to drive the hand to move. In the second subsystem, when the human user manipulates the end-effector of the haptic interface device, the device sensors capture its tip position and convey it to the computer, which calculates in real time the torques for the actuators of the haptic interface. The reaction forces are then applied to the user by the actuators, thereby enabling the human user to haptically perceive the virtual objects.

Figure 2-2 General structure of human-haptic interaction.


Technological advancements in haptics have extended its applications to many fields, including medicine, entertainment, education, industry, and the graphic arts, as shown in Figure 2-3 (Basdogan, Ho et al. 1997). In this thesis, we are mainly concerned with haptic applications in industry, which are further categorized into the following application areas: product design, analysis, assembly and path planning, machining simulation, and tele-robotic control.

Figure 2-3 Haptic applications.

In the remainder of this chapter, we will first review the various types of haptic devices, haptic rendering methods, and haptic applications. Then, we will give some detailed examples of haptic applications in virtual manufacturing.

2.1 Haptic interfaces

2.1.1 Major functions and features

Haptic interfaces are mechanical devices composed of sensors and motors or other actuators, allowing users to experience kinesthetic (force) and/or tactile (touch) sensations for the purpose of exchanging information with the human sensory system.

Computer keyboards, mice, and trackballs constitute relatively simple haptic interfaces. However, these interfaces do not convey the touch and feel of objects. The haptic modality exchanges information and energy in two directions, from and toward the user, which is often referred to as the most important feature of the haptic interaction modality (Hayward, Astley et al. 2004). In performing tasks with a haptic interface, a user physically manipulates the interface, which, in turn, conveys tactual sensory information to the user by stimulating his or her tactile and kinesthetic sensory systems. Characteristics of haptic interface devices include (Zilles and Salisbury 1995; Basdogan, Ho et al. 1997):

• symmetric and low back-drive inertia and friction, and minimal constraints on motion, so that users can freely manipulate the device;

• balanced range, resolution, and bandwidth of position sensing and force reflection, so that users can feel proper response forces;

• proper ergonomics that lets users comfortably perform tasks without disturbance or pain.

2.1.2 Types of haptic interfaces

Haptic interface devices can be classified into two types by their intrinsic mechanical behavior: impedance type and admittance type (Zilles and Salisbury 1995). An impedance-type haptic device captures position and sends force, whereas an admittance-type haptic device senses force and sends position. Because an impedance-type haptic device is simpler to design and cheaper to produce, it is commonly used in academic and industrial applications. Admittance-based devices such as HapticMaster (Van der Linde, Lammertse et al. 2002) are generally used for applications requiring high stiffness, large forces, and high force sensitivity in a large workspace.

Another way to categorize haptic devices is to distinguish them by their grounding locations. According to this taxonomy, haptic devices can be categorized into three kinds: tactile devices, ground-based devices, and exoskeleton devices. The human haptic sense is composed of two modalities: the kinesthetic sense (force, motion) and the tactile sense (tact, touch). A tactile device is a man-machine interface that is aimed at truly reproducing tactile parameters, such as shape, texture, and roughness (Benali-Khoudja, Hafez et al. 2004). Many tactile devices have been developed in various fields, such as tele-operation, tele-presence, 3D surface generation, laboratory prototypes studying tactile parameters, games, etc. (Benali-Khoudja, Hafez et al. 2004). In this thesis, we mainly list commercially available haptic devices relevant to the human kinesthetic sense. For more details on tactile interfaces, the reference (Benali-Khoudja, Hafez et al. 2004) is recommended.

Ground-based devices are fixed to the ground, and include force-reflecting joysticks and desktop haptic interfaces. A joystick is a personal computer peripheral or control device consisting of a hand-held stick that pivots about one end and transmits its angle in two or three dimensions. A joystick usually has one or more push-buttons whose state can be read by the computer. For example, ForceTM 3D Pro is a force-feedback joystick from Logitech®, as shown in Figure 2-4. Apart from being applied to games, this device has also been adapted for the rehabilitation of patients with brain injuries (Reinkensmeyer, Painter et al. 2000).
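The impedance/admittance distinction described above can be sketched as two one-step control updates; the stiffness, virtual mass, and timestep values are illustrative assumptions for the sketch.

```python
def impedance_step(measured_pos, rest_pos, stiffness=200.0):
    """Impedance type: read position from the device, send force back
    (here a virtual spring pulling the handle toward rest_pos)."""
    return tuple(stiffness * (r - p) for p, r in zip(measured_pos, rest_pos))

def admittance_step(measured_force, pos, vel, virtual_mass=2.0, dt=0.001):
    """Admittance type: read force from the device, integrate a virtual
    mass, and send a position command back."""
    vel = tuple(v + (f / virtual_mass) * dt for v, f in zip(vel, measured_force))
    pos = tuple(p + v * dt for p, v in zip(pos, vel))
    return pos, vel

# Impedance: a handle displaced 1 cm along +x feels a 2 N force back toward rest.
f = impedance_step((0.01, 0.0, 0.0), (0.0, 0.0, 0.0))
# Admittance: a steady 2 N push along +x slowly accelerates the virtual mass.
p, v = admittance_step((2.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 0.0))
```

The admittance update shows why such devices suit stiff, high-force tasks: the rendered motion is commanded, so arbitrary stiffness amounts to simply refusing to move the handle.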


Figure 2-4 ForceTM 3D Pro (Photograph courtesy of Logitech®).

So far, the most successful commercial desktop haptic devices are the PHANTOM® series produced by SensAble Technologies (Technologies 2004). There are four types: PHANTOM® OMNITM, PHANTOM® DesktopTM, PHANTOM® PremiumTM, and PHANTOM® PremiumTM 6DOF, as shown in Figure 2-5. Force Dimension® provides two types of haptic devices (Dimension): OmegaTM and DeltaTM, as shown in Figure 2-6. These devices are designed with parallel mechanisms and active gravity compensation, which results in higher stiffness and better stability compared to the PHANTOM® products, which are based on serial mechanisms. Haption® (Haption) provides various haptic devices. VirtuoseTM 3D15-25 is a 3D haptic interface as shown in Figure 2-7(a). VirtuoseTM 6D35-45 offers force feedback on all 6 DOFs together with a large workspace as shown in Figure 2-7(b).


(a) PHANTOM® OMNITM.

(c) PHANTOM® PremiumTM.

(b) PHANTOM® DesktopTM.

(d) PHANTOM® Premium 6DOFTM.

Figure 2-5 PHANTOM®. (Photograph courtesy of SensAble Technologies®).


(a) OmegaTM.

(b) DeltaTM.

Figure 2-6 Haptic devices of Force Dimension® (Photograph courtesy of Force Dimension®).

(a) VirtuoseTM 3D15-25.

(b) VirtuoseTM 6D35-45.

Figure 2-7 Haptic devices of Haption® (Photograph courtesy of Haption®).

Quanser® (Quanser) has advanced a redundant-actuator robot into a robust commercial tool, the 5-DOF Haptic Wand System, as shown in Figure 2-8. The haptic interface has 5 DOFs, allowing three translations and two rotations (roll and pitch). This is achieved by using a dual-pantograph arrangement. Cubic3TM (Figure 2-9(a)) is a 3-DOF haptic device from MPB® Technologies, based on a parallel mechanism. Its 6-DOF haptic device, Freedom 6sTM (Figure 2-9(b)), is based on a serial mechanism.

Figure 2-8 5-DOF Haptic System of Quanser® (Photograph courtesy of Quanser®).

(a) Cubic3TM.

(b) Freedom 6STM.

Figure 2-9 Haptic devices of MPB® Technologies (Photograph courtesy of MPB® Technologies).


FalconTM (Novint) is a 3-DOF haptic device from Novint®, as shown in Figure 2-10, targeting the low-end haptic device market.

Figure 2-10 Novint® FalconTM (Photograph courtesy of Novint®).

A comparison of the above commercial desktop haptic devices is given in Table 2-1.

Table 2-1 Comparison of commercial desktop haptic devices.

Body-based exoskeleton haptic devices are worn by the user on the arm or leg. Immersion® (Immersion) produces several hand-worn haptic devices, such as CyberForce® and CyberGrasp™, as shown in Figure 2-11. The CyberForce® system is a force-feedback armature that not only conveys forces to the hand and arm but also provides 6-DOF positional sensing. The CyberGrasp™ device is a lightweight, force-reflecting exoskeleton that adds resistive force feedback to each finger, allowing users to feel the size and shape of virtual 3D objects in a virtual environment.

(a)CyberForce®.

(b) CyberGrasp™.

Figure 2-11 Exoskeleton haptic devices of Immersion® (Photograph courtesy of Immersion®).

2.2 Haptic rendering methods

The principle of haptic rendering seems simple: every millisecond, the computer reads the joint encoders of a haptic device and calculates the position of its stylus. Then it checks this position against the virtual objects being touched by the user. If there is a collision, the computer calculates forces and torques according to some algorithm and sends them to the haptic device, which drives its motors. As a result, the user can touch the virtual objects through the stylus of the haptic device. If there is no collision, no forces are generated, and the user can freely move the virtual tool in the virtual environment.

Haptic rendering requires a high update rate (about 1 kHz). It is adversely affected by a slow update rate, which can be attributed to high computational cost, or to delays induced by network congestion and bandwidth limitations in distributed applications. Therefore, high-fidelity haptic rendering faces several challenging issues, such as latency, stability, and time-critical collision detection (McLaughlin, Sukhatme et al. 2001).

If a user penetrates a virtual object with a probe and there is no immediate force feedback, lag or latency occurs. Mark et al. (Mark, Randolph et al. 1996) proposed methods to overcome latency by improving the haptic update rate. They used an intermediate representation of force through a "plane and probe" method: the position of the plane was updated at 20 kHz, and the force at 1 kHz. Adding force feedback to multi-user environments demands low latency. However, significant and unpredictable delays are common in network communication, inducing instability in haptic rendering. Buttolo et al. (Buttolo, Oboe et al. 1997) proposed a "one-user-at-a-time" architecture for shared haptic virtual environments. A server coordinated the users' actions by allowing only one user to modify an object at a time, thereby tolerating some latency. The haptic rendering was done on a local copy of the virtual environment at each client's station.

Stability is crucial to haptic rendering because instability in a haptic system can produce oscillations that distort the perception of the virtual environment or even hurt the user. High-fidelity haptic rendering generally requires high force-feedback gains, which can result in self-induced oscillation or even instability. Early analysis of haptic rendering stability focused on the problem of rendering stiff virtual walls. Colgate and Schenkel (Colgate, Stanley et al. 1994) formulated passivity conditions for haptic rendering of a virtual wall modeled as a visco-elastic unilateral constraint. They derived a sufficient condition for stability of haptic rendering:

b > KT/2 + B                                                2-1

where b is the inherent damping of the device, K is the stiffness, B is the damping factor, and T is the sampling period. Later, they extended their work to more general virtual environments (Colgate, Stanley et al. 1994), placing a multidimensional visco-elastic "virtual coupling" between the virtual environment and the haptic interface. The stability of the system is then guaranteed as long as the virtual coupling is passive.

Collision detection is usually tightly integrated into the haptic rendering process. Therefore, efficient real-time collision detection methods are needed to avoid computational latency. Typical research on this issue is introduced in the following section. After that, the review of haptic rendering algorithms is grouped into two classes: 3-DOF haptic rendering and 6-DOF haptic rendering.
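Condition 2-1 and the virtual wall it applies to can be sketched in a few lines of code; the numeric gains in the usage note are illustrative values, not figures from the cited work.

```python
def wall_force(x, v, K, B):
    """Visco-elastic virtual wall: F = -K*x - B*v while the probe
    penetrates the wall (x > 0), zero in free space."""
    return -K * x - B * v if x > 0.0 else 0.0

def is_passive(b, K, B, T):
    """Sufficient stability condition of Eq. 2-1: b > K*T/2 + B,
    where b is the device's inherent damping and T the sampling period."""
    return b > K * T / 2.0 + B
```

For example, a device with inherent damping b = 3 N·s/m sampled at 1 kHz (T = 0.001 s) satisfies the bound for a wall with K = 4000 N/m and B = 0.5 N·s/m, but not for K = 8000 N/m.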

2.2.1 Time-critical collision detection for haptic rendering

Collision detection is extensively studied in computer graphics. Here, only methods related to haptic rendering are discussed.

Johnson et al. (Nelson, Johnson et al. 2005) proposed local minimum distances (LMDs) for 6-DOF haptic rendering of complex polygonal objects, as shown in Figure 2-12. Spatialized normal cone hierarchies were used to encapsulate the position and spread of surface normals over a model. The algorithm uses the surface normal information to find portions of each model that point towards each other, each such pair representing a local minimum distance. Based on this algorithm, repulsive forces and torques can be calculated to maintain a collision-free status between a controlled virtual tool and the virtual objects.

Figure 2-12 Local minimum distances.

McNeely et al. (McNeely, Puterbaugh et al. 1999) proposed a combination of voxelization and point sampling (VPS) for solving the collision detection problem. They created a voxel-based scene and a point-sampled tool to explore the virtual environment in order to accelerate collision detection, as shown in Figure 2-13. A generalization of the octree was used to improve voxel memory efficiency. It was reported that this algorithm could be used directly in a haptic loop that reliably sustained a 1 kHz haptic refresh rate. The algorithm was designed for 6-DOF haptic rendering, allowing manipulation of a modestly complex rigid object within an arbitrarily complex environment of static rigid objects.


Figure 2-13 Voxmap colliding with point shell (Courtesy of McNeely et al.).
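A minimal sketch of the voxmap/point-shell test: the static environment is a set of occupied voxels and the tool a handful of surface points, each carrying an inward normal along which its force contribution acts. The unit voxel size, the set representation, and the constant per-point force are simplifying assumptions for illustration, not details of McNeely et al.'s implementation.

```python
import math

def vps_force(voxmap, point_shell, k=1.0):
    """Accumulate a force over all point-shell points lying in occupied voxels.
    voxmap: set of integer (i, j, k) voxel coordinates (voxel size = 1).
    point_shell: list of (point, inward_normal) pairs sampled on the tool surface."""
    F = [0.0, 0.0, 0.0]
    for p, n in point_shell:
        voxel = tuple(int(math.floor(c)) for c in p)
        if voxel in voxmap:              # this sample point is in contact
            for i in range(3):
                F[i] += k * n[i]         # push the tool out along the point normal
    return tuple(F)
```

Because the voxel lookup is a constant-time hash query per shell point, the cost of one collision pass is bounded by the shell size, which is what makes a guaranteed 1 kHz budget feasible.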

Gregory et al. (Gregory, Mascarenhas et al. 2000) proposed a collision-detection framework called H-COLLIDE for haptic interaction with polygonal models, as shown in Figure 2-14. They pre-computed a hybrid hierarchical representation for a model, which consisted of uniform grids and trees of tight-fitting oriented bounding boxes (OBB-Trees). Frame-to-frame coherence was also exploited for fast proximity queries. They showed through several experiments that their algorithms could perform collision detection at rates faster than 1 kHz even for a model with 79 k triangles.

Otaduy and Lin (Otaduy and Lin 2005) presented a sensation-preserving multi-resolution collision detection method for complex polyhedral objects to ensure a high update rate of haptic rendering. As can be seen from Figure 2-15, a lower resolution (shown in blue and green) is selected adaptively for each contact location, while the finest resolution is displayed in wire frame. In their approach, they constructed a multi-resolution hierarchy using "filtered edge collapse", which smoothed away high-frequency detail in low-resolution approximations while respecting the convexity constraints imposed by collision detection. They claimed that their algorithms could achieve up to two orders of magnitude performance improvement with little degradation in the haptic perception of contacts compared to existing collision detection algorithms.

Figure 2-14 Architecture of H-COLLIDE (Courtesy of Gregory et al.).

Figure 2-15 Sensation-preserving multi-resolution collision detection (Courtesy of Otaduy and Lin).


2.2.2 3-DOF haptic rendering

3-DOF haptic rendering methods can be categorized into two types: penalty-based methods and constraint-based methods. Here, only the second type is reviewed, because constraint-based methods are more widely used, while penalty-based methods suffer from visual distortion and haptic instability. Constraint-based 3-DOF haptic rendering methods calculate forces as a function of the distance between the probe controlled by the haptic interface and a contact point constrained to the surface of the object being haptically displayed.

Zilles and Salisbury (Zilles and Salisbury 1995) proposed a method called the "god object" to calculate constraint points for 3-DOF haptic rendering, as shown in Figure 2-16. Using Lagrange multipliers, they computed these constraint points based on the position of the haptic interface point (HIP) and a set of local constraints. They defined a cost function based on the distance between the probe and the contact point, together with penalty terms. This cost function was minimized to obtain the constraint point.

Figure 2-16 God object.
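For a single planar constraint, the constrained minimization has a closed form: the god object is the orthogonal projection of the HIP onto the plane. The sketch below covers only this one-constraint case (the multi-constraint case needs the full Lagrange-multiplier solve) and assumes a unit-length plane normal.

```python
def god_object_on_plane(hip, plane_point, normal):
    """Minimize |proxy - HIP|^2 subject to one plane constraint:
    if the HIP has sunk below the plane, project it back onto the surface."""
    # signed distance of the HIP above the plane (normal must be unit length)
    d = sum((h - p) * n for h, p, n in zip(hip, plane_point, normal))
    if d >= 0.0:
        return hip                      # outside the object: unconstrained
    return tuple(h - d * n for h, n in zip(hip, normal))
```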

Another similar method, called the "virtual proxy", was proposed by Ruspini et al. (Ruspini, Kolarov et al. 1997). They modeled the contact point as a sphere of small radius instead of a point. First, they built configuration-space obstacles (C-obstacles) by computing an offset surface of the object at a distance equal to the radius of the sphere, as shown in Figure 2-17. Then, they found the virtual proxy using an iterative search method. In each search loop, the method tried to find a sub-goal based on the same distance minimization as for the god object. Collisions between the path of the HIP and the C-obstacles were checked. If there was a collision, the intersected plane was added as an active constraint and the intersection point was used as the current sub-goal. The method then tried to find a new sub-goal using the active constraint planes and the Lagrange-multiplier minimization. Otherwise, if there was no collision, meaning the sub-goal was in free space, it dropped the constraints and set the HIP as the new sub-goal. The iterative process ended when the virtual proxy became stable.

Figure 2-17 Virtual proxy.


2.2.3 6-DOF haptic rendering

6-DOF haptic rendering methods can be classified into two categories based on their computational pipelines: direct rendering methods and simulation-based methods (Otaduy and Lin 2006).

In a direct rendering method, the positions and orientations of the haptic interface are directly assigned to the manipulated virtual tool. Collision between the tool and its surrounding virtual environment is constantly checked. Forces and torques caused by collision are typically calculated from the separation or penetration depth using penalty-based methods. Gregory et al. (Gregory, Mascarenhas et al. 2000) proposed a 6-DOF haptic rendering method that exploited a combination of incremental techniques, geometric locality, temporal coherence, and predictive methods to speed up the computation of object-object contacts. The resulting contact information was used to calculate restoring forces and torques from the penetration depth, using penalty methods based on Hooke's law. The penetration depth was computed by extending a closest-feature algorithm. Johnson et al. (Nelson, Johnson et al. 2005) proposed local minimum distances (LMDs) for 6-DOF haptic rendering of complex polygonal models. They computed the LMDs closer than a cutoff distance between the virtual tool controlled by the haptic interface and the rest of the models in the scene. Each LMD was considered a virtual spring with a rest length equal to the cutoff distance, as shown in Figure 2-18. Repulsive forces and torques were then calculated by summing these spring forces.

The main advantage of direct rendering methods is that there is no need to simulate rigid-body dynamics, which is computationally expensive. However, penetration depth may be visually perceptible and system instability can arise (Otaduy and Lin 2006).

In simulation-based 6-DOF haptic rendering methods, forces and torques are computed using rigid-body simulation. In the simulation, the forces a user exerts on a virtual tool are combined with collision response forces to produce the resulting motion, which is, in turn, used for haptic feedback.

Figure 2-18 6-DOF haptic rendering based on LMD. (Courtesy of Johnson et al.)
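The LMD spring model can be sketched as follows; the tuple layout of an LMD and the gain k are assumptions for illustration, not Johnson et al.'s data structures.

```python
def lmd_wrench(lmds, tool_center, cutoff, k=200.0):
    """Sum spring forces and torques over local minimum distances (LMDs).
    Each LMD is (point_on_tool, unit_direction, distance); its virtual spring
    has rest length equal to the cutoff distance, so only LMDs shorter than
    the cutoff contribute a repulsive wrench."""
    F = [0.0, 0.0, 0.0]
    T = [0.0, 0.0, 0.0]
    for p, n, d in lmds:
        if d >= cutoff:
            continue                            # spring at rest, no force
        f = [k * (cutoff - d) * c for c in n]   # repulsive spring force
        r = [pc - cc for pc, cc in zip(p, tool_center)]
        T[0] += r[1] * f[2] - r[2] * f[1]       # torque tau = r x f
        T[1] += r[2] * f[0] - r[0] * f[2]
        T[2] += r[0] * f[1] - r[1] * f[0]
        for i in range(3):
            F[i] += f[i]
    return F, T
```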

Colgate et al. (Colgate, Stanley et al. 1994) proposed a method for stable haptic rendering, called "virtual coupling", which can be used in 6-DOF haptic applications. In their method, the haptic interface and a virtual tool are coupled with a visco-elastic model, as shown in Figure 2-19. Based on this model, the coupling forces and torques are used both for haptic feedback and for the rigid-body simulation.


Figure 2-19 “Virtual coupling” for stable 6-DOF haptic rendering. (Courtesy of Colgate et al.)
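A translation-only sketch of the visco-elastic coupling (the full scheme also couples orientation with a torsional spring-damper); the gains are illustrative.

```python
def coupling_force(x_dev, x_tool, v_dev, v_tool, k=500.0, b=5.0):
    """Visco-elastic virtual coupling between the haptic device position and
    the simulated tool: the same force drives the rigid-body simulation of
    the tool and, negated, is displayed to the user."""
    return tuple(k * (xd - xt) + b * (vd - vt)
                 for xd, xt, vd, vt in zip(x_dev, x_tool, v_dev, v_tool))
```

Because the user only ever feels this spring-damper, the displayed impedance is bounded by the coupling gains, which is what makes passivity of the overall system easier to guarantee.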

McNeely et al. (McNeely, Puterbaugh et al. 1999) combined the "virtual coupling" concept and the constraint-based method for 6-DOF haptic rendering in virtual assembly and maintenance planning applications. In their method, they utilized pre-contact braking forces to reduce the contact velocity of the virtual tool, thereby preventing deep penetrations. They averaged the effects of the different contact points to limit the total stiffness and increase system stability. Their method was integrated into a commercial product of Boeing®, VPSTM.

There are two main advantages of constraint-based methods compared with direct rendering methods: higher system stability and smaller object interpenetration. The main drawback of constraint-based methods is that force filtering effects can be noticeably perceived by the user. In order to reduce the filtering effect, Otaduy and Lin (Otaduy and Lin 2005) proposed a method using implicit integration in the rigid-body simulation of the virtual tool.


2.3 Haptic-based virtual manufacturing

Virtual manufacturing (VM) technology has helped manufacturers to improve productivity by reducing manufacturing cost and design-to-market cycle time. Many research studies on VM systems have been reported. Iwata et al. (Iwata, Onosato et al. 1995) proposed a general modeling and simulation architecture for a VM system. Ebrahimi and Whalley (Ebrahimi and Whalley 1998) developed a cutting force prediction model for simulating machining conditions in VM. Fang et al. (Fang, Luo et al. 1998) proposed a VM laboratory for knowledge learning and skills training. In this section, we mainly focus on reviewing haptic-based VM applications.

Dachille et al. (Frank Dachille, Qin et al. 1999) presented haptic-aided manipulation of B-spline surfaces, as shown in Figure 2-20. In their system, point, normal, and curvature constraints could be specified and modified naturally by means of a haptic interface. They formulated a dual representation of B-spline surfaces in both physical and mathematical space. The physical surface, based on a mass-spring model, is mathematically constrained by the B-spline surface during the sculpting procedure. It was claimed that the integration of haptics with traditional geometric modeling would facilitate human-computer interaction and shorten the time-consuming design cycle.


Figure 2-20 Haptic sculpting of dynamic surfaces. (Image courtesy of Frank Dachille, Qin et al. 1999)

McDonnell and Qin et al. (McDonnell, Qin et al. 2001) proposed an interactive sculpting system based on subdivision solids and physics-based modeling, as shown in Figure 2-21. Physical attributes of the dynamic subdivision solid could be assigned to both the boundary and the interior. The dynamic subdivision solid was formulated by unifying its geometry with physics-based modeling principles. They claimed that within their sculpting environment, the virtual clay responded to forces in an intuitive way and that the force feedback significantly enhanced the sense of realism.


Figure 2-21 A virtual clay system. (Photograph courtesy of McDonnell, Qin et al. 2001)

The research group led by Dr. Y.H. Chen has conducted extensive work on haptic-based VM. They proposed a reverse engineering methodology based on haptic volume removing (Yang and Chen 2005). In their method, a physical object was first buried in a piece of virtual clay. The digitization of the physical object was then achieved by simply removing the virtual clay using a haptic device, as shown in Figure 2-22. In (Chen and Yang 2007), they presented an integrated product development environment based on haptic modeling using spatial run-length encoding (S-RLE). In the virtual product design environment, digital product models could be virtually prototyped and analyzed by a user, as shown in Figure 2-23.


Figure 2-22 Haptic volume-removing system for reverse engineering. (Photograph courtesy of Yang and Chen)

Figure 2-23 Haptic-aided virtual prototyping system. (Photograph courtesy of Chen and Yang)

In (Chen, Wang et al. 2004), they presented a novel coordinate measuring machine (CMM) inspection path planning environment, called the haptic virtual coordinate measuring machine (HVCMM). To generate the inspection path, teach programming was performed by pointing a probe at the 3D CAD model of a part using a haptic device, as shown in Figure 2-24. It was reported that the HVCMM made it much easier to generate a collision-free probe path than other off-line inspection planning methods.

Figure 2-24 HVCMM. (Photograph courtesy of Chen, Wang et al.)

In (Yang and Chen 2005), they proposed a methodology of haptic-aided synthesized shape modeling and haptic function evaluation of product design for multi-material parts. For quick design verification, they integrated function evaluation into shape design within the same haptic environment, where a load could be applied to a part through a haptic device controlled by the user, as shown in Figure 2-25. The reaction of the part, in terms of deformation, could be both visually and haptically perceived by the user. Zhu et al. (Zhuozhi, Shengyi et al. 1998) proposed a technique for 5-axis pencil-cut planning. In their method, they utilized a 5-DOF haptic interface for tool orientation determination and tool collision avoidance based on a dexel modeling method. A two-phase approach was proposed for haptic rendering.


Figure 2-25 Haptic-aided functional analysis system. (Photograph courtesy of Yang, Lian et al.)

2.4 Tactile-based virtual manufacturing

In addition to haptic interfaces, tactile interfaces are also commonly used in virtual manufacturing, especially in simulation of assembly and disassembly. In a simulation environment, a user can analyze and validate assembly and disassembly processes and sequences with tactile feedback.

Kuehne and Oliver proposed a system called Inventor Virtual Assembly (IVY) (Kuehne and Oliver 1995), which allowed a user to import data from a CAD tool for analysis and validation of predefined assembly sequences. It provided multi-sensory feedback from a virtual environment, including a head-mounted display and a Cyberglove® that allowed the user to grab and move objects. IVY lacked collision detection between virtual objects, modeling of inertia and gravity, and force feedback.

Pere et al. developed a virtual environment (Pere, Langrana et al. 1996) for validating mechanical assemblies using the Rutgers Master II glove, which provided force feedback to four fingertips. The major limitation of this environment is that force feedback is not provided to the hand as a whole. For instance, if a grasped object passes through another object, its position is simply reset to the previous position and the user's hand is free to move.

Antonishek et al. (Antonishek, Egts et al. 1998) presented a virtual environment that used tracked gloves, shutter glasses, and two-handed gestures to interact with and verify product assemblies on a virtual workbench. Minimal collision detection was implemented using simplified bounding-box intersection testing to determine whether the user's hand was touching objects. It was claimed that haptic feedback would add to the realism of an assembly simulation.

Jayaram et al. (Jayaram, Jayaram et al. 1999) developed the Virtual Assembly Design Environment (VADE) (Figure 2-26) to allow engineers to evaluate, analyze, and plan the assembly of mechanical systems. VADE provided tactile feedback instead of kinesthetic force feedback and supported one-handed and two-handed operation. One of the two hands could be dexterous with a glove device, allowing a user to realistically grasp objects. The non-dexterous hand was used to grab and manipulate the base sub-assembly.

Gomes de Sá and Zachmann proposed a constraint-based environment (Gomes de and Zachmann 1999) for accurately simulating assembly and maintenance. From a user survey of the virtual environment, it was reported that voice input was preferred over 3D menus and the keyboard as the method for giving commands to the simulation.

For the purposes of assembly, vibro-tactile feedback alone was considered an unnatural feeling. It was noted that force feedback would have been much more desirable, without which it was almost impossible to accomplish some assembly tasks.

Figure 2-26 VADE. (Courtesy of Jayaram, Jayaram et al. 1999)

In all haptic-based VM applications, haptic rendering plays a very important part. The next chapter describes common haptic rendering methods and the methods proposed in this research.


CHAPTER 3. HAPTIC RENDERING METHODS

In virtual manufacturing, forces are involved throughout the whole process, including design, virtual machining, planning, and assembly. In the design stage, forces can be exploited to assist designers in product design. A typical example is the commercial 3D modeling product of SensAble Technologies®, FreeFormTM (Technologies 2004). In our research, forces are utilized to help designers capture points and triangles and deform triangular meshes. In the virtual machining stage, modeling of cutting forces plays an important role in studies of machining error and tool life. In this thesis, cutting forces including turning forces, grinding forces, and drilling forces are modeled. In the planning and assembly stage, forces caused by collision between components can facilitate both the performance of the job and the evaluation of the assembly plan.

Based on the interaction types between a tool and a part, contact force modeling methods are categorized into the following classes: point-based, surface-based and volume-based haptic rendering methods. In the design stage, the tool is assumed to be a point manipulated by a designer. The tool can be used to touch a mesh, capture a point, and pull or push it towards a desired position. It can also be applied to mesh deformation, which is accomplished by manipulating control points. Hence, the point-based haptic rendering method is mainly used in the design stage. In the virtual machining stage, different haptic rendering methods are proposed for modeling turning and drilling forces. In turning simulation, the process is assumed to be a single-point tool operation; therefore, the cutting forces can be modeled by the point-based haptic rendering method. In drilling simulation, the hand-held drilling tool is modeled by a sphere and the part being drilled is modeled by a volume-based method. In view of the interactions between a sphere and a volumetric model, the drilling forces are mainly modeled by a volume-based haptic rendering method. In the planning and assembly stage, the haptic interface is always utilized to manipulate a virtual object or a robot arm to interact with the virtual environment. This process always involves object-object interactions; therefore, the surface-based haptic rendering method is mainly adopted in this stage.

3.1 Point-based haptic rendering

3.1.1 Constraint-based methods

3.1.1.1 Proxy-based constraints

Due to the lack of visual depth cues when viewing a 3D scene on a 2D display, it is challenging for a user to quickly locate an object in the 3D scene, even with the aid of a 6-DOF haptic interface. To overcome this limitation, a snap constraint method borrowed from the traditional 2D-mouse ray picking approach is extended by means of haptic techniques (Technologies 2004). In this approach, the haptic device can be snapped to a 3D object (point, line or surface) in the direction from the haptic interface position (HIP) to that object. Instead of accurately pinpointing a 3D object, the user can place the HIP approximately around that object; if their distance is less than a snap distance, the proxy of the haptic interface is quickly stuck to that object by forces proportional to the distance between the HIP and the proxy. Once their distance exceeds the snap distance, the proxy is freed and the constraints become inactive. Hence, the snap constraint method is very effective, especially when attempting to select points in a 3D scene with few visual depth cues.


A simple example of a snap constraint is a point constraint, as shown in Figure 3-1. When a user moves the haptic interface towards a constraint point and the HIP is outside the snap field, depicted by a circle in Figure 3-1(a), the proxy position is the same as that of the HIP. Once the HIP is within the snap distance (ls), the proxy is confined to the position of the point, as shown in Figure 3-1(b).

Figure 3-1 The principle of point constraint.
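The point-constraint behaviour of Figure 3-1 can be sketched directly; the values of ls and k are illustrative. Note the discontinuity this produces: just inside the field the force magnitude is already close to k·ls, which is the "force leap" that the force-based constraints of Section 3.1.1.2 are designed to remove.

```python
import math

def point_snap(hip, point, ls=0.005, k=400.0):
    """Proxy-based point snap: inside the snap field the proxy is pinned to
    the constraint point and a spring force pulls the HIP towards it."""
    if math.dist(hip, point) < ls:          # HIP inside the snap field
        force = tuple(k * (p - h) for p, h in zip(point, hip))
        return point, force
    return hip, (0.0, 0.0, 0.0)             # free motion: proxy follows the HIP
```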

The snap forces are calculated according to the distance between the proxy and the HIP, as shown in Figure 3-2(a). As a result, at the moment the user moves the HIP across the boundary of the snap field, a sudden force is applied to the user through the haptic interface. This quick change of force magnitude can cause damage if the snap distance and force stiffness are specified improperly. On the other hand, the force direction can also change rapidly when a user selects a constraint point among many points with overlapping constraints. For example, as shown in Figure 3-3, there are two points, P1 and P2, whose constraint fields overlap, and P1 is the point that the user wants to select. During the selection procedure, the proxy is first stuck at the position of P2, as shown in Figure 3-3(a). The user must then overcome the snap force (F) to move the HIP towards P1. When the HIP falls into the field of P1, the proxy is suddenly transferred from P2 to P1, causing a quick change of snap force direction, as shown in Figure 3-3(b).


field of P1, then the proxy will be suddenly transferred from P2 to P1, causing the quick change of snap force direction as shown in Figure 3-3(b).

Figure 3-2 Constraint force models.

Figure 3-3 An example of multiple constraints.


A similar constraint method can be applied to 3D lines and triangles, as shown in Figure 3-4. If the HIP falls into the constraint field, the proxy is projected onto the line or triangle. As a result, the proxy is "stuck" onto the line or triangle and the user feels snap forces proportional to the distance between the HIP and the proxy.

Figure 3-4 Line and triangle constraints.

3.1.1.2 Force-based constraints

One solution to these problems of the proxy-based method, proposed in this thesis, is to replace the discontinuous force function with continuous force functions, as shown in Figure 3-2(b), (c) and (d). In Figure 3-2(b), the shape of the force function is an isosceles triangle instead of the right-angled triangle of Figure 3-2(a). A quadratic function and a Gaussian function are used for calculating the snap forces shown in Figure 3-2(c) and (d), respectively. Because these two force models make the snap force change smoothly, they are recommended for haptic-based 3D object selection. The force functions given by the following equations avoid a "force leap" when the HIP crosses the borderline of the snap field.

F(x) = k·x,                            0 < x < 0.5ls
       −k·(x − ls),                    0.5ls ≤ x < ls        3-1
       0,                              else

F(x) = −k·x·(x − ls),                  0 < x < ls            3-2
       0,                              else

F(x) = k·ls·e^(−(3(x − 0.5ls)/ls)²),   0 < x < ls            3-3
       0,                              else

where k is the stiffness, ls is the snap distance, and x is the distance between the HIP and the position of the object being selected.
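The three continuous force profiles of Eqs. 3-1 to 3-3 translate directly into code; the test values of k and ls used below are illustrative, and the Gaussian exponent follows the dimensionless reading of Eq. 3-3.

```python
import math

def snap_triangle(x, ls, k):
    """Eq. 3-1: isosceles-triangle force profile."""
    if 0.0 < x < 0.5 * ls:
        return k * x
    if 0.5 * ls <= x < ls:
        return -k * (x - ls)
    return 0.0

def snap_quadratic(x, ls, k):
    """Eq. 3-2: quadratic profile, zero at both x = 0 and x = ls."""
    return -k * x * (x - ls) if 0.0 < x < ls else 0.0

def snap_gaussian(x, ls, k):
    """Eq. 3-3: Gaussian profile peaking at x = 0.5*ls."""
    return k * ls * math.exp(-(3.0 * (x - 0.5 * ls) / ls) ** 2) if 0.0 < x < ls else 0.0
```

The triangle and quadratic profiles vanish exactly at x = ls; the Gaussian is merely small there (e^−2.25, about a tenth of the peak), so all three avoid the abrupt force leap of the profile in Figure 3-2(a).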

3.1.1.3 Multiple Constraints

For multiple constraints, there are three scenarios in proxy-based methods. In the first scenario, multiple constraints are applied and the constraints do not overlap. In this case, the proxy is constrained to the object closest to the current proxy. Once the proxy is constrained to that object, it will not become constrained to any other object until the distance between the object and the HIP becomes greater than the snap distance.

In the second scenario, the constraints overlap and are of different dimensionality. The proxy is first constrained to the object of higher dimension.


Then, if the proxy is within the snap distance of the lower-dimension constraint, it is further constrained to that object. For example, the user can move freely along a constrained line and then be further snapped to a point. This approach provides an efficient way to locate a point in a 3D environment.

In the third scenario, the overlapping constraints are of the same dimensionality. The proxy is constrained to whichever object is closest to the current proxy. If two constraints are at equal distance from the proxy, the proxy is constrained to whichever is closest to the HIP, allowing a user to move from one constraint to another. Therefore, composite constraints can be applied, which feel continuous to the user, such as surfaces made up of connected triangles.

In the force-based methods, handling multiple constraints becomes simpler. For example, as shown in Figure 3-5, the constraint forces F1 and F2 are calculated according to the distances between the HIP and the constraint points P1 and P2. The force vectors are then added to generate the resultant constraint force F.

Figure 3-5 Multiple constraints of force-based method.


3.1.2 Spring-damper model

The proxy (also known as the “god-object”) is a point which closely follows the HIP. If the HIP does not penetrate the surface, as shown in Figure 3-6(a), the HIP and the proxy are superposed. If it penetrates the surface, as shown in Figure 3-6(b) and (c), the position of the proxy is constrained to the outside of the surfaces of all touchable triangles. The force sent to the haptic device is calculated by stretching a virtual spring-damper:

F = k (Pp − Ph) − ζ V

3-4

where k is the stiffness constant, Ph is the HIP, Pp is the position of proxy, ζ is the damping constant and V is the velocity of end-effector.
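Equation 3-4 amounts to one line per axis. The following hypothetical helper (not code from the thesis) uses plain 3-element lists as vectors:

```python
def spring_damper_force(k, zeta, proxy, hip, vel):
    # Eq. 3-4: F = k (Pp - Ph) - zeta * V, computed per axis
    return [k * (pp - ph) - zeta * v for pp, ph, v in zip(proxy, hip, vel)]
```

The spring term pulls the HIP back toward the proxy on the surface; the damping term opposes the end-effector velocity and stabilizes the rendering.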

Figure 3-6 Force model for touchable surfaces.

3.1.3 Cutting force model for single-point tools

The lathe cutting operation can be characterized by a single-point tool. Figure 3-7 shows an example of a typical single-point tool and its force model (Gorczyca 1987), where Fc is the cutting force, Ff is the feed force and Fr is the radial force.


Figure 3-7 Cutting tool model and forces acting on cutting tool.

The radial force Fr is calculated according to the spring-damper model mentioned above. Kosilova et al. proposed a cutting force model (Kosilova et al. 1985):

Fc = K Cp d^xP f^yP v^nP

3-5

where K is a coefficient dependent on the machined material, cutting angles, and other tool parameters; CP is a coefficient dependent on the machined material; d is the cutting depth (mm); f is the cutting feed rate (mm/rev); v is the cutting speed (m/min); and xP, yP, nP are coefficients dependent on the machine type and cutting tool materials. The parameters were determined by statistical analysis of machining data. We use an artificial neural network (ANN) to model the cutting force, which will be explained in the next section. The feed force Ff is calculated with the following equation (Zhengyi and Yonghua 2003):

F f = µ Fc

3-6

where µ is a coefficient.
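Equations 3-5 and 3-6 combine into a short routine. A minimal sketch; the default coefficient values below are illustrative placeholders, not fitted values from the cited references:

```python
def cutting_forces(d, f, v, K=1.0, Cp=300.0, xP=1.0, yP=0.75, nP=-0.15, mu=0.4):
    # Eq. 3-5: Fc = K * Cp * d^xP * f^yP * v^nP
    Fc = K * Cp * (d ** xP) * (f ** yP) * (v ** nP)
    # Eq. 3-6: feed force proportional to the cutting force
    Ff = mu * Fc
    return Fc, Ff
```

With these placeholder exponents, the cutting force grows with depth and feed rate and decreases slightly with cutting speed, matching the trends reported later in this section.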


Since it is a single-point tool performing an orthogonal cutting process, only the tool tip is considered when simulating the cutting process, removing workpiece material layer by layer. The geometry of the cutting proxy in this system is therefore point-based.

3.1.3.1 Structures of artificial neural network

Biological neural networks are composed of simple processing elements (neurons) operating in parallel; their behavior is determined by the connections between the processing elements and by element parameters. Artificial neural networks (ANNs) are computational models inspired by biological neural networks. An ANN is an adaptive system consisting of an interconnected group of artificial neurons. It processes information using a connectionist approach and changes its structure based on the information that flows through the network during learning. ANNs are usually trained so that a particular input leads to a specific target output. The network is adjusted based on a comparison of the output and the target, as shown in Figure 3-8.

Figure 3-9 shows the diagram of a neuron with a single scalar input p and bias b. The scalar input p is transmitted through a connection that multiplies it by the scalar weight w and adds the bias b to produce a weighted input wp + b. The weighted input is then passed to a transfer function f, which produces the scalar output a. Note that w and b are both adjustable scalar parameters of the neuron. The transfer function f is typically a step function or a sigmoid function. The sigmoid transfer function, shown in Figure 3-10, takes any value between plus and minus infinity as an input and squashes the output into the range 0 to 1.


Figure 3-8 Training an ANN.

Figure 3-9 Diagram of a neuron.

Figure 3-10 Sigmoid transfer function.


A feed-forward ANN can have multiple layers, as depicted in Figure 3-11. Each layer has a weight matrix W, a bias vector b, and an output vector a. A two-layer network whose first layer is sigmoid and whose second layer is linear can be trained to approximate any function arbitrarily well, given sufficient neurons in the hidden layer. This kind of two-layer network is used extensively with the Back Propagation (BP) algorithm (Rumelhart, Hinton et al. 1986), which was created by generalizing the Widrow-Hoff learning rule to multiple-layer networks and nonlinear differentiable transfer functions. We use this structure to model the cutting force, as shown in Figure 3-12. It has three inputs: cutting velocity, feed rate, and cutting depth. The hidden layer has more than three neurons, and the output is the force. The two layers are connected with weight matrices.
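The forward pass of such a two-layer network can be written in a few lines. This is an illustrative sketch of the structure in Figure 3-12 (sigmoid hidden layer, linear output neuron), not the MATLAB implementation used in the thesis:

```python
import math

def sigmoid(z):
    # squashes any real input into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, W1, b1, W2, b2):
    # hidden layer: a1 = sigmoid(W1 x + b1), one row of W1 per hidden neuron
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    # output layer is linear: a2 = W2 a1 + b2
    return sum(w * h for w, h in zip(W2, hidden)) + b2
```

Here x would be the three cutting conditions [cutting velocity, feed rate, cutting depth], and the scalar output is the predicted force.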

Figure 3-11 Structure of multiple-layer ANN.


Figure 3-12 Structure of the two-layer ANN for cutting force model.

3.1.3.2 Learning strategy

There are many algorithms available for training ANN models, most of which are based on optimization theory and employ some form of gradient descent: they take the derivative of the performance function with respect to the weight parameters and then adjust those parameters along the gradient direction. Standard BP uses this method to find the solution, moving the network weights along the negative of the gradient of the performance function. The basic BP learning law is a gradient-descent algorithm based on an estimate of the gradient of the instantaneous sum-squared error for each layer. There are two ways in which this gradient-descent algorithm can be implemented: incremental mode and batch mode. In the incremental mode, the gradient is computed and the weights are updated after each input is applied to the


network. In the batch mode, all of the inputs are applied to the network before the weights are updated. Traingd is the batch steepest-descent training function in MATLAB® (MathWorks). The weights and biases are updated in the direction of the negative gradient of the performance function. The learning rate is multiplied by the negative of the gradient to determine the changes to the weights and biases. The larger the learning rate, the bigger the step. If the learning rate is made too large, the algorithm becomes unstable; if it is set too small, the algorithm takes a long time to converge. This method uses the instantaneous sum-squared error to minimize the mean squared error over the training epoch. The gradient of the instantaneous sum-squared error is not a good estimate of the gradient of the mean squared error, which is a relatively complex surface in the weight space, possibly with many local minima, flat sections, narrow irregular valleys, and saddle points. Therefore, satisfactory minimization of this error typically requires many repetitions of the training epochs, or the algorithm will oscillate around a local minimum. The basic method, traingd, is often too slow for practical problems. Therefore, several high-performance algorithms that converge faster have been proposed. These faster algorithms fall into two main categories (MathWorks). The first category uses heuristic techniques; one heuristic modification is the momentum technique, e.g. traingdm. The second category uses standard numerical optimization techniques, including conjugate gradient (trainscg), quasi-Newton (trainbfg), and Levenberg-Marquardt (trainlm). The momentum of traingdm allows a network to respond not only to the local gradient, but also to recent trends in the error surface. Acting like a low-pass filter,

momentum allows the network to ignore small features in the error surface. With momentum, a network can slide through a shallow local minimum instead of getting stuck in it. In the conjugate gradient algorithms, a search is performed along conjugate directions, which determine the step size. The basic step of Newton's method is:

x_{k+1} = x_k − A_k^{-1} g_k

3-7

where A_k is the Hessian matrix (second derivatives) of the performance index at the current values of the weights and biases. Newton's method often converges faster than conjugate gradient methods. Unfortunately, it is complex and expensive to compute the Hessian matrix. Quasi-Newton methods are based on Newton's method but do not require the calculation of second derivatives. These algorithms can converge faster than conjugate gradient methods at the cost of more computation and more storage. Like the quasi-Newton methods, the Levenberg-Marquardt algorithm was designed to approach second-order training speed without having to compute the Hessian matrix. When the performance function has the form of a sum of squares (as

is typical in training feed-forward networks), then the Hessian matrix can be approximated as:

H = J^T J

3-8

and the gradient can be computed as:

g = J^T e

3-9

where J is the Jacobian matrix that contains first derivatives of the network errors with respect to the weights and biases, and e is a vector of network errors. The


Jacobian matrix can be computed through a standard BP technique that is much less complex than computing the Hessian matrix.

It is very difficult to tell which training algorithm will be the fastest for a given problem. It depends on many factors, including the complexity of the problem, the number of data points in the training set, the number of weights and biases in the network, the error goal, and whether the network is being used for pattern recognition or function approximation. Through several experiments in MATLAB® (MathWorks), it is reported that the fastest algorithm for this problem is the Levenberg-Marquardt (LM) algorithm. On average, it is over four times faster than the next fastest algorithm. The LM algorithm is best suited for function approximation problems where the network has fewer than one hundred weights. In our force modeling, we use this method to search for the optimal parameters of the ANN.
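A single LM update can be sketched with NumPy. This is a minimal illustration of Equations 3-7 to 3-9 (the step solves (JᵀJ + μI) Δw = Jᵀe, and the weights move by −Δw), not MATLAB's trainlm:

```python
import numpy as np

def lm_step(J, e, mu):
    # H = J^T J approximates the Hessian (Eq. 3-8)
    H = J.T @ J
    # g = J^T e is the gradient (Eq. 3-9)
    g = J.T @ e
    # solve (H + mu I) dw = g; update the weights as w <- w - dw (cf. Eq. 3-7)
    return np.linalg.solve(H + mu * np.eye(H.shape[0]), g)
```

For small μ the step approaches a Gauss-Newton step; for large μ it degenerates to a short gradient-descent step, which is how LM blends the two behaviors.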

3.1.3.3 Training BP network

We use the experimental machining data from (Lin, Lee et al. 2001) to train the two-layer BP network with different numbers of nodes in the hidden layer. Table 3-1 lists part of the cutting force data together with the corresponding cutting conditions: cutting velocity, feed rate, and cutting depth. These data were directly measured from experiments. More than 200 data records are fed into the network to train the BP network.

Training a BP network requires a set of examples: network inputs p and target outputs t. During training, the weights and biases of the network are iteratively adjusted to minimize the network performance function, chosen here to be the mean squared error (MSE), i.e. the average squared error between the network outputs and the target outputs. Figure 3-13 shows the training performance of the BP network. In that network, we have 4 neurons in the hidden layer. The MSE goal is set to 0.001. The Levenberg-Marquardt algorithm is used to find the optimal parameters. It can be seen from the figure that the algorithm achieves the goal within 20 steps.

Table 3-1 Cutting forces with cutting conditions. Courtesy of (Lin, Lee et al. 2001).

After the BP network is trained, it can be used to predict the cutting forces for different inputs of cutting conditions. Figure 3-14 depicts the relation between cutting force and feed rate with a constant cutting velocity and cutting depth. It is almost a linear function with a positive slope over the feed rate range from 0.05 to 0.3 (mm/rev). It can be seen from Figure 3-15 that for cutting depths from 0.6 to 1.3 (mm), the relation between cutting force and cutting depth is a linear function. From 0.4 to

0.6 (mm), the cutting force increases quickly as the cutting depth increases. The relation between cutting force and cutting velocity is almost a linear function with a negative slope as shown in Figure 3-16.

Figure 3-13 Mean squared error vs. epoch in ANN training.

Figure 3-14 Cutting force vs. feed rate.


Figure 3-15 Cutting force vs. cutting depth.

Figure 3-16 Cutting force vs. cutting velocity.


Figure 3-17 Block diagram of cutting force simulation using BP network.

3.1.3.4 Haptic simulation using BP network

We separate the training program of the BP network from our simulation program, where real-time calculation is required, as shown in Figure 3-17. The training program and the simulation program are both written in MATLAB® using the neural network toolbox (MathWorks). In the MATLAB® environment, we train the BP network using the training data and export the trained network structure to the simulation module. At the same time, we convert the network simulation from a MATLAB® file to a Component Object Model (COM) component. COM is an interface standard introduced by Microsoft® in 1993, used to enable inter-process communication and dynamic object creation in

any programming language that supports the technology. In our real-time simulation system, we first load the COM component from the file, which loads the trained network. Then the system calls the COM component at every simulation step to calculate the force, providing it with updated cutting conditions. The frequently used spring-damper system is good for modeling elastic deformation that can be defined by one or two parameters, whereas an ANN provides a good modeling tool for situations in which the output force is influenced by more than two parameters in a non-linear pattern. The trained BP neural network is used for the haptic simulation of turning, as will be described in Chapter 5.
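The per-step force query described above reduces to a simple loop. The callables below are hypothetical stand-ins: predict_force plays the role of the exported COM component wrapping the trained network, and the other two are hooks into the haptic system.

```python
def run_simulation(predict_force, read_cutting_conditions, send_force, n_steps):
    # every simulation step: read the current cutting conditions (v, f, d),
    # query the trained network, and send the predicted force to the device
    for _ in range(n_steps):
        v, f, d = read_cutting_conditions()
        send_force(predict_force(v, f, d))
```

Keeping the network query behind a single callable is what allows the trained model to be swapped (MATLAB COM component, lookup table, or analytic formula) without touching the simulation loop.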

3.2 Surface-based haptic rendering

Virtual assembly and planning always involve object-object interactions. For example, in off-line robotic path planning, a user manipulates a virtual robot arm by means of a haptic interface to approach a car frame. If there is a collision, forces and torques are calculated and sent to the user. As a result, the user can feel the collision forces, which remind him/her not to select this position as part of the path. Haptic cues can therefore provide useful information for the manipulation of virtual objects in complex virtual environments. Based on efficient collision detection methods, as explained in the next section, two haptic rendering methods, distance-based and simulation-based haptic rendering, are proposed in this thesis.


3.2.1 Collision detection

A proximity query package (PQP) (Larsen, Gottschalk et al. 2000) based on bounding volume hierarchies (BVHs) is used for collision detection. It utilizes

different swept spheres as bounding volumes (BVs) to construct a hybrid hierarchical data structure. BVs are used to enclose sets of primitives, and a BVH is a tree structure used to store BVs in its nodes. The root of a BVH contains all primitives of a model, its children contain portions of the model, and its leaves each contain a single primitive. Swept sphere volumes are generated by taking the Minkowski sum, or convolution, of a core primitive and a sphere. Two types of BVs are used in PQP: the oriented bounding box (OBB) and the rectangle swept sphere (RSS), depicted in Figure 3-18(a), which provide a tight fit to the underlying primitives. Given a set of triangles, statistical techniques, e.g. computing the mean and covariance of the vertex coordinates, are used to calculate BVs that enclose the underlying primitives. A top-down strategy is utilized to construct the BVH (Figure 3-18(b) and (c)): the triangles in each node of the tree are recursively split into two subsets until each node contains only a single triangle. PQP uses a bounding volume test tree (BVTT) to perform collision or distance checking. Each node in the BVTT represents a single collision or distance test between a pair of BVs, as depicted in Figure 3-18(d). Two techniques, priority-directed search and coherence, are applied in PQP in order to speed up queries. In priority-directed search, a priority queue is used to schedule the query order of the BVH according to the distances between the queried pairs of BVs; the closest pair is given the highest priority. In many applications, e.g. real-time simulation, the distances between the objects to be queried change only slightly between successive frames. Therefore, it is useful to record the closest triangle pair from the previous query to initialize the minimum distance.

(a) OBB and RSS.

(b) Construction of a BVH.

(c) BVH.


(d) BVTT.

Figure 3-18 Data structure of collision detection.

3.2.1.1 Compute the distance between OBBs

Given two OBBs, A and B (Figure 3-19(a)), let ai and bi (i = 1, 2, 3) denote the half dimensions of A and B, and let the unit vectors Ai and Bi denote the axes of A and B. T is the vector from the center of A to the center of B. If A and B are disjoint, a separating axis L exists. Projecting the centers of A and B onto the axis, the radii of the projection intervals, La and Lb, are computed as:

La = Σ_{i=1}^{3} |ai Ai ⋅ L|

3-10

The calculation of Lb follows a similar equation. A and B are disjoint if and only if:

|T ⋅ L| > La + Lb

3-11
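The separating-axis test of Equations 3-10 and 3-11 is a few lines with NumPy. A hypothetical sketch (names ours): axes are given as rows of a 3×3 matrix, half dimensions as length-3 sequences.

```python
import numpy as np

def interval_radius(axes, half_dims, L):
    # Eq. 3-10: La = sum_i |a_i * (A_i . L)|
    return sum(h * abs(np.dot(ax, L)) for ax, h in zip(axes, half_dims))

def disjoint_on_axis(T, A_axes, a, B_axes, b, L):
    # Eq. 3-11: the boxes are disjoint on axis L iff |T . L| > La + Lb
    return abs(np.dot(T, L)) > interval_radius(A_axes, a, L) + interval_radius(B_axes, b, L)
```

A full OBB overlap test would evaluate this predicate over the standard set of candidate axes (the box axes and their pairwise cross products); finding any separating axis proves the boxes disjoint.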


3.2.1.2 Compute the distance between RSSs

The distance between two RSSs, drss can be obtained by computing the distance, d, between the core rectangles of RSS, and subtracting the sum of their radii, ra and rb, as explained in Figure 3-19(b).

d rss = d − ra − rb , ( d ≥ ra + rb )

3-12

Figure 3-19 Distance computation.


3.2.1.3 Performance analysis

PQP can be used to perform four types of proximity queries on a pair of geometric models composed of triangle facets:

• distance computation (C1): compute the minimum distance between a pair of models, e.g., the distance between the closest pair of points.

• tolerance verification (C2): determine whether two models are closer or farther than a tolerance distance.

• collision verification (C3): detect whether the two models overlap.

• collision detection (C4): detect all of the overlapping triangles of the two models.

In collision detection, the algorithm recursively traverses the BVHs of the geometric models and checks whether two BVs, e.g. A and B, overlap at each recursive step. If A and B do not overlap, the recursive branch is terminated; otherwise, the algorithm is applied recursively to their children. In distance computation, the query process is similar to that of collision detection. Sample comparisons of these four PQP modes have been done in a virtual environment, as shown in Figure 3-20, with the geometric data listed in Table 3-2. Three scenarios are considered. In scenario #1, the robot and the obstacle (e.g. a car frame) are disjoint with a distance larger than a threshold h. In scenario #2, the distance is smaller than h. In scenario #3, the robot collides with the obstacle. In Table 3-3, Table 3-4, and Table 3-5, Ci (i = 1, 2, 3, 4) stands for the proximity query type, Nobv is the number of bounding volumes queried, and Notri is the number of triangles queried. As can be seen from the tables, the distance computation method (C1) is the most time-consuming because it always tests a large number of bounding

volumes and triangles whether or not the robot collides with the obstacle. When the robot approaches the obstacle until a collision happens, C1 needs less time to compute the minimum distance, while for the collision detection method C4 the number of queried bounding volumes and triangles increases rapidly. The tolerance verification method C2 and the collision verification method C3 are more stable across the three scenarios in comparison with C1 and C4. C2 has an update rate from 200 Hz to 500 Hz. Though there is still a gap to the haptic update rate (1000 Hz), interpolation techniques can be utilized to speed up the computation.

Figure 3-20 A virtual environment for testing collision detection.

Table 3-2 The number of triangles for each object in the virtual environment.

Table 3-3 Performances of PQP in scenario #1.

Table 3-4 Performances of PQP in scenario #2.

Table 3-5 Performances of PQP in scenario #3.


3.2.1.4 Collision response

Special contact joints are used to handle collision response; they prevent two rigid bodies from inter-penetrating at the contact points. Contact joints typically have a lifetime of one time step: they are created and deleted in response to collision detection. Figure 3-21 shows the collision handling process. Before each simulation step, the collision detection module is called. It returns a list of contact points, each of which specifies a position in space, a surface normal vector, and a penetration depth. A special contact joint is then created for each contact point. The contact joint is given extra information about the contact, for example the friction present at the contact surface, how bouncy or soft it is, and various other properties. After a simulation step, all contact joints are removed from the simulation system.
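The collision handling cycle of Figure 3-21 can be sketched as a plain loop body. All three callables here are hypothetical hooks standing in for the dynamics engine, not the actual ODE API:

```python
def collision_step(detect_contacts, make_contact_joint, step_dynamics):
    # before the step: query the contact points
    # (each carries a position, surface normal, and penetration depth)
    contacts = detect_contacts()
    # one short-lived contact joint per contact point
    joints = [make_contact_joint(c) for c in contacts]
    # advance the rigid-body dynamics one time step under these constraints
    step_dynamics(joints)
    # after the step, all contact joints are discarded (lifetime: one step)
    return len(joints)
```

Because the joints are rebuilt from scratch every step, the constraint set always reflects the current contact geometry rather than stale contacts from earlier frames.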

Figure 3-21 Collision response.


3.2.2 Distance-based haptic rendering

When a user manipulates a virtual robot and it approaches an obstacle within a small distance σ, forces and torques can be felt by the user. The haptic rendering method is based on the distance computation between the robot and the obstacles (Figure 3-22), which has been elaborated in the previous section. The force and torque are computed by:

F=

k f (| d i | −σ )d i

Fi = i

3-13

i

where kf is the force stiffness constant, |di| is the minimum distance between robot and obstacle, and σ is the threshold.

M=

Mi = i

kt ( Fi × OPi )

3-14

i

where kt is the torque stiffness constant, O is the center of obstacle, and Pi is the closest point to obstacle.
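Equations 3-13 and 3-14 accumulate one penalty force and one torque contribution per closest-point pair. A NumPy sketch under our own naming, with each pair holding the closest-point vector di and the corresponding robot-side point Pi:

```python
import numpy as np

def distance_based_wrench(kf, kt, sigma, pairs, obstacle_center):
    # pairs: list of (d_i, P_i) closest-point results from the proximity query
    F = np.zeros(3)
    M = np.zeros(3)
    O = np.asarray(obstacle_center, float)
    for d, P in pairs:
        d = np.asarray(d, float)
        dist = np.linalg.norm(d)
        if dist < sigma:                    # only pairs inside the threshold
            Fi = kf * (dist - sigma) * d    # Eq. 3-13: points away from the obstacle
            F += Fi
            M += kt * np.cross(Fi, np.asarray(P, float) - O)   # Eq. 3-14
    return F, M
```

Because (dist − σ) is negative inside the threshold, each Fi points opposite to di, i.e. it pushes the robot away from the obstacle, and the repulsion grows as the gap closes.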

Figure 3-22 Schematic of direct haptic rendering method.


Forces and torques are recorded when the user manipulates the robot arm through the haptic interface. Figure 3-23 shows the recorded forces and torques as the user manipulates the robot to approach the car frame in the x-y plane from top to bottom. Collisions occurred twice, as can be clearly seen from the two peaks in the plots. The magnitudes of the forces along the x- and y-axes are large enough to remind the operator of the collisions.

(a) Plots of forces.

(b) Plots of torques.

Figure 3-23 Plots of forces and torques recorded during robot manipulation.


3.2.3 Simulation-based haptic rendering

Figure 3-24 shows the flowchart of the simulation-based haptic rendering method. The haptic thread, which runs at 1 kHz, is decoupled from the simulation thread, which runs at the graphic refresh rate, by means of the direct proxy rendering method explained later in this section. This technique makes haptic rendering independent of the complexity of collision detection and response. We use ODE as the module for rigid body dynamic simulation, which provides collision detection and response. Local virtual coupling, which links the grasped rigid body and the haptic probe with a virtual spring and damper, is used to calculate the force and torque for the rigid body. The rigid body grasped by the user can then interact with a complex virtual environment in the simulation.

Figure 3-24 Flow chart of simulation-based haptic rendering.


3.2.3.1 Joint coupling

The joint coupling method, illustrated in Figure 3-25, is designed to render forces and torques during manipulation of a virtual robot. The general idea of joint coupling is simple: it links the joints of a haptic interface and a virtual robot with springs and dampers. When the user moves the haptic interface, the corresponding joints of the virtual robot rotate toward the configuration of the haptic interface, driven by torques calculated from the joint angle differences between the haptic interface and the robot. At the same time, the torques are sent to the motor servos of the haptic interface to render inertia forces during manipulation of the robot arm in free space, which greatly increases manipulation realism and reduces jerky operations. Therefore, safety can be guaranteed, which is very important in tele-operation. If the user stops moving, then after several simulation loops the configuration of the robot will coincide with that of the haptic interface, and no force or torque is exerted on the haptic interface. If the user manipulates the robot into contact with other objects, the end effector of the virtual robot will stop according to the contact constraints of ODE. However, the actual position of the end effector of the haptic interface will penetrate the virtual object, and the torques calculated by joint coupling will be exerted on the six joints to pull the haptic interface back. Therefore, the user can feel force and torque when contacting objects, which is very similar to real haptic tele-operation. The joint coupling torque Tc for the haptic interface and the robot is calculated with the following equation:

Tc = kc (θh(t) − θr(t − T)) − dc (ωh(t) − ωr(t − T))

3-15

where kc is the stiffness, dc is the damping coefficient, θh is the joint angle of the haptic interface, ωh is the angular velocity of the haptic interface, θr is the joint angle of the robot, ωr is the angular velocity of the robot, and T is the communication time delay.
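Per joint, Equation 3-15 is a one-liner. This sketch (names ours) applies it across all joints, taking θr and ωr as the delayed robot values θr(t − T) and ωr(t − T):

```python
def joint_coupling_torques(kc, dc, theta_h, theta_r_delayed, omega_h, omega_r_delayed):
    # Eq. 3-15: spring on the joint-angle error, damping on the velocity error
    return [kc * (th - tr) - dc * (wh - wr)
            for th, tr, wh, wr in zip(theta_h, theta_r_delayed,
                                      omega_h, omega_r_delayed)]
```

The same torque vector is used twice: to drive the virtual robot's joints toward the haptic interface configuration, and (sent to the motor servos) to pull the haptic interface back when the robot is blocked by contact.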

Figure 3-25 Joint coupling.

Figure 3-26 Six joints of the haptic interface.


In order to test the joint coupling haptic rendering method, six experiments are carried out. In each experiment, we rotate one of the joints forward and back around its axis and record the torques calculated by this method, as shown in Figure 3-26. In haptic rendering, we send Digital-to-Analog Converter (DAC) values (Technologies 2004), which are proportional to the calculated torques, to the motor servos using HDAPI routines. The DAC values range from -32768 to 32768. In these experiments, the time delay T is ignored. The results shown in Figure 3-27 demonstrate that the joint coupling method can stably display forces and torques in the simulation of tele-operation under good communication conditions. Communication time delay is critical in tele-operation. Therefore, the effect of the time delay T on stable haptic rendering is also investigated in the following experiments. In the first experiment, the time delay is set to 500 ms. We again rotate joint #1 around its axis forward and back and record the DAC values calculated by the joint coupling method. As can be seen from Figure 3-28(a) in comparison with Figure 3-27(a), much more torque is needed to rotate the joint, and the user must move slowly; otherwise, the haptic interface becomes unstable and sometimes reports errors because of motor overload. In the second experiment, T is set to 1000 ms. Even when we move more slowly, large vibrations are observed, as illustrated in Figure 3-28(b).


Figure 3-27 Calculated motor DAC values for the 6 robot joints by joint coupling method.


Figure 3-28 Simulation of time delay of communication.

3.2.3.2 Local Virtual Coupling (LVC)

The virtual coupling force Fc and torque Tc between the haptic probe and the rigid body are calculated with the following equations:

Fc = kc (xh + xoff − xr) + dc (vh − vr)

dQ = Qr^{-1} (Qh Qoff) = (s, w)

Tc = 2 kθ s w + dθ (ωh − ωr)

3-16

where kc is the stiffness, xh is the position of the haptic probe, xr is the position of the rigid body, xoff is the initial position offset between the haptic probe and the rigid body when the user grasps it by pressing the button on the haptic arm, dc is the damping coefficient, vh is the linear velocity of the haptic probe, vr is the velocity of the rigid body, Qr is the quaternion of the rigid body, Qh is the quaternion of the haptic probe, Qoff is the quaternion offset between the haptic probe and the rigid body at the beginning, s is the scalar component of the quaternion dQ, w is the vector component of dQ, kθ is the angular stiffness, dθ is the angular damping coefficient, ωh is the angular velocity of the haptic probe, and ωr is the angular velocity of the rigid body.
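Equation 3-16 can be sketched in NumPy with quaternions stored as (s, x, y, z); the helper names are ours, not thesis code:

```python
import numpy as np

def q_mul(q1, q2):
    # Hamilton product of two quaternions stored as (s, x, y, z)
    s1, v1 = q1[0], np.asarray(q1[1:], float)
    s2, v2 = q2[0], np.asarray(q2[1:], float)
    return np.concatenate(([s1 * s2 - v1 @ v2],
                           s1 * v2 + s2 * v1 + np.cross(v1, v2)))

def q_inv(q):
    # inverse of a unit quaternion is its conjugate
    return np.array([q[0], -q[1], -q[2], -q[3]], float)

def lvc_wrench(kc, dc, k_th, d_th, xh, xoff, xr, vh, vr, Qh, Qoff, Qr, wh, wr):
    # translational part of Eq. 3-16
    F = kc * (np.asarray(xh, float) + np.asarray(xoff, float) - np.asarray(xr, float)) \
        + dc * (np.asarray(vh, float) - np.asarray(vr, float))
    # rotational part: dQ = Qr^-1 (Qh Qoff) = (s, w)
    dQ = q_mul(q_inv(np.asarray(Qr, float)),
               q_mul(np.asarray(Qh, float), np.asarray(Qoff, float)))
    T = 2.0 * k_th * dQ[0] * dQ[1:] + d_th * (np.asarray(wh, float) - np.asarray(wr, float))
    return F, T
```

When the probe and the body are aligned, dQ is the identity quaternion (s = 1, w = 0), so the spring torque vanishes and only the damping term remains.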

3.2.3.3 Remote Virtual Coupling (RVC)

Recently, haptic interaction over networks has begun to attract more attention with advances in haptic interface hardware and software. This technique can be used in tele-manipulation and haptic collaboration, in which two or more participants collaborate to complete a specific task. Several studies have demonstrated the effectiveness of haptic interaction over networks in various applications, such as tele-manipulation of remote objects (Kim, Kim et al. 2004), path planning (Sheridan 1992), and collaboration (Basdogan, Ho et al. 2000). However, it is still a challenging issue to build haptic shared virtual environments (HSVEs), because current 3-DOF haptic rendering methods cannot fully support behavior-rich interaction in SVEs. Hence, we


try to introduce 6-DOF haptic rendering techniques into haptic interaction via networks by means of RVC, inspired by the LVC described above. The basic idea of RVC is to project the remote haptic probe via the network to the local computer and to link the projected remote haptic probe to the local rigid body with a virtual spring and damper, as shown in Figure 3-29. We use two threads, as shown in Figure 3-30: one sends the local haptic probe positions and orientations, and the other receives data from the remote machine for the local dynamic simulation. When the local machine is connected to the remote machine, these two threads run asynchronously. Positions and orientations, instead of forces, are transferred via the network. When the receiving thread receives the positions and orientations of the remote haptic probe, the local simulation process projects them onto a proxy. We can then couple this proxy and the local dynamic object as the LVC does. As a result, a remote user can manipulate the local object to interact with a local user.

Figure 3-29 Remote virtual coupling.


Figure 3-30 Block diagram of RVC.

3.2.3.4 Direct proxy rendering

Intermediate representations (Adachi, Kumano et al. 1995) are very successful in improving the stability and responsiveness of haptic rendering systems. The general idea is to decouple the dynamic simulation loop from the haptic thread (about 1 kHz) by means of a simple proxy that approximates the position of the haptic probe. The proxy state is updated at the graphic refresh rate (about 30 Hz) by the dynamic simulation. Therefore, haptic rendering at the required high refresh rate can easily be achieved. A similar method called direct proxy rendering is proposed for 6-DOF haptic rendering in this thesis, as shown in Figure 3-31. The proxy, representing the pose of a rigid body, is calculated in the simulation thread, and the haptic rendering thread captures the proxy state at a 1000 Hz update rate to calculate the force and torque from the state of the haptic device based on a spring and damper model. The translation xp and rotation Qp of the proxy are calculated with the following equations:

xp = xr − xoff

Qp = Qr Qoff^{-1}

3-17


where xr is the position of the rigid body, xoff is the initial position offset between the haptic probe and the rigid body when the user grasps it, Qr is the quaternion of the rigid body, and Qoff is the quaternion offset between the haptic probe and the rigid body at the beginning.
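Equation 3-17 in the same (s, x, y, z) quaternion convention: a sketch (names ours) of the proxy state that the simulation thread writes and the 1 kHz haptic thread reads.

```python
import numpy as np

def q_mul(q1, q2):
    # Hamilton product of two quaternions stored as (s, x, y, z)
    s1, v1 = q1[0], np.asarray(q1[1:], float)
    s2, v2 = q2[0], np.asarray(q2[1:], float)
    return np.concatenate(([s1 * s2 - v1 @ v2],
                           s1 * v2 + s2 * v1 + np.cross(v1, v2)))

def proxy_state(xr, xoff, Qr, Qoff):
    # Eq. 3-17: xp = xr - xoff,  Qp = Qr Qoff^-1
    Qoff_inv = np.array([Qoff[0], -Qoff[1], -Qoff[2], -Qoff[3]], float)  # unit inverse
    xp = np.asarray(xr, float) - np.asarray(xoff, float)
    return xp, q_mul(np.asarray(Qr, float), Qoff_inv)
```

Removing the grasp-time offsets means the haptic thread can compare the proxy pose directly with the probe pose in the spring-damper force computation, without re-reading the simulation state.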

Figure 3-31 Direct proxy rendering.

Figure 3-32 Experiments of haptic interactions.


3.2.3.5 Experiments

In order to test the performance of the haptic rendering method, we conduct several simulation experiments. One of the experiments is to control a rigid body (a yellow spine model) with the haptic probe to touch a static rigid body (a gray spine model), as shown in Figure 3-32. In this experiment, two contacts are made, and the corresponding forces and torques calculated by the direct proxy rendering and local virtual coupling methods are recorded. The forces and torques applied to the haptic probe and to the controlled rigid body are opposite in direction, as illustrated by Figure 3-33 and Figure 3-34, though there is a small difference in the torques, which might be ascribed to different damping coefficients.

(a)

(b)

Figure 3-33 Plots of forces and torques calculated by the direct proxy rendering method during two contacts.

(a) Force components Fx, Fy, Fz (N) plotted against time (20 ms intervals).

(b) Torque components Tx, Ty, Tz (mm·N) plotted against time (20 ms intervals).

Figure 3-34 Plots of the force and torque calculated by the local virtual coupling method.

Figure 3-35 Experiments of haptic interactions through Local Area Network.

In the second experiment, a local user grasps one object (a yellow spine model) and a remote user grasps another (a gray spine model), and they can interact with each other with force and torque feedback, as shown in Figure 3-35. This experiment demonstrates that, by means of RVC, stable 6-DOF rendering in shared virtual environments is possible, and that it improves the richness of haptic interaction over a network. Figure 3-36 shows the plots of the forces calculated by LVC and RVC in the experiment: we freely manipulate an object and record the forces calculated by LVC, while at the same time the remote machine records the forces calculated by RVC for the object manipulated on the local machine. There is some noise on the forces calculated by RVC, and its responsiveness is not as good as that of LVC; this can be attributed to network latency and jitter, and a filtering method might be applied to remove the noise. Nevertheless, this example illustrates that RVC is a stable 6-DOF haptic rendering method in SVEs.

(a) Plots of the forces calculated by RVC for the object being moved freely on the remote machine.

(b) Plots of the corresponding forces calculated by LVC for the object on the local machine.

Figure 3-36 Plots of forces calculated by LVC and RVC.

3.3 Volume-based haptic rendering

Some haptic simulations, such as drilling, need volume-based haptic rendering methods. Developing a mathematical model for the realistic simulation of material removal is difficult, since the force largely depends on the drill bit geometry and various other drilling parameters. Studies on cutting simulation can also be found in the field of medical simulation, e.g. bone drilling simulation. In earlier research on bone drilling simulation (Wiggins and Malkin 1976; Hobkirk and Rusiniak 1977; Allotta, Belmonte et al. 1996), it was found that the important parameters for a realistic force model are the drilling speed, the type of drill, the feed rate, and the material properties of the bone. Wiggins and Malkin (Wiggins and Malkin 1976) investigated the interrelationships between thrust pressure, feed rate, torque, and specific cutting energy (the energy per unit volume required to cut the material) for three types of drill bits. Although they present power equations for penetrating ability, torque per unit area and specific cutting energy, many of the variables in these equations must be determined experimentally for each material. Allotta et al. (Allotta, Belmonte et al. 1996) presented a theoretical equation for the thrust force required to drill a hole and reported a good correlation with experimental data. In (Agus, Giachetti et al. 2003) a parametric model is proposed using Hertz's contact theory, in which elastic and frictional forces are derived and used to calculate the force reflected to the haptic device. The spring-damper model is extensively used in haptic rendering for polygonal models; it is used in (Eriksson, Flemmer et al. 2005) to simulate bone dissection. Inspired by these studies, a volume-based haptic rendering method for drilling simulation is presented in this thesis. The contribution of this model is that the relations between the drilling force and the model resolution, the erosion model, the drill head radius, and the drilling conditions are investigated in detail.

3.3.1 Resolution of volumetric model

Figure 3-37(a) shows a typical ball-end drill bur, a Tomahawk Abrasives 55238 ball carbide rotary bur, which is geometrically modeled as a combination of a sphere and a cylinder as shown in Figure 3-37(b). In the simulation, the drill tool is pushed through a block of material at a constant speed. The resolution of the volumetric model affects the stability of haptic rendering: if the resolution is too low, undesirable vibration will occur, as illustrated by the simulation examples shown in Figure 3-38. In these examples, the diameter of the drill bit is 6 mm and the feed rate is 1 mm/s.

(a)

(b)

Figure 3-37 Schematics of drilling tool.


(a)

(b)


(c) Figure 3-38 The influence of model resolution on force modeling.

There are three stages in drilling through a part, as can be seen from Figure 3-38. In the first stage, the drill head begins to touch the part and builds up the thrust force before penetration; the thrust force increases nearly linearly with time. In stage two, the drill head is fully immersed in the part and the thrust force stays within a relatively stable range that is related to the part model resolution. It can be seen from the comparison in Figure 3-38 that lower model resolution leads to larger vibration. This may be explained as follows: when a larger voxel is removed, a larger empty gap is left between it and its neighbors, so the drilling force drops more quickly; hence larger vibrations occur at lower model resolutions. High model resolution is therefore preferred in the simulation in order to alleviate the large-vibration problem; if the resolution is high enough, the force curve in this stage approaches a straight line segment. However, the model resolution cannot be arbitrarily high, because it is limited by the computation speed, the memory capacity of the computer, and the nominal position resolution of the haptic device. On the other hand, small vibrations during the stable drilling stage are commonly observed in many measurement studies of drilling. Hence, the problem is how to choose an appropriate model resolution which avoids large vibrations and at the same time mimics the small vibrations of a real drilling procedure. In Figure 3-37(b), the model is composed of cubes and the drill tool is modeled with a sphere and a cylinder; at time t1 the drill head is drawn as a shaded combination of a rectangle and a circle, and at time t2 as a dashed one. In every haptic rendering loop, we calculate the tool travel distance ∆d with the following equation:

∆d = ∆t ⋅ α = α / q                                                (3-18)

where α is the feed rate (mm/s), q is the haptic update frequency, and ∆t = t2 − t1 = 1/q is the duration of one haptic loop. Typical values are q = 1000 Hz and α = 1 mm/s, which give ∆d = 0.001 mm per haptic loop. Assuming that the time taken to travel across one voxel equals the time taken to erode one voxel away under the erosion model, we get the following equation:

l / ∆d = s / ∆s                                                    (3-19)

where s is the original volumetric scalar value, ∆s is the erosion value per loop, and l is the size of a voxel. Combining the above equations, we get the voxel resolution:

l = s ⋅ α / (∆s ⋅ q)                                               (3-20)

For s = 255, ∆s = 1, α = 1 mm/s, and q = 1000 Hz, the desired model resolution based on the above equation is l = 0.255 mm.
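The resolution rule above can be sketched directly; the function names are ours, chosen for illustration.

```cpp
#include <cassert>
#include <cmath>

// Eqn. 3-18: tool travel per haptic loop, delta-d = alpha / q.
double travelPerLoop(double feedRate /* mm/s */, double hapticHz) {
    return feedRate / hapticHz;
}

// Eqn. 3-20: voxel size l = s * alpha / (delta-s * q), chosen so that eroding
// one voxel away takes as long as travelling across one voxel.
double voxelSize(double s, double deltaS, double feedRate, double hapticHz) {
    return s * feedRate / (deltaS * hapticHz);
}
```

For s = 255, ∆s = 1, α = 1 mm/s and q = 1000 Hz this reproduces the 0.255 mm resolution derived above.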

3.3.2 Collision detection and volumetric interaction

Collision detection must be performed in real time; hence an efficient collision detection method is critical for stable haptic rendering. Our collision detection is based on an implicit function and an Oriented Bounding Box (OBB) tree structure. For the part-drilling case, the drill tool is represented as a 3D object modeled from a sphere and a cylinder, representing the drill head and the shank respectively. The sphere is used to remove material volume, thereby generating both drilling forces and torques that are conveyed to the user, as shown in Figure 3-39.

Figure 3-39 Schematics of collision detection and force calculation.

The volumetric model is organized with an OBB-tree structure to accelerate collision detection. A collision between the drill head and a voxel is detected by checking whether the distance between them is smaller than the drill head radius r. If it is, the scalar value of the voxel is reduced according to the erosion model, as will be explained in the next section.

3.3.3 The erosion model

The collision between the drill bit and a voxel is detected by checking whether the voxel is within the effective cutting volume, as shown in Figure 3-39. If it is, the voxel's original scalar value s is decreased by a value ∆s, approximated by the following equation:

∆s = η ⋅ ∆d ⋅ s / l                                                (3-21)

where η is a constant, ∆d is calculated by Eqn. 3-18, and l is the voxel size.
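A single erosion step, combining the sphere-distance test of Figure 3-39 with Eqn. 3-21, might look as follows. The names are illustrative, and the voxel is tested directly here, whereas the thesis system reaches voxels through the OBB tree.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

double distance(const Vec3& a, const Vec3& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// Returns the voxel's new scalar value after one haptic loop.  A voxel inside
// the drill-head sphere loses ds = eta * deltaD * s / l (Eqn. 3-21).
double erodeVoxel(double s, const Vec3& voxel, const Vec3& headCenter,
                  double headRadius, double eta, double deltaD, double l) {
    if (distance(voxel, headCenter) >= headRadius) return s;   // no contact
    double ds = eta * deltaD * s / l;
    return (s > ds) ? s - ds : 0.0;                            // clamp at zero
}
```

With s = 255, l = 0.255 mm and ∆d = 0.001 mm, one step removes η scalar units, matching the erosion-rate discussion below.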

The influence of the erosion rate on the drilling force, as defined by the above equation, is depicted in Figure 3-40. In the simulation experiments, the drill tool (Figure 3-37(b)), modeled with a sphere of diameter 6 mm, is pushed through a block of material at a constant speed of 1 mm/s. As can be seen from the comparisons, when the erosion rate increases, the drilling force becomes more unstable. The magnitude of the vibration can be as large as 10% of the drilling force at an erosion rate of 4/255, and for erosion rates larger than 4/255 the drilling force becomes even more unstable.


(a)

(b)


(c) Figure 3-40 Erosion rate vs. drilling force.

3.3.4 Force modeling

Allotta et al. (Allotta, Belmonte et al. 1996) presented a theoretical equation for the thrust force required to drill a hole and reported good correlation with experimental data. The thrust force F and torque Tz are described as:

F = Ks ⋅ α ⋅ (D / 2) ⋅ sin(β / 2)                                  (3-22)

Tz = 5 Ru ⋅ α ⋅ D^2 / 8                                            (3-23)

where Ks is the specific cutting energy, α is the feed rate expressed as unit length per revolution, D is the diameter of the drill bit, β is the convex angle between the main cutting lips, and Ru is the unitary ultimate tensile load.


In our method, the thrust force Ft is calculated with the following equation (see Appendix A):

Ft = [kp ⋅ l^3 / ((f + 0.5 ktc d ω) t)] ⋅ Σ_{i=1..n} (OXi / |OXi|) si          (3-24)

where kp is the unit power consumption, ktc is a constant related to cutting conditions, f is the feed rate (mm/s), l is the voxel size, d is the drill head diameter, O is the current drill head center, Xi is the position of the ith voxel, si is the scalar value of the ith voxel, ω is the angular rotating velocity of the drill, and t is the time step. Using this force model, we conducted several simulation experiments to investigate the relations between the force and the tool radius and cutting feed rate.

From Figure 3-41(a), we can see that the force and the tool radius have a positive linear relation. This seems to contradict Eqn. 3-24, where the radius (or diameter) appears in the denominator before the summation term. It can be explained by noting that the removed voxel volume, represented by the summation term in the above equation, grows with the square of the tool radius. As can be seen from Figure 3-41(b), the relation between force and feed rate has a similar trend; when the feed rate is smaller than 6 mm/s, the relation is almost linear.


(a)

(b) Figure 3-41 Plots of force vs. tool radius (a) and feed rate (b).


3.3.5 Torque modeling

The drilling torque T is calculated as (see Appendix A):

T = [ktc ⋅ kp ⋅ d ⋅ l^3 / ((2f + ktc d ω) t)] ⋅ Σ_{i=1..n} (OXi × OO′) si      (3-25)

where kp is the unit power consumption, ktc is a constant related to cutting conditions, l is the voxel size, d is the drill head diameter, ω is the burr's angular rotating velocity, f is the feed rate (mm/s), t is the time step, si is the scalar value of the ith voxel, O is the current drill head center, Xi is the position of the ith voxel, and OO′ is the unit vector along the drill shank axis. We also investigated the relations between the drilling torque and the tool radius and feed rate through several simulation experiments using the above equation. As can be seen from Figure 3-42(a), the relation between torque and tool radius is almost quadratic, while the relation between torque and feed rate is almost linear.

(a)


(b) Figure 3-42 Plots of torque vs. tool radius and feed rate.
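The torque summation of Eqn. 3-25 can be sketched in the same per-voxel style as the thrust force; the names are illustrative and the voxel list stands in for the OBB-tree query result.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// Eqn. 3-25: T = [ktc*kp*d*l^3 / ((2f + ktc*d*w)*t)] * sum_i (OXi x OO') * si
// where axis is the unit vector OO' along the drill shank.
Vec3 drillTorque(const std::vector<Vec3>& X, const std::vector<double>& s,
                 const Vec3& O, const Vec3& axis, double kp, double ktc,
                 double l, double f, double d, double w, double t) {
    double coef = ktc * kp * d * l * l * l / ((2.0 * f + ktc * d * w) * t);
    Vec3 T{0, 0, 0};
    for (std::size_t i = 0; i < X.size(); ++i) {
        Vec3 ox{X[i].x - O.x, X[i].y - O.y, X[i].z - O.z};
        Vec3 c = cross(ox, axis);
        T.x += coef * s[i] * c.x;
        T.y += coef * s[i] * c.y;
        T.z += coef * s[i] * c.z;
    }
    return T;
}
```

A voxel on the shank axis contributes nothing (its cross product vanishes), while off-axis voxels contribute torque perpendicular to both the axis and their offset.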

3.4 Conclusions and discussions

Three kinds of haptic rendering methods are presented in this chapter, namely point-based, surface-based, and volume-based haptic rendering methods. In point-based haptic rendering, constraint-based algorithms and a neural network-based cutting force rendering algorithm are presented. The limitations of proxy-based constraint methods are pointed out, and a force-based snap algorithm is proposed to overcome them. In rendering touchable rigid surfaces, a spring-damper model is used. In surface-based haptic rendering, using efficient collision detection methods, a distance-based method and a simulation-based method are proposed for calculating forces in object-object interactions. In the simulation-based method, the direct proxy rendering method decouples the rigid-body dynamic simulation from the haptic thread to alleviate the computational demand. By integrating the Open Dynamics Engine (ODE) into the simulation system, we can deal with various dynamic interactions with the haptic tool. We also introduce the 6-DOF haptic rendering technique into shared virtual environments, in order to improve the richness of haptic interaction over a network, by means of the remote virtual coupling method. In volume-based haptic rendering, the influences of the volumetric resolution and the erosion rate on the drilling forces are investigated, and the relations between the drilling forces and the tool parameters and cutting conditions are also studied.

In 6-DOF haptic rendering, the advantages of virtual coupling techniques are reduced interpenetration and higher stability. However, the main disadvantage is that the coupling may introduce perceptible haptic artifacts (Ortega, Redon et al. 2006), which can be mitigated by an implicit integration method (Otaduy and Lin 2005). Since the local virtual coupling and remote virtual coupling proposed in this research are both virtual coupling methods, haptic artifacts may be perceived by users when exploring complex shared virtual environments; how to reduce such artifacts needs further study. The work on haptic interaction over a network is preliminary. How to extend our haptic rendering methods to complex shared virtual environments, e.g. with multiple dynamic objects and users, remains a challenging issue, since the effects of network latency, jitter and synchronization need to be further investigated.


CHAPTER 4. HAPTIC-AIDED REVERSE ENGINEERING

The haptic rendering methods presented in Chapter 3 are used to model a number of product design and manufacturing processes. In this chapter, haptic modeling is applied to some difficult problems in reverse engineering. "Reverse engineering (RE) is the process of discovering the technological principles of a device, object or system through analysis of its structure, function and operation." (Wikipedia 2008). It usually involves analyzing a device or software product in detail in order to produce a new one that has the same or improved functions. Nowadays, RE combined with CAD/CAM has become a viable method to create a 3D computer model of an existing physical part. Conventional engineering converts engineering concepts into real objects; in reverse engineering, real objects are converted into engineering concepts. Typically, RE consists of four parts: data acquisition, preprocessing, segmentation and surface fitting, and CAD model creation (Varady, Martin et al. 1997). RE starts with data acquisition, where a physical object is measured using 3D scanning technologies such as coordinate-measuring machines (CMMs), laser scanners, structured-light digitizers, or computed tomography. The measured data, usually represented as a point cloud, lacks topological information and is therefore often processed and modeled into a more usable format such as a triangular mesh, a set of NURBS surfaces, or a CAD model. Optical data acquisition methods are the most popular because of their relatively fast acquisition rates. In our lab, a 3D laser scanner, a VIVID 700 (Konica Minolta®) which works on the principle of triangulation, is used for data capture, as shown in Figure 4-1. Triangulation is a method which uses the locations of, and angles between, light sources and photo-sensing devices to deduce positions. In the measuring process, laser beams are projected at a specified angle onto the surfaces of the objects being scanned. A video camera senses the reflection from the surface, and the positions of the surface points relative to a reference plane are then calculated by geometric triangulation from the known angle and distances. The scanned objects are mounted on a rotational platform in order to produce multiple scans of the surface.

Figure 4-1 VIVID 700.

There are many factors affecting an optical scanner's accuracy, such as accessibility, occlusion, video resolution, distance from the measured surface, laser power, and the surface properties of the artifact (Varady, Martin et al. 1997). Accessibility issues arise when scan data cannot easily be acquired due to the configuration or topology of the part. Occlusion is the blocking of the scanning medium due to shadowing or obstruction. Incomplete model data such as holes and gaps (Figure 4-2) arise primarily from inaccessibility and occlusion, and noise on the boundaries of holes or gaps and on sharp edges is not uncommon due to the factors mentioned above. Therefore, after triangulation, repairing defects such as holes in the model is a necessary step in RE.

Figure 4-2 The scanned models with holes and gaps.

Many studies on hole-filling methods focus on automatic model modification (Curless and Levoy 1996; Chui and Lai 2000; Davis, Marschner et al. 2002; Wang, Wang et al. 2002; Jun 2005; Rayevskaya and Schumaker 2005). Due to the complexity of the regions where holes are generated, automatic model reconstruction methods cannot give satisfactory results for most cases. For example, Figure 4-3(a) and (c) show two models repaired incorrectly by the automatic hole-filling methods of the FreeForm™ Modeling Plus™ system from SensAble Technologies®, compared with the models in Figure 4-3(b) and (d) repaired correctly by our haptic-guided method. In Figure 4-3(a) the holes are automatically filled, but some data are left "floating" on the green meshes; with this automatic method the triangular mesh cannot be guaranteed to be manifold when repairing models with complex geometric deficiencies. In Figure 4-3(c) a model feature, the ridge, is not reconstructed. It is difficult for automatic methods to recover the geometric features of a model when most of the information about these features is missing.

However, in the RE process, a user can easily identify these geometric features because there is a real object being reconstructed that can be referred to. We believe that the user should take the initiative in the hole-patching process in order to obtain a satisfactory model reconstruction.

Figure 4-3 Comparison between the automatic and our haptic-guided hole-filling methods.

In the manual hole-filling process, a user needs to interact frequently with the computer model. However, three-dimensional (3D) interactive editing and modeling of a 3D mesh model remains a problem for current CAD systems, since the screen, the desktop and the mouse provide only two-dimensional (2D) information; it is difficult to handle 3D points, lines, polygons, etc. with traditional CAD interfaces. Recently, many 3D editing and modeling interfaces have been proposed using virtual reality (VR). These include shadow widgets (Herndon, Zeleznik et al. 1992), a semi-transparent moving cursor plane (Jeng and Xiang 1996), a tracked stylus (Sachs, Roberts et al. 1991), and a head-mounted display (Butterworth, Davidson et al. 1992). A novel haptics-based interface and sculpting system for physics-based geometric design was proposed by Dachille et al. (Dachille, Qin et al. 2001). Haptics provides users with a hand-based mechanism for intuitive, manual interaction with virtual environments, enabling realistic tactile exploration and manipulation. Using force-feedback controls, designers, artists, and non-expert users can feel the model representation and modify the object directly as if in a real setting, thus enhancing their understanding of the object's properties and of the overall design. This inspired the idea of introducing haptic techniques into RE. In this chapter, two haptic-aided hole-filling methodologies are explained: a triangular mesh-based one, which is elaborated in section 4.1, and a volumetric one, which is presented in section 4.2. In section 4.3, we compare these two methods and draw some conclusions.

4.1 Triangular mesh-based hole filling

In this section, a novel triangular mesh-based hole-filling method based on haptics is proposed for RE. A prototype system has been implemented using a PHANToM® device from SensAble Technologies®, as shown in Figure 4-4. This system provides the user with a more intuitive and effective tool for modifying the model in RE than traditional CAD systems.

Figure 4-4 The haptic-aided hole-filling system configuration.

4.1.1 The hole-filling process

The hole-filling process consists of six parts: hole identification, boundary smoothing, stitching, polygon triangulation, triangle subdivision and mesh deformation, as shown in Figure 4-5. A user imports a scanned model, and the system automatically establishes the relations between triangles, thereby identifying the holes. The system then regenerates the hole-boundary edges automatically and asks the user to adjust the boundary positions manually. After the smoothing operation, the user can decompose complex holes into simpler ones by means of the stitching operation. Then an automatic polygon triangulation of the hole boundaries is applied to generate triangular meshes in each of the simpler hole regions. If the user is not satisfied with the result, he or she can sculpt the hole meshes. Finally, a watertight model is generated.

Figure 4-5 Flowchart of haptic hole filling.

4.1.2 Hole identification

We first separate the mesh into boundary and inner meshes. Boundary meshes are composed of triangles which share boundary vertices; a boundary vertex can be found by inspecting boundary edges, which are edges shared by only one triangle. Algorithm 4-1 shows how to separate boundary meshes from inner meshes, and Figure 4-6 shows two examples of hole-boundary meshes identified by the system. For complicated models, missing data from the 3D scanner are commonplace: boundaries with long, narrow, and even isolated triangles like islands may appear, such as those shown in Figure 4-6(b). Superimposed vertices and invalid triangles can also occur due to incomplete data, noise, and the limitations of the triangulation algorithms adopted by some scanner software. It is therefore necessary to check all these possibilities and correct them at the beginning.

Algorithm 4-1 Pseudo-code of mesh boundary searching.
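The boundary-edge test at the heart of Algorithm 4-1 (an edge belonging to exactly one triangle is a boundary edge) can be sketched with an edge-count map; the types and function names below are illustrative, not the thesis implementation.

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <map>
#include <utility>
#include <vector>

using Tri = std::array<int, 3>;                       // three vertex indices

// Returns the undirected edges that belong to exactly one triangle.
std::vector<std::pair<int,int>> boundaryEdges(const std::vector<Tri>& tris) {
    std::map<std::pair<int,int>, int> count;          // edge -> #incident tris
    for (const Tri& t : tris)
        for (int e = 0; e < 3; ++e) {
            int a = t[e], b = t[(e + 1) % 3];
            if (a > b) std::swap(a, b);               // canonical vertex order
            ++count[{a, b}];
        }
    std::vector<std::pair<int,int>> result;
    for (const auto& kv : count)
        if (kv.second == 1) result.push_back(kv.first);
    return result;
}
```

A single triangle yields three boundary edges; two triangles sharing an edge yield four, since the shared edge is counted twice and therefore excluded.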

Figure 4-6 Hole boundaries. (a) hole boundaries marked in green colour; (b) “islands and peninsulas” in a complex hole.


After the boundary meshes are identified from the triangular mesh, the boundary edges need to be found and sequentially ordered. In the boundary-edge classification and ordering algorithm shown in Algorithm 4-2, we use a recursive procedure to find the next and previous boundary edges, as shown in Algorithm 4-3 and Algorithm 4-4.

Algorithm 4-2 Pseudo-code of boundary edge classification and ordering.

Algorithm 4-3 Pseudo-code of searching the next edge.


Algorithm 4-4 Pseudo-code of searching the previous edge.

4.1.3 Smoothing

Points on boundaries are prone to be unreliable, since boundaries often lie on the fringes of concave regions of the model, where the scanner light reflected from the surface is feeble. The lengths of boundary edges can differ significantly from each other, which hampers the following steps such as triangulation and sculpting. Furthermore, the topology at boundaries is sometimes incorrect, with problems such as intersections and unnecessary triangulation. Therefore, before hole filling, a smoothing operation should be applied to make the boundary meshes smooth and to rectify all incorrect boundary topologies.

The boundary edges greatly affect the results of triangulation and sculpting. Hence, sharp changes of edge positions and uneven distributions of edge points must be avoided. A cubic spline interpolation based on chord-length parameterization is applied as an automatic smoothing operation. After this automatic interpolation, the boundary edges are regenerated and the points on the boundary are nearly evenly distributed. However, this automatic method cannot guarantee smooth boundaries and correct topologies, so the user may need to check and modify them manually by pulling or pushing points to the desired positions. With the haptic tool, picking and pulling or pushing 3D points becomes an easy task. In the proposed system, a PHANToM® Desktop haptic device is used for selecting 3D points and triangles: all points and surfaces are touchable and deformable but not penetrable. When pushing or pulling a point, the user feels a force provided by the PHANToM® device according to a spring-damper model; when picking a point, the user not only sees the point highlighted but also feels the contact.

Incorrect triangulations in boundary regions should also be corrected. A user can delete and re-triangulate a region with the aid of the haptic tool, which makes the process easier than picking a point in three dimensions with a traditional mouse-based interface. When the user manipulates the haptic stylus to poke a triangle, the haptic rendering is done by means of the proxy method described in section 3.1.2 of Chapter 3. When the user touches a triangle and presses the button on the haptic stylus, the corresponding triangle is selected and deleted. The user can also select three points in counter-clockwise order to form a new triangle facet. To select a 3D point on a triangle with the haptic stylus, the user first touches the triangle, slides the stylus towards the target point, and presses the button when the stylus is attached to the point, as shown in Figure 4-7, in a similar way to "snap to grid". An attractive magnetic force, as described in section 3.1.1 of Chapter 3, is applied to each point in order to facilitate point selection. Each point has a force influence radius: when the user moves the haptic stylus to within this radius, a magnetic force is applied to the stylus, attracting it to the point, and a dialog box pops up asking whether this point should be selected. If yes, the point is selected; otherwise, the force is released and the stylus can be used for other operations.

Figure 4-7 Point selecting method.
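The snap-to-point behaviour can be sketched as a radius-gated spring; rho and k are tuning parameters we introduce for illustration, not values from the thesis.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Magnetic attraction for point selection: inside the influence radius rho the
// stylus is pulled toward the point by a spring of stiffness k; outside, no force.
Vec3 snapForce(const Vec3& stylus, const Vec3& point, double rho, double k) {
    Vec3 d{point.x - stylus.x, point.y - stylus.y, point.z - stylus.z};
    double r = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    if (r >= rho) return {0.0, 0.0, 0.0};
    return {k * d.x, k * d.y, k * d.z};               // directed at the point
}
```

Gating the force by the influence radius keeps distant points from tugging the stylus, so only the nearest candidate point is felt.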

4.1.4 Stitching

If a hole is missing information such as ridges or valleys, no automatic method can give satisfactory results. In these cases, manual stitching that provides topological clues is necessary. When dealing with complex holes, a user can manually separate a complex hole into several simple holes with the haptic stitching operation. Figure 4-8 shows two examples: after the operation, a long, narrow hole is decomposed into two simple holes and a ridge is formed, as shown in Figure 4-8(b) and (c). The stitching operation is also useful for eliminating islands and peninsulas so as to shape simpler holes, as shown in Figure 4-8(d) and (e). A spring-damper model is used to simulate stitching so that the user feels a force as if pulling a thread; the magnitude of the force is proportional to the distance between the two vertices being stitched, as shown in Figure 4-8(a).

Figure 4-8 Stitching operation.
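The stitching force, proportional to the distance between the two vertices being stitched and damped for stability, can be sketched as follows; the gains k and c are illustrative.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Spring-damper stitching force felt while pulling vertex a toward vertex b;
// relVel is the grasped vertex's velocity relative to the target.
Vec3 stitchForce(const Vec3& a, const Vec3& b, const Vec3& relVel,
                 double k, double c) {
    return { k * (b.x - a.x) - c * relVel.x,
             k * (b.y - a.y) - c * relVel.y,
             k * (b.z - a.z) - c * relVel.z };
}
```

Doubling the vertex separation doubles the felt force, giving the "pulling a thread" sensation described above.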

4.1.5 Polygon triangulation method

An optimal polygon triangulation method based on the classic ear-cutting algorithm is developed for complex polygon triangulation in 3D in this hole-filling system, in order to provide a more robust method, as shown in Algorithm 4-5 and Algorithm 4-6.


The basic idea of this algorithm is as follows: first, find the optimal ear of the polygon, then cut it off. This process runs recursively until only three vertices are left, which finishes the triangulation. The optimal ear is selected from all the ears of the polygon based on constraints such as minimal distance, even angles, and minimal normal variation. Although the time complexity of this algorithm is O(n²), the number of polygon edges in our case is always small (complex holes are simplified by the stitching operations), so the calculation time and memory requirements are reduced by the prior stitching process.

Algorithm 4-5 Pseudo-code of hole boundary triangulation.

Algorithm 4-6 Pseudo-code of ear-cutting method.
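A planar sketch of the ear-cutting idea, for a simple counter-clockwise polygon in 2D. The thesis version works on 3D boundaries and scores ears by the optimality constraints listed above (minimal distance, even angles, minimal normal variation); here, as a simplifying assumption, the first valid ear is clipped.

```cpp
#include <array>
#include <cassert>
#include <vector>

struct P2 { double x, y; };

static double cross(const P2& o, const P2& a, const P2& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// Point-in-triangle test for a CCW triangle (a, b, c).
static bool inTriangle(const P2& p, const P2& a, const P2& b, const P2& c) {
    return cross(a, b, p) >= 0 && cross(b, c, p) >= 0 && cross(c, a, p) >= 0;
}

// Triangulates a simple CCW polygon; returns triangles as index triples.
std::vector<std::array<int,3>> earCut(const std::vector<P2>& poly) {
    std::vector<int> idx(poly.size());
    for (std::size_t i = 0; i < poly.size(); ++i) idx[i] = (int)i;
    std::vector<std::array<int,3>> tris;
    while (idx.size() > 3) {
        bool clipped = false;
        for (std::size_t i = 0; i < idx.size(); ++i) {
            int ia = idx[(i + idx.size() - 1) % idx.size()];
            int ib = idx[i];
            int ic = idx[(i + 1) % idx.size()];
            if (cross(poly[ia], poly[ib], poly[ic]) <= 0) continue; // reflex
            bool ear = true;
            for (int j : idx)                 // no other vertex may lie inside
                if (j != ia && j != ib && j != ic &&
                    inTriangle(poly[j], poly[ia], poly[ib], poly[ic])) {
                    ear = false; break;
                }
            if (!ear) continue;
            tris.push_back({ia, ib, ic});
            idx.erase(idx.begin() + i);       // cut the ear off
            clipped = true;
            break;
        }
        if (!clipped) break;                  // degenerate input: give up
    }
    if (idx.size() == 3) tris.push_back({idx[0], idx[1], idx[2]});
    return tris;
}
```

Any simple polygon with n vertices triangulates into n − 2 triangles, which the recursion reproduces one ear at a time.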

4.1.6 Triangle subdivision method

After hole filling, the density of triangles in boundary regions is usually much higher than that of inner regions, and slim or narrow triangles may be formed at the boundaries. Therefore, a subdivision technique is applied to these triangles so that the subsequent deformation process can be done more easily. A subdivision surface is a polygonal mesh that has been subdivided and smoothed: every polygon of the mesh is sliced into two or more polygons, and each vertex is moved to a calculated position. The subdivision method used in this system is based on modified Loop subdivision surfaces (Loop 1987). The calculations of edge points and vertices are based on the following equations:

Vj^(i+1) = (3/8)(Vj^i + V(j+1)^i) + (1/8)(V(j+2)^i + V(j+3)^i)                 (4-1)

Vk^(i+1) = (1 − Nα)Vk^i + α Σ_{j=0..N−1} Vj^i                                  (4-2)

α = (1/N)[5/8 − (3/8 + (1/4)cos(2π/N))^2]

where N is the number of the surrounding vertices. The edges on the boundary cannot be interpolated with new points, in view of topological consistency. The subdivision method picks a triangle and checks whether its edges are on the boundary; if an edge is not on the boundary, its edge point is interpolated with the Loop edge rule, Eqn. (4-1). When updating old vertices, the system also checks whether an old vertex is on the boundary. If so, this vertex is copied unchanged to the next level; otherwise, it is updated with the Loop vertex rule, Eqn. (4-2). Take a triangle ∆V0^i V1^i V2^i as an example, as shown in Figure 4-9. If the triangle has two boundary edges V0^i V1^i and V1^i V2^i, then only the edge point V02^(i+1) on edge V0^i V2^i is calculated. Because all three vertices of this triangle are on the boundary, they are copied to V0^(i+1), V1^(i+1) and V2^(i+1) respectively. Hence, this triangle is divided into two triangles, ∆V0^(i+1) V1^(i+1) V02^(i+1) and ∆V02^(i+1) V1^(i+1) V2^(i+1), as shown in Figure 4-9(a). If the triangle has only one boundary edge V1^i V2^i, then the edge points V01^(i+1) and V02^(i+1) are calculated, V1^i and V2^i are copied to V1^(i+1) and V2^(i+1) respectively, and the vertex V0^i is updated with Eqn. (4-2) to get V0^(i+1). Comparing the length d1 of edge V1^(i+1) V02^(i+1) with the length d2 of edge V2^(i+1) V01^(i+1): if d1 < d2, this triangle is divided into three triangles, ∆V0^(i+1) V01^(i+1) V02^(i+1), ∆V01^(i+1) V02^(i+1) V1^(i+1) and ∆V02^(i+1) V2^(i+1) V1^(i+1), as shown in Figure 4-9(b).

Figure 4-9 Boundary triangle subdivision.
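The Loop rules above can be sketched per coordinate; the function names are ours, chosen for illustration.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Alpha coefficient of the Loop scheme for a vertex of valence N.
double loopAlpha(int N) {
    const double PI = std::acos(-1.0);
    double c = 3.0 / 8.0 + 0.25 * std::cos(2.0 * PI / N);
    return (5.0 / 8.0 - c * c) / N;
}

// Edge rule, one coordinate: new edge point from the edge's two endpoints
// (v0, v1) and the two opposite vertices (v2, v3).
double edgePoint(double v0, double v1, double v2, double v3) {
    return 0.375 * (v0 + v1) + 0.125 * (v2 + v3);
}

// Vertex rule, one coordinate: smoothed interior vertex from its one-ring.
double vertexUpdate(double v, const std::vector<double>& ring) {
    double a = loopAlpha(static_cast<int>(ring.size()));
    double sum = 0.0;
    for (double n : ring) sum += n;
    return (1.0 - ring.size() * a) * v + a * sum;
}
```

For a regular valence-6 vertex, α = 1/16, so a vertex surrounded by a ring of equal values is pulled 3/8 of the way toward the ring average.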

4.1.7 Surface sculpting

Because the automatic triangulation and Loop subdivision (Loop 1987) methods introduce artificial geometry into the boundary regions, the filled hole regions may differ considerably from the physical object. Hence, a user can sculpt the surfaces in hole regions with a haptic tool. Surfaces generated by the subdivision methods mentioned above can be easily deformed by editing internal vertices of the control net at some level of the subdivision. Small-scale deformations are made by moving internal vertices in a later subdivision iteration, while large-scale deformations correspond to moving vertices in one of the first subdivision iterations, as shown in Figure 4-10. This approach thus allows free-form multiresolution editing, which is natural to subdivision schemes. The corresponding force sent to the haptic stylus is calculated by means of a spring model.

Figure 4-10 Sculpting operation.

4.1.8 Implementation and case studies

A prototype hole-filling system based on the proposed methods has been implemented. In our implementation, the haptic and visual rendering machine is a desktop PC with a dual 2.2 GHz CPU and 1.0 GB RAM. The haptic device in this system is a PHANToM® Desktop with 6 degrees of freedom (DOF) of position sensing and 3 DOF of force feedback. The software is written in VC++ with OpenGL and the 3D Touch™ GHOST® API for graphic and haptic rendering. Using the proposed system, several scanned models with typical complex holes have been repaired successfully. An example of spine model reconstruction is shown in Figure 4-11. A user can feel force feedback when he/she touches the surface of the model, as indicated by the blue cone cursor. Force feedback makes the smoothing, stitching and sculpting operations easier than traditional mouse manipulation. Figure 4-12, Figure 4-13 and Figure 4-14 show three further examples of hole-filling results.

Figure 4-11 Hole filling for a spine model.

Figure 4-12 Hole filling for a jawbone model.


Figure 4-13 A fighter model repaired by our system.

Figure 4-14 A snoopy model repaired by our system.


4.2 Volume-based hole filling

SensAble's FreeFormTM haptic modeling system, combined with the PHANToM® device, is the first computer-aided industrial design (CAID) tool that allows designers or artists to sculpt and form virtual clay using tools and techniques similar to those employed in the physical world (SensAble Technologies 2004). FreeFormTM models can be exported as STL files, and primitive hole filling is possible within FreeFormTM. This volume-based hole-filling process consists of the steps shown in Figure 4-15.

Figure 4-15 Flow chart of volume-based hole filling.


A few examples are used to demonstrate this volume-based method. After a triangular mesh is imported from an STL file, a triangulation method is used to triangulate its boundaries, thereby filling each hole with a patch. This technique works well for simple holes in nearly flat surfaces, as shown in Figure 4-16. However, for convoluted holes and holes with multiple boundaries it is likely to produce self-intersecting geometry, as shown in Figure 4-17.

Figure 4-16 Hole filling using triangulation in FreeFormTM: (a) the nearly flat hole; (b) the hole filled by triangulation.


Figure 4-17 Self-intersection in complex hole filling by triangulation.

In the third step, the triangular mesh is converted into a volumetric model composed of voxels. In FreeFormTM the resolution of a model depends on the voxel size. Because the model conversion introduces accuracy problems, a high resolution is usually needed; however, high resolution requires heavy computation, thereby reducing system performance. Several resolution levels can be selected by a user: 0.2244 mm, 0.4489 mm, 0.8978 mm, 1.7956 mm, or a custom resolution. Figure 4-18 compares the graphic rendering of volumetric models at different resolutions. It can be clearly seen that the high-resolution model preserves fine features better than the lower-resolution model. If we set a custom resolution of 0.1 mm for a fighter model with dimensions of 75×90×63 mm3, it becomes impossible to manipulate the model in real time in FreeFormTM. Another problem of converting the surface mesh into a volumetric model is that the volumetric model may contain small pieces of debris where thin features exist, as shown in Figure 4-19. The debris can be removed by a user with the carving tools provided by FreeFormTM. However, removing the debris and recovering the thin features in FreeFormTM is not an easy job.

Figure 4-18 Volumetric models with different resolutions: (a) voxel resolution 0.4489 mm; (b) voxel resolution 1.7956 mm.

Figure 4-19 A spine model with debris.

FreeFormTM provides various tools for intuitively manipulating virtual clay, including add, carve, smooth, tug, smudge, and attract tools, as shown in Figure 4-20. Using these tools, a user can modify or re-design the model with reference to the original one. Several examples of redesign and modification are given in Figure 4-21.

Figure 4-20 Tools of FreeFormTM.

(a) A fighter model is imported into FreeFormTM


(b) The repaired model.

(c) A snoopy model is imported into FreeFormTM.


(d) The model is repaired and re-designed.

Figure 4-21 Models modified and re-designed in FreeFormTM.

Before exporting the modified model, the model size must be reduced. Table 4-1 lists the model sizes (triangle counts) at different voxel resolutions. The original model has 21,753 triangles. We therefore reduce the model size by using a resolution of 0.4489 mm multiplied by a scale factor of 0.1 to obtain a model size similar to the original one.

Table 4-1 Voxel resolution and its triangle number of a fighter model.


4.3 Conclusions

In this chapter we have applied a haptic interface to repair models scanned by optical digitizers using two different methods: a triangular mesh-based method and a volume-based method. In the first method, before filling a complex or compound hole, a haptics-based boundary edge smoothing method is applied. A user can perform stitching operations with a haptic tool to decompose a complex hole into several simpler holes, each of which is then automatically triangulated. Combining automatic and manual hole-filling methods improves the system's flexibility, robustness and effectiveness. A sculpting operation based on Loop subdivision is developed to compensate for the defects of the automatic methods. In the second method, we exploit a commercial physics-based 3D modeling tool, FreeFormTM, to fill holes and gaps in scanned models. The two methods have their own advantages and disadvantages, as listed in Table 4-2. It should be noted that the triangular mesh-based method is applicable to objects with small or medium triangle counts. If the triangle count is too large, the program cannot guarantee the 1 kHz haptic update rate, thereby inducing haptic instability. One solution might be to separate the holes and boundaries from the other parts of the model: the intact parts are shown only for graphic display, while the holes and boundaries can be physically touched and manipulated by a user. The volume-based method requires a model conversion, so models with a large number of triangles can be processed; on the other hand, the conversion can raise model accuracy problems.


|            | triangular mesh-based                                                            | volume-based                                                                       |
|------------|----------------------------------------------------------------------------------|------------------------------------------------------------------------------------|
| model size | small or medium                                                                  | small, medium or large                                                             |
| accuracy   | no change to the part                                                            | reduced                                                                            |
| operation  | intuitive to manipulate points and triangles; for small triangles, it might be tedious | easy to modify models with various tools; for thin features, it needs to deal with debris |

Table 4-2 Comparisons of two hole-filling methods.


CHAPTER 5. HAPTIC-AIDED VIRTUAL MACHINING

In this chapter, we present three virtual machining operations based on haptic techniques: virtual turning, grinding and drilling. We focus mainly on part modeling and haptic rendering. In the virtual turning and grinding simulations, a NURBS deformation algorithm is proposed for workpiece modeling, which is expected to accelerate haptic and graphical rendering. For the grinding operation, a simple NURBS trimming technique is used. In a simulation, a user can manipulate a virtual tool to cut or polish a virtual workpiece with force feedback as if he/she were working on a real turning machine. The force feedback varies with simulation parameters such as cutting depth and material properties (e.g. hardness, surface friction and damping coefficients), which can be set by the user before operation. During operation, tool positions and their corresponding forces are recorded in order to provide data for analyzing the performance of tool paths. Vibration, friction and viscous effects felt by the user improve the simulation fidelity. The proposed system provides an intuitive way not only to model three-dimensional revolved objects but also to train lathe machining and grinding operations. Several examples of creating parts of different shapes are given to illustrate the ease and effectiveness of the system. In the virtual drilling system, a user can manipulate a virtual drill tool to freely drill a part with force and torque feedback based on the haptic rendering algorithms explained in section 3.2 of Chapter 3. A hybrid model combining volumetric and polygonal parts is proposed for realistic haptic rendering and efficient graphic rendering. The haptic rendering for the drilling process is based on high-resolution volumetric data and multi-point collision detection, as elaborated in section 3.3 of Chapter 3. The Marching Cubes algorithm is applied locally to contour the volume data in real time in order to accelerate graphic rendering.

5.1 Haptic-aided virtual turning

A turning operation involves a specially shaped tool exerting a concentrated force on the work material to modify its shape, dimensions and surface roughness. The objective of this physical process is to remove excess material from an oversized workpiece. Even in modern industry, machining technologies such as turning are widely used on shop floors. However, machines for such operations are both expensive and difficult to operate. The rapid increase in computing power and computational methods has paved the way for increasing utilization of virtual reality (VR) technology, which provides an economic and efficient tool for small-to-medium enterprises (SMEs) to survive in increasingly competitive manufacturing industries (Zorriassatine, Wykes et al. 2003). VR applications in manufacturing have been classified into three groups: manufacturing processes, operations management, and design. Virtual machining (VM), which simulates important manufacturing processes such as turning, milling, drilling and grinding, has been studied over the years (Mujber, Szecsi et al. 2004). Previous VM research mainly focused on the factors affecting the quality and machining time of the material removal process, as well as the relative motion between the tool and the workpiece (Zorriassatine, Wykes et al. 2003; Mujber, Szecsi et al. 2004). The technology of VM has been used in various applications for different purposes. Zhuozhi et al. (Zhuozhi, Shengyi et al. 1998) studied a virtual machining process to simulate machining and to predict problems such as collisions during machining. Mayr and Heinzelreiter (Mayr and Heinzelreiter 1991) used a spatial enumeration representation to model and simulate the robotic NC machining process in order to verify the reliability of NC programs. In (Ruspini, Kolarov et al. 1997), components of a virtual machining system for evaluating and optimizing cutting performance in NC machining were presented, including the prediction of cutting forces over a wide range of cutting conditions, the surface form error, and transient cutting simulations. A new application framework, "enhanced virtual machining", was developed by Lin and Shen (Lin and Shen 2004) to quantitatively predict part geometry errors. Wang et al. (Wang, Wang et al. 2002) proposed an illumination model to build a realistic turning scene, including chip formation during the machining operation. Haptic rendering in milling and grinding operations was also reported in (Chang 2002) and (Balijepalli and Kesavadas 2003) respectively. Training is a major field of application for VR technology. Haptic feedback can recreate the realistic sensations of manipulating tools (Burdea 1996; Crison, Lecuyer et al. 2005). Crison et al. have recently proposed a Virtual Technical Trainer (VTT) based on interactive multi-sensory manipulation of the cutter of a virtual milling machine, with visual, audio and haptic (force) feedback (Crison, Lecuyer et al. 2005). In their system, the trainee can use the cutting tool to carve the material part based on a simplified deformation algorithm close to dexel-based ones. In volume representation, accuracy and speed are always irreconcilable, and time is at a premium in haptic computation. Hence, instead of using voxel-based algorithms, a NURBS deformation algorithm is adopted in our system for better accuracy and higher resolution. Although the PHANToM® device of SensAble Technologies® used in this system is not a perfect tool for metal machining training due to the lack of

ergonomic constraints (Mellet-D'Huart, Michela et al. 2004), several haptic features such as friction, viscous and force feedback effects are added to our system to better reflect the machining process. These effects can add value to the machining simulations (Balijepalli and Kesavadas 2004). A haptic virtual turning operation system (HVTOS), shown in Figure 5-1, is proposed for VM in this thesis. The system features user participation with multi-sensory (haptic, audio and visual) feedback. Various kinds of revolved models can be created virtually with HVTOS in an intuitive, easy, fast and economical way. It also provides a useful tool for turning operation training. For training purposes, the design of HVTOS is based on the following pedagogical hypotheses: interactive manipulation and multi-sensory stimulation can improve the learning process; haptic feedback can improve the understanding of both the turning operation and the relationships among mechanical parameters; and it enables savings on training costs.

In a training simulation, the trainee can manipulate a virtual cutting or grinding tool to cut or polish a virtual workpiece with force feedback, which varies as a function of simulation parameters such as cutting depth and material properties, thereby helping trainees better understand the turning process. Machining parameters such as turning speed, the size of the primitive workpiece and material properties can be set from a dialogue box as shown in Figure 5-1. When the virtual machining process is finished, a model of revolution represented as NURBS is obtained. Vibration, friction and viscous effects and sounds that improve the simulation fidelity are also implemented.

Figure 5-1 The HVTOS system setup.

5.1.1 Tool modeling

When the user selects the cutting tool, cutting takes place only when the tool is placed on the tangible work plane. Once the tool is placed on the plane, it is kept there unless a force is applied to move it away. The work plane is a horizontal plane that passes through the centre axis of the stock, as shown in Figure 5-2; it is automatically defined when a stock is chosen. When the user moves the cutting tool, the position of the tool tip is updated in the servo loop at a rate of about 1000 Hz to detect whether there is a collision with the workpiece. If a collision is detected, the excess material is removed from the workpiece in the form of a layer chip.


Figure 5-2 Software interface of HVTOS.
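As a rough illustration of the servo-loop removal step, the tool tip can be tested against a radial profile of the workpiece and material removed where the tip penetrates. The profile-of-radii representation and all names below are assumptions for illustration; the actual system deforms a NURBS profile instead, as described in section 5.1.2:

```cpp
#include <cmath>
#include <vector>

// Hypothetical workpiece profile: radius sampled at evenly spaced positions
// along the rotation axis.
struct Profile {
    double z0 = 0.0;              // axial position of the first sample
    double dz = 1.0;              // axial spacing between samples
    std::vector<double> radius;   // current radius at each sample
};

// One servo-loop step: if the tool tip at (tipZ, tipR) is inside the rotating
// stock (tipR < current radius at tipZ), cut the profile down to tipR and
// return the penetration depth, which can scale the feedback force.
double cutStep(Profile& p, double tipZ, double tipR) {
    int i = static_cast<int>(std::lround((tipZ - p.z0) / p.dz));
    if (i < 0 || i >= static_cast<int>(p.radius.size())) return 0.0;  // off the stock
    double depth = p.radius[i] - tipR;
    if (depth <= 0.0) return 0.0;   // no collision, no material removed
    p.radius[i] = tipR;             // material removed on this revolution
    return depth;
}
```

A second pass with the same tip position returns zero depth, since the material is already gone.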

The workpiece rotates at a speed, in revolutions per minute, set by the user through the dialogue box. In the cutting process, the angular velocity affects the force magnitude and the vibration frequency fed back to the user. The cutting force model has been presented in section 3.1.3 of Chapter 3.

5.1.1.1 Grinding Tool Rendering

The shape of the grinding tool is a disk whose proxy is based on the collision of a rectangle rather than a point, as shown in Figure 5-3. In collision detection, the system updates the position of the rectangle, which passes through the axes of the workpiece and the tool, and tests whether it collides with the workpiece. If there is an intersection between the rectangle and a triangle T_i of the part, the intersection depth d_i is calculated from the intersection points a and b and the triangle normal N_i (Figure 5-3(b)):

d_i = |cd| \sin\alpha        (5-1)

where c is the midpoint of a and b, the line segment cd lies on the rectangle and is perpendicular to the tool axis, and \alpha is the angle between cd and the triangle T_i. The average intersection depth d over the n intersected triangles of the workpiece is

d = \frac{1}{n}\sum_{i=1}^{n} d_i        (5-2)

where d is taken as the grinding penetration depth. The force magnitude is calculated with the following equation:

F = kd - \zeta V        (5-3)

where k is a stiffness constant, d is the grinding depth, \zeta is a damping constant and V is the velocity of the grinding tool. The force direction is given by the average normal N of all intersected triangle normals:

N = \frac{1}{n}\sum_{i=1}^{n} N_i        (5-4)

The grinding effect is expressed by a variation in the degree of finish. Since the workpiece is modeled by NURBS, as described in the next section, rendering a polish effect on part of a NURBS surface is a tricky problem. A NURBS trimming technique is applied in this system to convey the grinding effect. The whole workpiece is trimmed in advance into N segments, the number of which is determined by the grinding tool width. During grinding, the system detects which segment is in contact with the grinding tool, and the degree of polish of the corresponding segment is changed, as shown in Figure 5-4, where the shinier part is the surface that has already been ground.

Figure 5-3 Grinding tool collision model.

Figure 5-4 Grinding rendering.


5.1.2 Workpiece rendering

In solid modeling, the most widely used representation methods for 3D objects are constructive solid geometry (CSG) and boundary representation (B-Rep). B-Rep supports a variety of mathematical surfaces including Bezier, spline and NURBS (non-uniform rational B-spline) surfaces. These have gained popularity and become a standard tool in geometric modeling because they provide a common mathematical form for both analytical and free-form curves and surfaces (Piegl and Tiller 1997). In this section, B-Rep based on NURBS, rather than a volume representation, is used to construct the solid model of a part, because real-time haptic rendering and high-resolution part representation cannot be achieved with a volume representation due to its large computational cost. In a haptic system, real-time performance is critical, and it is fast and flexible to modify shapes represented by NURBS, especially NURBS surfaces of revolution. Hence, NURBS is used for part modeling in the presented haptic virtual turning operation system.

5.1.2.1 Revolution shape modeling

A general form to describe a parametric surface in 3-D space can be expressed as:

S(u,v) = x(u,v)\,\mathbf{i} + y(u,v)\,\mathbf{j} + z(u,v)\,\mathbf{k}, \quad (0 \le u, v \le 1)        (5-5)

The NURBS surface representation is given by (Piegl and Tiller 1997):

S(u,v) = \frac{\sum_{i=0}^{n}\sum_{j=0}^{m} N_{i,p}(u)\,N_{j,q}(v)\,w_{i,j}\,P_{i,j}}{\sum_{i=0}^{n}\sum_{j=0}^{m} N_{i,p}(u)\,N_{j,q}(v)\,w_{i,j}} = \sum_{i=0}^{n}\sum_{j=0}^{m} R_{i,j}(u,v)\,P_{i,j}        (5-6)

R_{i,j}(u,v) = \frac{N_{i,p}(u)\,N_{j,q}(v)\,w_{i,j}}{\sum_{k=0}^{n}\sum_{l=0}^{m} N_{k,p}(u)\,N_{l,q}(v)\,w_{k,l}}        (5-7)

where R_{i,j}(u,v) are the rational basis functions; P_{i,j} are the 3-D control points; w_{i,j} are the corresponding weights of P_{i,j}; n and m are the numbers of control points in the u and v directions, respectively; and N_{i,p}(u) and N_{j,q}(v) are the non-rational B-spline basis functions defined on the knot vectors (Agus, Giachetti et al. 2003), where p and q are their associated orders, respectively. The recurrence equations for computing N_{i,p}(u) are shown below, and N_{j,q}(v) is calculated in a similar way:

N_{i,1}(u) = \begin{cases} 1 & \text{for } u_i \le u \le u_{i+1} \\ 0 & \text{otherwise} \end{cases}        (5-8)

N_{i,p}(u) = \frac{u - u_i}{u_{i+p-1} - u_i}\,N_{i,p-1}(u) + \frac{u_{i+p} - u}{u_{i+p} - u_{i+1}}\,N_{i+1,p-1}(u)

U = \{\underbrace{0,\ldots,0}_{p+1},\, u_{p+1}, \ldots, u_{r-p-1},\, \underbrace{1,\ldots,1}_{p+1}\}

V = \{\underbrace{0,\ldots,0}_{q+1},\, u_{q+1}, \ldots, u_{s-q-1},\, \underbrace{1,\ldots,1}_{q+1}\}

where r = n + p + 1 and s = m + q + 1. A surface of revolution is generated by revolving a given curve, the profile curve, about an axis, the axis of revolution. The cross-section of a revolution surface is a circle, which can be constructed as a degree-2 NURBS curve with 9 control points, where U denotes the knot vector, P the control point vector and W the weight vector.


The primitive part shape is a cylinder, which can be formed by revolving a rectangle around an axis, as shown in Figure 5-5. The profile curve is also of degree 2, and multiple control points are used to shape the edges: if three control points are collinear, the NURBS curve between them is a line segment, so multiple control points are placed at the rectangle corners to obtain sharp turns. The control points on the line parallel to the axis are positioned evenly. The distance r between these control points determines the system's cutting resolution; since a NURBS curve is smooth, r should be small enough to simulate the cutting process vividly.

Figure 5-5 Revolution surface.

5.1.2.2 Model shape modification

In the cutting process, the profile of the workpiece changes dynamically in correspondence with the cutting tool motion. Shape modification of NURBS objects is adopted in this virtual system; it can be achieved by means of knot values, control points or weights. Shape modification based on knot values is as yet an unexplored field. Piegl (Piegl 1989) discussed control point-based shape modification of NURBS curves. Fowler and Bartels (Fowler and Bartels 1993) proposed a method for the shape modification of splines with an arbitrary basis function, based on the repositioning of control points. Au and Yuen (McNeely, Puterbaugh et al. 1999) and Diego and Touradj (S.C. and E. 2002) presented shape modifications achieved by the simultaneous modification of control points and weights. Constrained deformation (Celniker and Welch 1992) and physically based modeling approaches (Celniker and Gossard 1991) for surface modification have also been proposed; however, they are highly computational and not suitable for our situation, in which real-time haptics is critical. The control point repositioning method is therefore adopted in this virtual system. The following equation computes the control point translation:

\hat{P}_k = P_k + \frac{d}{R_{k,p}(u)}\,\frac{V}{|V|}        (5-9)

When the cutting tool contacts a curve point, say P, the system captures the surface contact point (SCP). The user then moves the haptic device end-effector to a point \hat{P}, which gives the translation vector V, as shown in Figure 5-6. It remains for the system to choose a control point P_k to be translated in order to produce the desired movement of P. In general there are up to p+1 candidates, but it is usually desirable to choose P_k so that R_{k,p}(u) is the basis function whose maximum lies closest to u (Piegl 1989).

Figure 5-6 Control point repositioning.


In Eqn. (5-9), the direction of V can be calculated by querying the current and previous proxy positions. The magnitude d is calculated with the following equation:

d = F_r / \kappa        (5-10)

where \kappa is a stiffness constant and F_r is the radial force, which can be obtained by querying the haptic device force in the client thread. With Eqn. (5-9) and Eqn. (5-10), the part profile curve can be updated.
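Eqns. (5-9) and (5-10) can be sketched together. The function and parameter names are illustrative assumptions, not the thesis code:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Translate control point Pk by d/R along the normalized direction V,
// where d = Fr/kappa (Eqn. 5-10) and R = R_{k,p}(u) is the rational basis
// value of the chosen control point at the contact parameter (Eqn. 5-9).
Vec3 repositionControlPoint(const Vec3& Pk, const Vec3& V,
                            double Fr, double kappa, double R) {
    double d = Fr / kappa;   // penetration magnitude from the radial force
    double len = std::sqrt(V[0] * V[0] + V[1] * V[1] + V[2] * V[2]);
    Vec3 out = Pk;
    if (len == 0.0 || R == 0.0) return out;   // degenerate case: no movement
    double s = d / (R * len);
    for (int i = 0; i < 3; ++i) out[i] += s * V[i];
    return out;
}
```

Dividing by R_{k,p}(u) compensates for the fact that a unit move of the control point moves the curve point only by R_{k,p}(u), so the contacted curve point lands where the stylus is.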

5.1.3 System implementation

A prototype haptic virtual turning operation system (HVTOS) based on the proposed methods has been implemented. In our implementation, the haptic and visual rendering machine is a desktop PC with a dual 2.2 GHz CPU and 1.0 GB RAM. The haptic device is a PHANToM® Desktop with 6 degrees of freedom (DOF) of position sensing and 3 DOF of force feedback. The software is written in VC++ with OpenGL and the 3D Touch™ OpenHapticsTM HLAPI for graphic and haptic rendering. HLAPI is a high-level C API convenient for rendering haptic effects such as vibration, friction and viscosity. HVTOS provides a convenient interface for a user to set the operation parameters and the virtual environment. Texture mapping will be applied to the background visualization in the future to improve the system's realism. The audio rendering thread, which plays cutting and grinding sounds, is triggered when the tool touches the workpiece during machining; the volume is proportional to the cutting force magnitude. Some models machined with HVTOS are given in Figure 5-7. Operational experience shows that the system is intuitive and easy to use for shape modeling and lathe operation training, as the user can see, feel and hear what shape is being produced in the virtual machining process. It can also be seen from Figure 5-7(c) that the surfaces are shinier after grinding.

Figure 5-7 Models machined by HVTOS: (a) wine glass; (b) ball; (c) gourd; (d) weight.

5.1.4 Discussions and conclusions

This section has elaborated a haptic virtual turning operation system (HVTOS) for virtual machining and its pedagogical applications. Two kinds of tool models for haptic feedback are proposed in this virtual turning operation system. A NURBS deformation algorithm is proposed for workpiece modeling in order to accelerate haptic and graphical rendering, and a NURBS trimming technique is adopted to simulate the grinding process. Vibration, friction and viscous effects are also added through HLAPI to improve the simulation realism, and the multi-sensory feedback creates a better sense of immersion in the turning simulation. During a turning operation, forces can be recorded to provide data for analyzing the trainee's performance; Figure 5-8 shows a typical recorded data set of the radial force Fr exerted on the cutting tool. Evaluation of the effectiveness and user friendliness of HVTOS will be carried out in the future. Training objectives such as product error and tool wear also need to be considered, and modeling of chip formation during the operation could further increase the immersion.


Figure 5-8 An example of force curve for evaluation.


5.2 Haptic-aided virtual drilling

Drilling is widely used in manufacturing. In this section, a framework is presented for the simulation of the drilling process with force and torque feedback. One challenging issue in drilling simulation is finding an efficient method to display dynamic models in real time; a hybrid model combining volumetric and polygonal parts is proposed for realistic haptic rendering and efficient graphic rendering. Haptic rendering is another challenging problem, since the drilling process involves complicated behaviors that need further study. The haptic rendering for the drilling process is based on high-resolution volumetric data and multi-point collision detection, as presented in section 3.3 of Chapter 3. The Marching Cubes algorithm is applied locally to contour the volumetric data in real time in order to accelerate graphic rendering. The proposed system can be used for the simulation of drilling processes in industry as well as for medical bone drilling. In the proposed system, a user presses a button on the stylus of the haptic device to turn on the drill and then pushes the drill tool forward towards the part being drilled. The multi-sensory feedback includes not only touch and vision but also audio: to create a better sense of immersion, a drilling sound is played during the simulation, with volume proportional to the magnitude of the thrust force. If the drill touches the part, the user feels vibration through the haptic tool.

5.2.1 Part modeling

In the drilling operation, when the drill penetrates the part, the drill bit is confined to a small volume that just encloses it. Hence, we propose a hybrid model that combines voxels and triangular facets for part representation, so that efficient haptic and graphic rendering can be achieved. In order to obtain a high-resolution volumetric model for stable haptic rendering, a voxelization method is applied to convert the original polygonal part model into a volumetric one, based on Algorithm 5-1. Our voxelization method is similar to (Thon, Gesquiere et al. 2004) and is implemented in three steps: first, a space partitioning of the part model is built to speed up collision detection; second, a ray-casting method is used to obtain dexel arrays; third, for each dexel, the voxel array is calculated. The anti-aliasing technique of (Thon, Gesquiere et al. 2004) is applied in the third step. The ray-casting method is depicted in Figure 5-9(a). In the example of Figure 5-9(b), three green rays are cast along the direction of the Z axis; they intersect a nut model at the red points. A hierarchical tree structure is used to organize the object being voxelized in order to accelerate collision detection in the ray-casting calculation. The Visualization Toolkit (VTK) (Kitware) is used in our algorithm: vtkOBBTree is a VTK class that generates an oriented bounding box (OBB) tree, a hierarchical tree structure of oriented bounding boxes. An oriented bounding box is a bounding box that does not necessarily line up with the coordinate axes; deeper levels of the OBB tree confine smaller regions of space. A recursive, top-down process is used to build the OBB tree of an object. First, the root OBB is constructed by finding the mean and covariance matrix of the cells that define the dataset of the object. The eigenvectors of the covariance matrix are extracted, giving a set of three orthogonal vectors that define the tightest-fitting OBB. To create the two child OBBs, a split plane is found that approximately divides the cells in half, and the cells are assigned to the child OBBs. This process continues until the maximum level limits the recursion, or no split plane can be found.

Algorithm 5-1 Model voxelization.

Figure 5-9 Ray-casting for model voxelization: (a) the ray-casting scheme; (b) an example with a nut model.
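The third step, converting a dexel into a voxel array, can be sketched as follows. The representation and names are assumptions for illustration, and the anti-aliasing of the original method is omitted:

```cpp
#include <utility>
#include <vector>

// A dexel stores sorted (entry, exit) depth pairs where a cast ray crosses
// the part surface. Marking every voxel whose center lies inside such a pair
// yields the solid voxel column along the ray.
std::vector<bool> dexelToVoxels(const std::vector<std::pair<double, double>>& spans,
                                double voxelSize, int voxelCount) {
    std::vector<bool> solid(voxelCount, false);
    for (int k = 0; k < voxelCount; ++k) {
        double center = (k + 0.5) * voxelSize;   // voxel center depth along the ray
        for (const auto& s : spans)
            if (center >= s.first && center <= s.second) {
                solid[k] = true;
                break;
            }
    }
    return solid;
}
```

Repeating this for every ray of the grid fills the whole volumetric model; an anti-aliased variant would store fractional occupancy instead of a boolean.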

In order to illustrate our method, a nut model with 1832 triangles, as shown in Figure 5-10(a), is voxelized at different resolutions. The voxelized models are rendered with vtkContourFilter, a VTK filter class that takes any dataset as input and generates iso-surfaces as output. The results are shown in Figure 5-10(b) to (e), whose voxel resolutions are 0.267 mm, 0.16 mm, 0.114 mm, and 0.089 mm respectively. We also compare the computational cost at different resolutions in Table 5-1. From this table, we can see that the computation time of this method (117 seconds) meets our requirements even when the high resolution (216×216×93) is used for model voxelization.

Table 5-1 Computation time with different resolutions.

Figure 5-10 Model voxelization method: (a) the original nut model; (b)-(e) voxelized models at resolutions of 0.267 mm, 0.16 mm, 0.114 mm and 0.089 mm.

5.2.2 Graphical rendering

Volume visualization creates images from scalar and vector datasets defined on multi-dimensional grids. Volume rendering methods can be classified into two categories: direct volume rendering algorithms and surface-fitting algorithms. Direct volume rendering methods map data directly onto screen space without using geometric primitives as an intermediate representation. Surface-fitting methods, also called feature-extraction or iso-surfacing methods, fit planar polygons or surface patches to constant-value contour surfaces. Surface-fitting methods are usually faster than direct volume rendering methods because they can use conventional rendering techniques to display images. A volumetric sphere model of 95.9 Kbytes is displayed with these two methods in Figure 5-11: Figure 5-11(a) uses direct volume rendering, and Figure 5-11(b) uses surface fitting.

(a) direct volume rendering method;

(b) surface fitting method.

Figure 5-11 Volume rendering methods.

Due to the requirements of real-time haptic rendering and high-accuracy graphic rendering, the Marching Cubes algorithm (Lorensen and Cline 1987) is applied locally to extract the surface of the volumetric model dynamically and efficiently, similar to (Eriksson, Flemmer et al. 2005). The Marching Cubes algorithm, a surface-fitting method, is the most popular volume rendering algorithm. It is a very efficient surface rendering method that uses the voxel density values to perform a high-quality visualization of the surface. Similar to the octree-based method in (Peng, Chi et al. 2003), the Marching Cubes algorithm is applied to each block of data and the results are combined to represent the whole part. Figure 5-12 shows the method of locally updating the volume using the Marching Cubes algorithm. We use a powerful visualization toolkit, VTK (Kitware), to display the volumetric model. First, using the class vtkImageImport, we import the volumetric model into the image data structure from the file produced by our voxelization method as mentioned before. Then, we use the class vtkExtractVOI to extract the dataset for n blocks of sub-volumes. Each sub-volume has its own rendering pipeline as shown in Figure 5-12(b).

In the rendering pipeline, taking advantage of the class vtkImageMarchingCubes, we specify a contour value to generate the iso-surfaces, which are assembled into triangle strips by means of the class vtkStripper. Next, the class vtkPolyDataNormals is used to compute point normals. It works by determining normals for each polygon and then averaging them at shared points. When sharp edges are present, the edges are split and new points are generated to prevent blurry edges. Then, we exploit vtkPolyDataMapper to map the polygonal meshes to graphics primitives, which are presented by vtkActors. Lastly, we check whether the sub-volume needs to be updated or not. If it needs to be updated because its volumetric data has been modified by drilling, as shown in Figure 5-13, the process follows the steps mentioned above. Otherwise, we use vtkRenderer to convert the geometry, a specification for lights, and a camera view into an image for graphic rendering.
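The block-wise update scheme above can be sketched as a dirty-block pattern: the volume is partitioned into fixed-size blocks, each with its own (conceptual) Marching Cubes pipeline, and only blocks whose voxels changed are re-extracted each frame. Class and method names below are illustrative; a call counter stands in for the actual surface extraction.

```python
# Sketch of local (block-wise) iso-surface updating; illustrative only.
class BlockedVolume:
    def __init__(self, n, block):          # n voxels per axis, block edge length
        self.n, self.block = n, block
        self.density = {}                  # sparse storage: (i,j,k) -> density
        self.dirty = set()                 # ids of blocks needing re-meshing
        self.remesh_calls = 0

    def block_id(self, i, j, k):
        b = self.block
        return (i // b, j // b, k // b)

    def set_density(self, i, j, k, value):
        self.density[(i, j, k)] = value
        self.dirty.add(self.block_id(i, j, k))   # mark containing block

    def update_frame(self):
        """Re-run surface extraction for dirty blocks only (stand-in)."""
        for _bid in self.dirty:
            self.remesh_calls += 1         # Marching Cubes would run here
        self.dirty.clear()
```

Editing several voxels inside one block triggers a single re-mesh of that block, so drilling only pays for the handful of sub-volumes it touches rather than the whole dataset.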


(a)

(b) Figure 5-12 Flow chart of volume rendering methods using local Marching Cubes algorithm.


Figure 5-13 Schematic of local Marching Cubes algorithm.

For realistic visual presentation of the material removal process, we use a method similar to the one proposed in (Wiet, Stredney et al. 2002), in which the density value of each voxel being manipulated by the drill head is decreased when the distance of the voxel from the drill head centre is less than the head radius, as explained in section 3.3.3 of Chapter 3. At the same time, the Marching Cubes algorithm is applied to the locally modified volumetric data at each frame of the graphics loop, as shown in Figure 5-14.
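The material-removal rule above can be sketched as follows: every voxel whose centre lies within the drill-head radius has its density decreased each frame, and a voxel counts as removed once its density reaches zero. Parameter names (`rate`, `radius`) are illustrative, not the thesis values.

```python
import math

def drill_step(density, centre, radius, rate):
    """One frame of density decrement.  density: dict (i,j,k) -> float,
    with unit voxel spacing; returns the number of voxels fully removed."""
    removed = 0
    for (i, j, k), d in list(density.items()):
        if math.dist((i, j, k), centre) < radius:   # inside the drill head
            nd = max(0.0, d - rate)
            density[(i, j, k)] = nd
            if d > 0.0 and nd == 0.0:
                removed += 1
    return removed   # this count is what the haptic thread reads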

(a)

(b) Figure 5-14 Local volumetric data are modified and Marching Cubes algorithm is used for efficient graphic rendering of drilling.

5.2.3 Implementation

The current experimental system is configured as follows: Dell® Precision PWS670 with dual Xeon™ 2.8 GHz processors and 1 GB RAM; an NVIDIA Quadro graphics card; the Microsoft Windows XP operating system; and a Phantom® Premium 1.5/6-DOF device. VTK (Kitware®) is used for graphic rendering and the OpenHaptics (SensAble®) API for haptic rendering. Figure 5-15 shows the software structure of the haptic-aided virtual drilling system. Because haptic rendering requires a high refresh rate of 1000 Hz, we use two threads in our program, a graphic thread and a haptic thread, as shown in Figure 5-16. The graphic thread runs at 30 Hz. The collision detection and volume data modification consume considerable computational resources; therefore, we place them in the graphic thread. The two threads must be synchronized. The haptic thread reads volumetric data information, e.g. the number of voxels being removed, at a high rate for haptic rendering, while the graphic thread reads the drill tool positions for collision detection.
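The two-loop design above can be sketched with two lock-synchronized threads: a fast "haptic" thread reads shared volume statistics while a slower "graphic" thread updates them. This is a minimal illustration, not the thesis code; fixed loop counts stand in for the 1000 Hz / 30 Hz rates.

```python
import threading

class Shared:
    """State shared between the graphic and haptic loops."""
    def __init__(self):
        self.lock = threading.Lock()
        self.removed_voxels = 0        # written by graphic, read by haptic
        self.tool_pos = (0.0, 0.0, 0.0)
        self.samples = []              # what the haptic loop observed

def graphic_loop(s, frames):
    for f in range(frames):            # collision detection + volume edits
        with s.lock:
            s.removed_voxels += 1
            s.tool_pos = (0.0, 0.0, float(f))

def haptic_loop(s, ticks):
    for _ in range(ticks):             # force computation at the high rate
        with s.lock:
            s.samples.append(s.removed_voxels)

s = Shared()
g = threading.Thread(target=graphic_loop, args=(s, 30))
h = threading.Thread(target=haptic_loop, args=(s, 1000))
g.start(); h.start(); g.join(); h.join()
```

Because every access goes through the lock, the haptic loop always observes a consistent, monotonically increasing removed-voxel count.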

Figure 5-15 System software structure.

Figure 5-16 Flow chart of program.


5.2.4 A case study of bone drilling simulation

Based on the above methods, as a special case of drilling simulation in virtual manufacturing, a skull bone drilling simulator as shown in Figure 5-17 has been preliminarily implemented. Bone drilling is a necessary step prior to the insertion of pins and screws in many operations such as craniotomy and orthopedic surgery. It requires a high level of skill to prevent damage to soft tissues and to limit the excess heat generated by friction between the drill bit and the bone. The purpose of the presented system is to provide a bone drilling simulator that can be used for training. The simulation system allows students or novice surgeons to perform drilling operations in a virtual environment, thus saving the cost and time of training.

Figure 5-17 Bone drilling simulation setup.


In the simulation process, a user presses a button on the stylus of the haptic interface as shown in Figure 5-17 to turn on the drill and then pushes the drill head forward to drill the skull bone in the specified area. If the drill head touches the bone, forces, torques and sounds are conveyed to the user, who can also see bone debris accumulate and feel vibration through the haptic tool. Forces and torques vary with the drill rotation speed and angular velocity. In a skull bone drilling operation, preventing damage to soft tissue is a basic requirement, and for free-hand bone drilling this is even more important. The trainee learns to recognize the moment when the drill bit just penetrates the skull bone and to stop drilling quickly; the trainee is expected not to damage the soft tissue. Therefore, the skill of recognizing drill breakthrough during the operation is essential for surgeons. Another problem in bone drilling is heat generation. The friction that occurs during drilling produces high temperatures which may cause irreparable damage to the bone. Therefore, applying a constant, non-excessive thrust force and feed rate is a crucial skill for performing proper bone drilling. A preliminary training evaluation method is proposed for the skull bone drilling simulation based on our methods. The drilling path is traced in real time; if it deviates from the required path, the simulator warns the trainee. Brain tissue is simply modeled by a sphere. If, after penetrating the skull bone, the trainee pushes the drill head further into the sphere representing brain tissue, the simulator displays a message reminding the trainee that he/she has failed. The forces, torques and drill head positions are recorded during the operation so that the trainee's performance can be analyzed.


5.2.5 Discussion and conclusions

In this section, a framework and a prototype system implementation of haptic-aided virtual drilling simulation are presented. The part model that encloses the drill tool is represented as volumetric data generated by a voxelization method. The volumetric model is graphically displayed using an efficient rendering method based on a local Marching Cubes algorithm. Implicit functions and a hierarchical data structure are used to speed up collision detection. We investigate the relations between drilling force and different parameters, such as model resolution, cutting conditions and tool specifications. Based on these methods, a prototype system is developed in which a user receives multi-sensory feedback including touch, vision, and audio effects. If the drill touches the part, the user can feel not only forces and torques but also vibration through the haptic tool. Simulating realistic debris accumulation in the process of drilling is a challenging issue. For the time being, the chips are simply simulated by irregular particles whose paths are calculated by integration. The visual effect could be improved by using a more realistic debris accumulation model.


CHAPTER 6. HAPTIC-BASED VIRTUAL TELEOPERATION

Tele-operation systems, as depicted in Figure 6-1, are used to remotely manipulate robots working in hazardous environments that are inaccessible to an operator. The remote robot is controlled by the operator through joysticks or steering wheels with the guidance of video feedback from the remote environment. The main disadvantages of video feedback are poor depth cues and poor image quality due to bad communication or lighting conditions. Moreover, the physical controller is insufficient in that the operator's improper manipulation may damage the robot without the operator noticing. This control in tele-operation can, however, be enhanced by virtual reality (VR) technologies, by means of physics-based modeling of the remote environment and haptics, to improve the operator's perception and precision. Haptic tele-operation allows a user to remotely control a slave robot with a master haptic interface while feeling force and torque feedback from the remote environment (Chen, Xu et al. 2005). Typical applications of haptic tele-operation are mobile robot navigation (Diolaiti and Melchiorri 2002; Lee, Sukhatme et al. 2002) and robot-aided tele-surgery (Turro and Khatib 2001). Virtual haptic tele-operation is beneficial when the operator is learning to perform dangerous tasks, e.g. working with radioactive or explosive materials. Therefore, in this chapter a virtual tele-operation system is proposed based on physical robot modeling, robotic manipulation, and 6-DOF haptic rendering methods. The 6-DOF haptic rendering method has been explained in section 3.2 of Chapter 3; the rest of the chapter focuses on physical robot modeling and manipulation methods.

Figure 6-1 Schematic of haptic tele-operation system.

Figure 6-2 shows the structure of our virtual haptic tele-operation system. The haptic thread, which runs at 1 kHz, is decoupled from the simulation thread, which runs at the graphic refresh rate. This technique makes haptic rendering independent of the complexity of collision detection and response. We use the Open Dynamics Engine (ODE) (Smith 2004) as the module for physically modeling a 6-DOF robot, as will be explained in section 6.1, and for rigid body dynamic simulation, as introduced in section 3.2 of Chapter 3, which provides collision detection and response. Virtual joint coupling, as explained in section 3.2.3 of Chapter 3, links the joints of the robot and the haptic interface with virtual springs and dampers and is used to calculate forces and torques for the virtual robot and for haptic rendering. The virtual robot manipulated by a user can be used to perform various tasks such as assembly and path planning in complex virtual environments.
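The virtual joint coupling above can be sketched per joint as a spring-damper law: the coupling torque for each joint is k·(θ_haptic − θ_robot) − b·θ̇_robot. This is a minimal illustration; the gains below are hypothetical, not the thesis values.

```python
def coupling_torques(theta_h, theta_r, theta_r_dot, k=50.0, b=2.0):
    """Virtual spring-damper coupling between haptic-interface joint values
    theta_h and robot joint values theta_r (with velocities theta_r_dot)."""
    return [k * (h - r) - b * v
            for h, r, v in zip(theta_h, theta_r, theta_r_dot)]
```

The spring term pulls each robot joint toward the haptic configuration, while the damper term dissipates energy so the coupling stays stable.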


Figure 6-2 Block diagram of virtual tele-operation system structure.

6.1 Modeling a robot

6.1.1 Geometrical modeling

6-DOF articulated robots are commonly used in industry, e.g. ABB robots (Global Robots Ltd.). In the simulation, the simplified model parts are designed in Solidworks® (Figure 6-3(a)) and then assembled at the robot joints into our virtual robot as shown in Figure 6-3(b). The assembled model is then exported to several STL files, each of which contains the geometric information and position of a robot part. The joint constraints are also exported from Solidworks® and are used to build the robot joints and links in our system.


(a)

(b) Figure 6-3 Robot geometric modeling.


6.1.2 Physical modeling

With the model parts and constraints from Solidworks®, a physically based robot model is built with ODE (Smith 2004). Each model part is assigned a mass in terms of its volume and density, and the corresponding inertia is calculated. The robot base is fixed on the ground; a mobile robot can also be built by adding user-controllable wheels to the base. Six hinge joints link the robot parts as shown in Figure 6-4. For each hinge joint several parameters are assigned, such as the orientation and position of the hinge axis, the maximum and minimum rotation angles, and the working torque. The joint operates within the rotation range defined by the maximum and minimum angles, and the link rotates about the joint axis as long as the torque exerted on the joint exceeds the working torque.
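The hinge-joint behaviour described above can be sketched as follows: a joint rotates only while the applied torque exceeds its working torque, and its angle is clamped to the [min, max] range. A first-order update with a hypothetical gain is used purely for illustration; ODE integrates full rigid-body dynamics instead.

```python
class HingeJoint:
    """Illustrative hinge joint with angle limits and a working torque."""
    def __init__(self, lo, hi, working_torque, gain=0.01):
        self.lo, self.hi = lo, hi
        self.working_torque = working_torque
        self.gain = gain                   # hypothetical torque-to-angle gain
        self.angle = 0.0

    def apply(self, torque):
        if abs(torque) > self.working_torque:   # below this, no motion
            self.angle += self.gain * torque
        self.angle = min(self.hi, max(self.lo, self.angle))  # joint limits
        return self.angle
```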

Figure 6-4 Schematic of the six joints of the robot.

In the virtual environment the virtual robot is subject to contact torques Tp, manipulation torques Tc and joint torques Tj. Tp prevents the robot from penetrating other objects, and Tc drives the robot towards the transformation of the haptic interface. When the robot contacts other objects, the collision points, normals and depths can be obtained, and the corresponding torque Tp about the mass centre of the contacted part can be calculated. The joint torque Tj constrains the robot links to rotate about the joints. The robot dynamics is governed by the following equation (Smith 2004):

$$M_r \ddot{\theta}_r(t) + C_r \dot{\theta}_r(t) = T_c(t - T) - T_p(t) - T_j(t)$$   (6-1)

where T is the time delay of communication.
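Eq. 6-1 can be illustrated numerically for a single joint with semi-implicit Euler integration, reading the delayed command torque Tc(t − T) from a history buffer. All constants and names below are illustrative, not the thesis values.

```python
def simulate(Tc_history, delay_steps, Mr=1.0, Cr=0.5, dt=0.001,
             Tp=lambda t: 0.0, Tj=lambda t: 0.0):
    """Integrate Mr*acc + Cr*vel = Tc(t-T) - Tp(t) - Tj(t) for one joint."""
    theta, omega = 0.0, 0.0
    for n in range(len(Tc_history)):
        # command torque arrives after the communication delay T
        Tc_delayed = Tc_history[n - delay_steps] if n >= delay_steps else 0.0
        t = n * dt
        acc = (Tc_delayed - Tp(t) - Tj(t) - Cr * omega) / Mr
        omega += acc * dt                 # semi-implicit Euler
        theta += omega * dt
    return theta, omega
```

A longer communication delay T simply shifts the torque history, so the joint responds later and covers less angle over the same interval.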

6.1.3 Robot kinematics

A commonly used convention for manipulator frames of reference in robotic applications is the Denavit-Hartenberg (D-H) notation (Denavit and Hartenberg 1955). In this convention, the homogeneous transformation relating frame i to frame i−1 is represented as:

$${}^{i-1}T_i = \begin{bmatrix} c\theta_i & -s\theta_i c\alpha_i & s\theta_i s\alpha_i & a_i c\theta_i \\ s\theta_i & c\theta_i c\alpha_i & -c\theta_i s\alpha_i & a_i s\theta_i \\ 0 & s\alpha_i & c\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}$$   (6-2)

where the four quantities $a_i$, $\alpha_i$, $d_i$ and $\theta_i$ are the link length, link twist, link offset, and joint angle, respectively, associated with link i and joint i, and c and s are abbreviations of the trigonometric functions cosine and sine. If the robot has n links, then the homogeneous transformation relating the tool frame n−1 to the base frame 0 is given by:

$${}^{0}T_{n-1} = {}^{0}T_1 \cdot {}^{1}T_2 \cdots {}^{n-2}T_{n-1}$$   (6-3)

Table 6-1 shows the D-H configuration of the 6-DOF virtual robot used in this research. These parameters are obtained from the CAD model of the robot and can be exported to the robot controller for kinematics calculation.

Table 6-1 The D-H frame parameters of the virtual robot links.
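The D-H chaining above can be sketched directly: build one link transform from (θ, a, d, α) and multiply the link transforms into the base-to-tool transform. Plain 4×4 lists are used instead of a matrix library; the parameter values in the test are an illustrative planar 2-link arm, not Table 6-1.

```python
import math

def dh(theta, a, d, alpha):
    """One D-H homogeneous transform (the matrix of the preceding equation)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0]]

def matmul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward(params):
    """Chain the link transforms for a list of (theta, a, d, alpha) tuples."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]  # identity
    for p in params:
        T = matmul(T, dh(*p))
    return T
```

For a planar two-link arm with unit link lengths, the tool position is the familiar (cos θ1 + cos(θ1+θ2), sin θ1 + sin(θ1+θ2)), which gives a quick sanity check on the matrices.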

The forward kinematics model defines the relation (Smith 2004):

$${}^{0}T_n = G(q)$$   (6-4)

where ${}^{0}T_n$ is the homogeneous transform representing the position and orientation of the manipulator tool (frame n) in the base frame 0. The inverse kinematics (IK) model is defined by:

$$q = G^{-1}({}^{0}T_n)$$   (6-5)

In general, this equation allows multiple solutions. The manipulator Jacobian defines the relation between the velocities in joint space $\dot{q}$ and in Cartesian space $\dot{X}$:

$$\dot{X} = J(q)\,\dot{q}$$   (6-6)

or the relation between small variations in joint space $\delta q$ and small displacements in Cartesian space $\delta X$:

$$\delta X \approx J(q)\,\delta q$$   (6-7)

Two IK models are available in (Roboop 2002). The first one (IK1) is based on the Jacobian pseudo-inverse method:

$$\delta q \approx J^{+}\delta X$$   (6-8)

where $J^{+}$ is the pseudo-inverse of the Jacobian. The second one (IK2) is based on the following Taylor expansion:

$${}^{0}T_n(\hat{q}) = {}^{0}T_n(q + \delta q) \approx {}^{0}T_n(q) + \sum_{i=1}^{n} \frac{\partial\,{}^{0}T_n}{\partial q_i}\,\delta q_i$$   (6-9)

$${}^{0}T_n'\,\delta q \approx T_{obj} - {}^{0}T_n(q)$$   (6-10)

$$\delta q \approx ({}^{0}T_n')^{+}\,(T_{obj} - {}^{0}T_n(q))$$   (6-11)

where $T_{obj}$ is the desired pose represented by a homogeneous transform, ${}^{0}T_n'$ is the partial derivative of the homogeneous transform, and $({}^{0}T_n')^{+}$ is its pseudo-inverse. In order to evaluate the performance of these two methods, the IK problem of the 6-DOF robot was solved with both IK1 and IK2. The accuracies of IK1 and IK2 are of the same order, 0.001 rad; however, IK1 runs much faster than IK2. On average, IK2 takes 1.54 milliseconds to solve the problem, while IK1 needs only 0.58 milliseconds. Hence, IK1 is adopted in our system, since time is critical in real-time virtual robot manipulation.
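The Jacobian pseudo-inverse update (Eq. 6-8) can be illustrated on a planar 2-link arm with unit link lengths: iterate δq = J⁺ δX until the end-effector reaches the target. For this square, non-singular Jacobian the pseudo-inverse reduces to the ordinary inverse; the damping step and iteration count are illustrative choices, not the Roboop implementation.

```python
import math

def fk(q):
    """End-effector position of a planar 2-link arm with unit links."""
    return (math.cos(q[0]) + math.cos(q[0] + q[1]),
            math.sin(q[0]) + math.sin(q[0] + q[1]))

def ik_step(q, target, step=0.5):
    """One damped update q <- q + step * J^-1 * (target - fk(q))."""
    x, y = fk(q)
    dx, dy = target[0] - x, target[1] - y
    s1, c1 = math.sin(q[0]), math.cos(q[0])
    s12, c12 = math.sin(q[0] + q[1]), math.cos(q[0] + q[1])
    J = [[-s1 - s12, -s12], [c1 + c12, c12]]           # analytic Jacobian
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]        # = sin(q2) here
    Jinv = [[ J[1][1] / det, -J[0][1] / det],
            [-J[1][0] / det,  J[0][0] / det]]
    return [q[0] + step * (Jinv[0][0] * dx + Jinv[0][1] * dy),
            q[1] + step * (Jinv[1][0] * dx + Jinv[1][1] * dy)]

def solve_ik(q, target, iters=200):
    for _ in range(iters):
        q = ik_step(q, target)
    return q
```

Starting away from the singular stretched-out pose, a couple of hundred damped iterations is ample for a target well inside the workspace.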

6.2 Virtual tele-operation

When a virtual robot is haptically manipulated by a user, the tool frame at the robot end-effector must be kept consistent with that of the haptic interface. If the virtual robot has 6 DOFs (Figure 6-4), there are two ways to control it through the 6-DOF haptic interface, as depicted in Figure 6-5. The first is to link the robot joints directly to the joints of the haptic interface (Figure 6-6(a)).


Because the ranges of their joints are different, a mapping from the haptic configuration to the robotic configuration must be applied.

Figure 6-5 Schematic of the haptic interface.

However, this method cannot be applied to robots with different DOFs. Therefore, a more general manipulation method based on inverse kinematics is proposed, in which the tool center point (TCP) of the robot is manipulated via the haptic interface point (HIP) as shown in Figure 6-6(b). In the simulation loop, the system first traces the transformation of the haptic interface (Xh) in the haptic workspace, i.e. the physical space reachable by the haptic device. It then converts this transformation to the graphics scene (Xm). Defining a mapping between the haptic workspace and the graphic scene describes how movement of the haptic device translates to movement in the graphic scene.


Figure 6-6 Two methods of virtual tele-operation.

The transformation ${}^{m}T_h$, which transforms the haptic pose (position and orientation) to the pose of the graphic model in world coordinates, is computed by:

$${}^{m}T_h = {}^{m}T_c \cdot {}^{c}T_h$$   (6-12)

where ${}^{c}T_h$ is the transformation from haptic coordinates to view coordinates, i.e. the local coordinates of the camera (eye coordinates), and ${}^{m}T_c$ is the transformation from view coordinates to world coordinates. Next, inverse kinematics (IK), as explained in the previous section, is used to calculate the robot configuration q:

$$q = IK(X_m)$$   (6-13)

Then, the graphic model and the collision model of the virtual robot are updated according to the calculated robot configuration. If the inverse kinematics has multiple solutions, a simple method as shown in Figure 6-7 is used to decide which one to select. If the previous robot configuration is Ci-1 and there are two possible solutions, Ci1 and Ci2, we check which one is closer to Ci-1; if Ci1 is, it is selected as the current solution. The distance d between two configurations is calculated with the following equation:

$$d = \sum_{j=1}^{N} \left| P_{i-1}^{j} - P_i^{j} \right|$$   (6-14)

where $P_{i-1}^{j}$ is the j-th joint position of the previous configuration, $P_i^{j}$ is the j-th joint position of one of the current configurations, and N is the number of robot joints.

Figure 6-7 The selection method for multiple solutions of inverse kinematics.
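The selection rule of Eq. 6-14 can be sketched directly: among several IK solutions, pick the configuration whose joint positions are closest, in summed per-joint distance, to the previous configuration. Function names are illustrative.

```python
import math

def config_distance(prev, cand):
    """d = sum over joints j of |P_prev^j - P_cand^j| (3D joint positions)."""
    return sum(math.dist(p, c) for p, c in zip(prev, cand))

def select_solution(prev, candidates):
    """Choose the IK solution closest to the previous configuration."""
    return min(candidates, key=lambda c: config_distance(prev, c))
```

Preferring the nearest configuration keeps the virtual robot from jumping between IK branches from one frame to the next.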


6.3 Implementation

The experimental system is configured as follows: Dell® Precision PWS670 with dual Xeon™ 2.8 GHz processors and 1 GB RAM; an NVIDIA® Quadro graphics card; the Microsoft Windows XP operating system; and a Phantom® Premium 1.5/6-DOF device. Figure 6-8 shows the software structure of the system. The Proximity Query Package (PQP) is used for collision detection, and the Robotics Object Oriented Package (Roboop) for robotic inverse kinematics. OpenHaptics (SensAble®) APIs are used to render forces and torques, and OpenInventor™, based on OpenGL®, is utilized for graphic rendering.

Figure 6-8 Software structure.

In the next sections, we present several case studies based on Haptic-Aided Virtual Tele-Operation (HAVTO) as elaborated in the previous sections. The haptic virtual bone fracture reduction system is also described in detail in section 6.6 as a special case of HAVTO; it is included in this thesis because its reduction operation can be seen as a virtual assembly operation and uses the same methods as described above.

6.4 Haptic-aided path following with virtual constraints It was reported that nearly 90% of the robotic operation time in some teleoperation was spent in tool alignment (Kang, Park et al. 2004). And tool alignment is critical in some operations, e.g. metal cutting. However, performing tool alignment operation is difficult for an operator because current tele-operation system is far from perfect, e.g. poor visual depth feedback and no haptic cues. Though haptics can be integrated into tele-operation system, the efficiency of tool alignment is still not significantly enhanced. The method of virtual fixture was proposed (Kang, Park et al. 2004) to improve the efficiency of tele-operation. Virtual fixture generated in simulation is used to guide the haptic interface for facilitation of operation such as tool alignment. Therefore, the purpose of the path-following experiment is to demonstrate the usefulness of virtual reality for tele-operation by combination of 6DOF haptic rendering and virtual constraints (fixtures). In the path-following simulation, a user manipulates the robot tool with the haptic interface as shown in Figure 6-9 to slide on the surface of a part along a curve. When the tool tip touches the surface, ODE (Smith 2004) tries to prevent it from penetration. Several experiments are carried out to illustrate the functions of haptics and virtual constraints in tele-operation. In the first experiment, the user is required to follow the path without haptic feedback and only by means of visual alignment. In the second one, the haptic interface with force and torque feedback to assist the operation but without virtual constraints. In the third one, the user can feel force and torque and 174

constraints are also used to facilitate him/her to follow the path. There is a snap distance ds for the constraints. If the user moves the robot tool to the path within a distance, ds, attraction force will be exerted on the haptic interface to pull it towards the path. From the comparison as shown in Figure 6-10, Figure 6-11, and Figure 6-12 we can clearly see that haptics and virtual constraints can greatly benefit teleoperation.
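The snap-distance attraction above can be sketched as a spring force toward the closest path point whenever the tool tip comes within ds of the path. The path is represented here as a sampled polyline, and the stiffness k is an illustrative value.

```python
import math

def attraction_force(tool, path_points, ds, k=100.0):
    """Spring pull toward the nearest path sample, active only within ds."""
    closest = min(path_points, key=lambda p: math.dist(p, tool))
    d = math.dist(closest, tool)
    if d >= ds or d == 0.0:                  # outside the snap range: free
        return (0.0, 0.0, 0.0)
    return tuple(k * (c - t) for c, t in zip(closest, tool))
```

Outside the snap distance the user moves freely; inside it, the virtual fixture gently funnels the haptic interface onto the target path.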

Figure 6-9 Screen shot of haptic-aided path following.


Figure 6-10 Path following without guidance of haptics and virtual constraints.

Figure 6-11 Path following with haptic guidance.



Figure 6-12 Path following with guidance of haptics and virtual constraints.

6.5 Haptic-aided virtual assembly

The simulation of assembly and disassembly has greatly benefited from virtual reality technology. Physically based simulation and haptics provide two powerful tools for enhancing simulation realism and improving understanding of the process and sequence of assembly and disassembly. McNeely et al. presented a voxel-sampling 6-DOF haptic rendering method for virtual assembly (McNeely, Puterbaugh et al. 1999); however, their method cannot interact with dynamic objects. In the proposed system, as shown in Figure 6-13, a user performs an assembly task by manipulating a virtual robot arm through a haptic device.


Figure 6-13 Photograph of haptic-aided virtual assembly.

In the simulation, a part such as the cylinder in Figure 6-14 is grasped by the robot gripper and transported to another position. The gripper is simply composed of two claws constrained by slider joints; by applying forces on the joints, the gripper can close or open. When the user needs to close the gripper to clamp the part, he/she presses a key to add force to the gripper. Because friction forces proportional to the normal forces are also exerted in this simulation, the user can lift the part and feel gravity, inertia and friction forces from it. If the grasped part collides with other objects, the user can also feel forces and torques. Hence, the haptic feedback helps the user to better understand the virtual environment, thereby facilitating the assembly task.


(a) a virtual robot with a gripper at the end of the robot arm;

(b) grasp operation;


(c) assembly operation;

(d) part is assembled. Figure 6-14 Screen snapshots of the haptic virtual assembly system.

Two constraints are commonly used in virtual assembly: the axis orientation constraint and the face match constraint. In our simulation we use the axis orientation constraint to further enhance the efficiency of assembly. The simulation checks whether the axes of the part being grasped and of the assembly base match. If so, the part's movement is constrained to axial motion, which means the part can only move along the axis; if the part is a cylinder, it can also rotate about the axis. If the simulation detects that the part has moved out of the range of the axis constraint, the constraint is removed, as in the process of disassembly.
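The axis orientation constraint above can be sketched as follows: if the grasped part's axis is nearly parallel to the assembly-base axis, the part's displacement is projected onto that axis; otherwise motion is left free. The angle tolerance is an illustrative value.

```python
import math

def constrain_motion(move, part_axis, base_axis, tol_deg=5.0):
    """Project the displacement onto the base axis when the axes align."""
    dot = sum(a * b for a, b in zip(part_axis, base_axis))
    na = math.sqrt(sum(a * a for a in part_axis))
    nb = math.sqrt(sum(b * b for b in base_axis))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))
    if min(angle, 180.0 - angle) > tol_deg:
        return move                        # axes not aligned: free motion
    u = tuple(b / nb for b in base_axis)   # unit base axis
    s = sum(m * c for m, c in zip(move, u))
    return tuple(s * c for c in u)         # axial component only
```

While the constraint is active, any sideways component of the user's motion is discarded, so the part slides cleanly along the insertion axis.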

6.6 Haptic-aided virtual bone fracture reduction

Intramedullary nailing, a minimally invasive surgery, is currently a treatment of choice for femoral shaft fractures. The fracture reduction is achieved by inserting an intramedullary nail from the hip into the bone's medullary canal without surgically exposing the fracture. Afterwards, distal interlocking screws are inserted to prevent bone fragment rotation and displacement. It provides stable fixation and reduces further damage to the traumatized area, thereby maximizing its biological potential for healing. However, the surgeon relies heavily on fluoroscopic X-ray images to manipulate the bone fragments. Because fluoroscopic images are static, two-dimensional (2D), low-resolution, and have a contrast-limited field of view, frequent use of the fluoroscope is necessary, resulting in significant cumulative radiation exposure to the surgeon. Fluoroscopy times average between 4 and 5 minutes, but can increase up to 30 minutes in complicated cases (Hazan and Joskowicz 2003). High skill is required to mentally correlate the 2D fluoroscopic images with the 3D anatomy and surgical tools, and to maintain hand-eye coordination while operating without direct visual and haptic feedback. This difficult procedure can cause improper positioning and alignment, with rotation errors of more than 15° (Hazan and Joskowicz 2003). In order to overcome these two disadvantages, several researchers have tried to introduce medical robots into fracture reduction to realize semi-automatic or even fully automatic fracture reduction (Joskowicz, Milgrom et al. 1998; Hazan and Joskowicz 2003; Westphal, Gosling et al. 2006). Medical orthopedic robotics and computer-aided orthopedic surgery (CAOS) are still at the design stage; only a few robotic devices have been allowed into clinical trials (Mukherjee, Rendsburg et al. 2005). A computer-integrated orthopedic system called FRACAS (Joskowicz, Milgrom et al. 1998) was proposed to assist surgeons in performing closed medullary nailing of long bone fractures. The system can reduce, or even eliminate, the use of fluoroscopic X-ray images during the operation, replacing them with a virtual reality view in which the positions of the anatomy and instruments are tracked intraoperatively in real time by a tracking system. A tele-manipulator system for the robot-assisted reduction of femoral shaft fractures was presented in (Westphal, Gosling et al. 2006). The tele-manipulated reposition is performed based on 3D imaging data with a haptic joystick. Experiments on artificial bones and human specimens showed that their method was feasible, yielding good fracture reduction accuracy in an intuitive and efficient way; however, the experiments were performed on artificial bones without the counteracting forces and torques caused by soft tissues. A design methodology for a robot-assisted long bone fracture reduction system was proposed in (Mukherjee, Rendsburg et al. 2005), covering various design aspects of the robotic system, including the system design specification, robot design and analysis, and motion control and implementation. The X-ray images were processed and converted to a 3D model in a CAD environment where preoperative planning can be performed.

Preoperative planning, an important component of robot-assisted fracture reduction (RAFR), is beneficial for reducing expensive intraoperative time and the surgeon's exposure to radiation, and for minimizing mistakes. A virtual planning method for acetabular fracture reduction was proposed in (Citak, Gardner et al. 2007). Based on experiments, it was reported that the average mal-reduction and operation time were significantly reduced by using 3D virtual planning compared to conventional 2D planning methods. The authors also claimed that virtual simulators may help the surgeon understand the fracture anatomy and bony topography by allowing the surgeon to segment and manipulate fracture fragments. An experimental computer program for virtual operation on fractured pelvis and acetabulum was proposed in (Cimerman and Kristan 2007). The program is composed of two closely integrated tools, a 3D viewing tool and a surgery simulation tool. Fracture reduction can be performed by manipulating bone fragments in three planes, after which fixation can be undertaken as well. The authors believed that the virtual system can bring significant value and new opportunities to preoperative planning, teaching and research. An interactive algorithm for semi-automatic repositioning of bone fractures was proposed in (Scheuering, Rezk-Salama et al. 2001). In this process, manual positioning is first used to roughly navigate the bone fragments, which are represented as volumetric data; then an optimization method based on Powell's algorithm for multidimensional minimization is used to reposition the fragments exactly. Octree structures are utilized to accelerate the volumetric collision detection. Although automatic fracture pattern recognition has attracted little attention from researchers, it potentially plays an important role in RAFR. A fracture pattern recognition method (Winkelbach, Westphal et al. 2003) was proposed for computer-aided and semi-automatic fracture reduction. An adapted Hough transformation was used to calculate the orientation and position of the cylinder axes of each bone fragment, and surface registration techniques were exploited to compute the relative transformations between corresponding fragments. It was reported that their method achieved better reposition precision and fracture reduction than simple landmark-based methods.

A virtual robot-assisted fracture reduction simulation system is proposed and developed in this section. Based on the hypothesis that virtual reality (VR) techniques can help the surgeon to plan and rehearse reduction maneuvers, the system aims at two goals: preoperative planning of fracture reduction and training of surgeons' operating skills. Preoperative planning, such as fracture feature recognition and robotic path planning, is performed based on a 3D bone model extracted from a real patient's CT data. In order to mimic a real fracture reduction scenario, a virtual environment is built by modeling a virtual medical robot, human femoral fragments and soft tissues. A femoral fragment can be intuitively manipulated by a user through a haptic device as shown in Figure 6-15. Simultaneously, the corresponding robot poses are calculated and used to drive the robot. The scaled elastic forces caused by the soft tissues attached to the femoral fragments, together with collision forces simulating palpation in a real operation, are sent to the user through the haptic device, thereby enhancing the operating feel and facilitating fracture reduction. During the tele-manipulation, the robot poses can be recorded to form a primitive robot path, and the user's performance can be evaluated by examining the manipulation time and alignment accuracy.


Figure 6-15 Schematic of haptic-aided virtual bone fracture reduction system.

6.6.1 System overview

The program of our virtual bone fracture reduction system is composed of the following modules (Figure 6-16): virtual bones and soft tissues, a virtual robot, robotic inverse kinematics, haptic rendering, collision detection, and fragment matching. A user can grasp the stylus of the haptic device to manipulate the virtual robot in the virtual fracture reduction procedures. This is achieved by the following steps. Firstly, the system captures the position of the haptic interface. Secondly, it uses inverse kinematics to calculate the robot joint angles. Thirdly, these joint values are used to update the geometric robot model in real time. In the fracture reduction process, the bone fragment is manipulated by a user through the robot arm. Elastic forces are sent back to the user to mimic the forces caused by soft tissues being pulled by the user. The user must overcome these constraint forces to manipulate the fragment to the desired positions. On the other hand, the collision detection module checks the poses of the two fragments in real time. If there is any collision, the collision information is sent to the haptic rendering module for force calculation. Then, these forces are sent to the haptic interface. As a result, the user can feel


palpation during the fragment alignment process. If the user thinks the fragment being manipulated is in the right pose, he/she can terminate the reduction process. The system will then print out messages showing the user's performance calculated by the fragment matching module.
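The overview above maps a captured haptic-interface position to joint angles via inverse kinematics before updating the geometric robot model. As a hedged illustration only (the actual system solves a 6-DOF medical robot; the planar 2-link reduction and unit link lengths here are assumptions for brevity), one closed-form IK step might look like:

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Closed-form inverse kinematics for a planar 2-link arm.
    Returns joint angles (q1, q2) placing the end-effector at (x, y)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

def forward(q1, q2, l1=1.0, l2=1.0):
    """Forward kinematics, used here to verify the IK solution."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y
```

In the real loop, such a solve runs once per captured haptic pose, and the resulting joint values drive the geometric robot model.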

Figure 6-16 Block diagram of virtual femoral fracture reduction system.

6.6.2 Fracture modeling

6.6.2.1 3D model reconstruction

Mimics® (Materialise) is used to reconstruct the 3D femur from CT/MRI data in the DICOM format. As can be seen from Figure 6-17(a), the patient's hip images are loaded into the system and appear in four different views. The images in the top right view are called the axial images (XY-view or top-view). The upper left view shows the coronal images, which are the images re-sliced in the XZ-direction (front-view). The lower left view shows the sagittal images, which are the images re-sliced in the YZ-direction (side-view). The lower right view shows the 3D view. A threshold range, 226~1533 Hounsfield units in this case, is applied to produce a segmentation mask. Then, a region growing tool is used to separate the femur from the mask. Lastly, the intact 3D femur is reconstructed as shown in Figure 6-17(a). The 3D soft tissue can also be reconstructed as shown in Figure 6-17(b); the threshold values range from -151 to 65 Hounsfield units in this case.

(a)


(b) Figure 6-17 3D femur and soft tissue reconstruction from CT data using Mimics®.

6.6.2.2 Fracture approximation

There are three typical patterns of femoral fracture as shown in Figure 6-18: oblique, comminuted, and spiral. An oblique fracture goes at an angle to the axis. A comminuted fracture has many relatively small fragments. A spiral fracture runs around the axis of the bone. For pathological diversity, a CAD tool, SensAble Technologies' FreeFormTM, is used to model various fracture patterns based on the femur model (Figure 6-17) extracted from a real patient's CT data as mentioned in the preceding section. Firstly, the surface model of the femur is imported into the system. Then, it is converted to a voxel model as shown in Figure 6-19(a). Next, a fracture contour, for an oblique fracture in this case, is designed on the femoral shaft as shown in Figure 6-19(b). Lastly, the femur can be divided into pieces along the cutting contour as shown in the exploded view of Figure 6-19(c).

Comminuted and spiral fracture patterns can also be designed in a similar way as shown in Figure 6-19(d) and (e).

Figure 6-18 Schematic of femoral fracture types.

Spatial relationships between fractured fragments include distraction, displacement, and angulation, as shown in Figure 6-20. Distraction is separation along the longitudinal axis. Displacement is the degree to which the fractured ends are out of alignment with each other. Angulation is the angle of the distal fragment measured from the proximal fragment. These relationships can easily be set by manipulating one fragment with the haptic interface.


(a)

(b)


Figure 6-19 Femoral fracture geometric mimicking using FreeFormTM.

Figure 6-20 Spatial relationship between fracture fragments: (a) distraction; (b) displacement; (c) angulation.

6.6.2.3 Fracture features

The femoral bone is essentially a tubular structure. We assume that the structure can be approximated by a cylindrical structure which is defined by its height and the shape of its cross section. Its orientation is determined by its axis and the rotation angle. For fracture reduction, this structure can be exploited to reposition femoral fragments. One of the alignment criteria is axis matching. If the axes of two bone fragments are superimposed, then these two fragments are considered matched in terms of axis. Given axis matching, another criterion is fracture area matching, which is measured by the axis rotation angle and the distance between fracture areas. The problems are how to approximate the axes and how to judge whether the fracture areas are matched or not.

For the first problem, one solution, e.g. (Faber and Fisher 2001), is to fit a cylindrical model to a set of points and minimize an error function. Another approach, e.g. (Winkelbach, Westphal et al. 2003), takes advantage of the fact that intact cylindrical surface normals are perpendicular to the cylinder axis. We adopt a method similar to that in (Faber and Fisher 2001) to approximate the cylinder axis as explained in Figure 6-21(a). This method has the following three steps. Firstly, we cut the bone fragment with several parallel planes which are nearly perpendicular to the cylinder axis. Several closed curve pairs can be obtained from the intersections between these cutting planes and the bone fragment. Secondly, the center position of a curve can be approximated by fitting the curve with a circle. Lastly, a straight line approximating the cylinder axis is obtained by using a least-squares method. The first step is shown in Figure 6-21(b)-(e). A cutting plane can be manipulated and positioned by a user through the haptic interface. After the plane is almost perpendicular to the femur shaft, a circle is sketched on that plane as shown in Figure 6-21(b). Then, the circle is used to slice the bone. As a result, two closed curves are formed as shown in Figure 6-21(c) and (d). A similar method is used to produce the other curve pairs as shown in Figure 6-21(e).
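The last two steps above, fitting circles to the slice curves and then fitting a line through the circle centers, can be sketched as follows. This is an illustrative reconstruction, not the thesis implementation; an algebraic (Kasa) circle fit and an SVD line fit are assumed here as one reasonable choice of least-squares methods:

```python
import numpy as np

def fit_circle_center(pts):
    """Algebraic (Kasa) least-squares circle fit to 2D points on a slice.
    Expanding (x-cx)^2 + (y-cy)^2 = r^2 gives the linear system below."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([cx, cy])

def fit_axis(centers):
    """Least-squares 3D line through slice centers via SVD (principal axis).
    Returns a point on the line and a unit direction vector."""
    centers = np.asarray(centers, float)
    mean = centers.mean(axis=0)
    _, _, vt = np.linalg.svd(centers - mean)
    return mean, vt[0]
```

Given the closed intersection curves, `fit_circle_center` is applied per slice and `fit_axis` turns the resulting centers into the approximated cylinder axis.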


Figure 6-21 Approximation of femoral axis.

For the second problem, surface registration techniques such as range data correlation are used in (Winkelbach, Westphal et al. 2003) to match fracture areas automatically. This method cannot be used in our interactive fragment repositioning because it is based on the assumption that the axes are already matched. However, this is not the case in interactive repositioning. An interactive method for fracture surface registration is proposed in this section. Edges on fracture areas can be marked out as shown in Figure 6-22(a) and (b). We assume that the two fractured fragments can be perfectly matched on the fracture areas. The idea is to manually mark point-point pairs on the closed curves at the edges of the fracture areas as shown in Figure 6-22(c). Then, these point-point pairs are attached to their respective fragments. If a fragment is manipulated by a user, the positions and orientations of its points are updated accordingly as well. In the manipulation process, this method continuously checks the sum of the distances between these point-point pairs. If it reaches a minimum value, then the registration on the fracture areas is considered achieved.
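The per-frame check described above reduces to summing distances between corresponding pairs. A minimal sketch of that computation is given below; the function name is an assumption added for illustration:

```python
import numpy as np

def pair_distance_sum(points_a, points_b):
    """Sum of Euclidean distances between corresponding point-point pairs.
    points_a and points_b are (n, 3) arrays of matched edge points, one per
    fragment, expressed in a common world frame."""
    a = np.asarray(points_a, float)
    b = np.asarray(points_b, float)
    return float(np.linalg.norm(a - b, axis=1).sum())
```

During manipulation this sum would be recomputed every frame from the updated point positions, and the fragment pose that minimizes it is treated as the registered pose.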

Figure 6-22 Manual selection of point-point pairs for fracture matching.


6.6.3 Haptic modeling

Two kinds of forces are modeled, namely, elastic force and collision force. The elastic force is caused by the soft tissues attached to the femur when the bone fragment is pulled. We assume the elastic force is mainly attributed to muscle. There are two types of muscle forces: active and passive forces. The active force is generated by the muscle contraction when it is stimulated by nerves. Its length-tension curve is described by a sliding-filament model (Huxley 1987) and has its maximum at the muscle's normal resting length (l0) as shown in Figure 6-23. We assume that muscle contraction is quasi-static during the fracture reduction process. The active force (fa) is simply modeled by a Gaussian function in the following form:

$$f_a = \begin{cases} a \cdot e^{-\frac{(x-l_0)^2}{2c^2}}, & l_0 - c < x < l_0 + c \\ 0, & \text{else} \end{cases} \tag{6-15}$$

where x is the current muscle length, a is the activation, and c controls the width of the “bump”. The passive force (fp) is due to the passive stiffness of tendon and muscle. The following equation is used to describe this force.

$$f_p = \begin{cases} b\,(x - l_0)^r, & x > l_0 \\ 0, & \text{else} \end{cases} \tag{6-16}$$

where b and r are constants related to mechanical properties of tendon and muscle. The elastic force (fe) exerted by muscles in the reduction process takes the following form:

$$f_e = f_a + f_p \tag{6-17}$$

Figure 6-24(a) depicts the curves of the elastic forces vs. time with different activation. In the experiment, one fragment is translated away from the other along the femoral axis direction with a constant velocity. The parameter r in Eqn. 6-16 is set to 2. The collision force is simply calculated with the following equation:

$$f_c = k\,d, \quad d > 0 \tag{6-18}$$

where k is a stiffness constant, and d is collision depth. Therefore, the total force (f) is the sum of the elastic force and collision force.

$$f = f_e + f_c \tag{6-19}$$

Figure 6-24(b) plots the curve of the collision force vs. time during an experiment in which one fragment is translated towards the other along the femoral axis direction with a constant velocity. It can be seen from the curve that the collision force increases linearly as one fragment touches and penetrates the other. When the collision depth reaches a certain value, the force remains at a maximum for a while until it quickly decreases to zero. If a user applies torsion to the femoral fragment being manipulated, the torque (T), which plays an important role in 3D manipulation, is calculated by the following equation:

$$T = \lambda\,(\alpha - \alpha_0)^m \tag{6-20}$$

where $\lambda$ and $m$ are constants, $\alpha$ is the current rotational angle of the bone fragment around its axis, and $\alpha_0$ is the resting rotational angle. In the manipulation, the axis of the stylus is assumed to always superimpose with that of the femoral fragment, and the torque direction is opposite to that of the rotation. Figure 6-24(c) shows the curve of the torque vs. time during an experiment in which one fragment is rotated around the femoral axis with a constant angular velocity. The parameter m in Eqn. 6-20 is set to 2.
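The force and torque models of Eqns. 6-15 to 6-20 can be sketched as below. All numeric parameter values (a, c, l0, b, k, lambda) are placeholders for illustration, not the values used in the experiments, and the torque function returns a magnitude only, since the document handles its direction separately:

```python
import math

def active_force(x, a=1.0, c=0.05, l0=0.4):
    """Active muscle force, Eqn. 6-15: Gaussian bump around resting length l0."""
    if l0 - c < x < l0 + c:
        return a * math.exp(-(x - l0) ** 2 / (2 * c * c))
    return 0.0

def passive_force(x, b=200.0, r=2.0, l0=0.4):
    """Passive force, Eqn. 6-16: power law for stretch beyond l0."""
    return b * (x - l0) ** r if x > l0 else 0.0

def elastic_force(x):
    """Eqn. 6-17: total elastic force from soft tissues."""
    return active_force(x) + passive_force(x)

def collision_force(d, k=500.0):
    """Eqn. 6-18: penalty force proportional to collision depth d."""
    return k * d if d > 0 else 0.0

def torque(alpha, alpha0=0.0, lam=2.0, m=2.0):
    """Eqn. 6-20: restoring torque magnitude for rotation about the axis;
    its direction opposes the rotation and is applied separately."""
    return lam * (alpha - alpha0) ** m
```

In each haptic frame, the total rendered force would be `elastic_force(x) + collision_force(d)` per Eqn. 6-19.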


Figure 6-23 Length-tension curve of muscle.

(a)


(b)

(c) Figure 6-24 Plots of reduction force.


6.6.4 Implementation and results

An experimental system is developed as shown in Figure 6-25. The system graphical user interface (GUI), shown in Figure 6-26, is programmed based on OpenInventorTM. The GUI includes three windows. The left one shows the whole virtual environment, in which the femoral bone is invisible. The right windows show the top and left views of the femur as if they were taken by X-ray instruments.

In order to illustrate the benefits of the virtual fracture reduction system for training purposes, a user with no such experience is asked to conduct fracture reduction. In the experiment, both visual and haptic feedback are provided to assist the user in repositioning the fractured fragments. The user's performance is evaluated by the following factors: time and registration quality. Registration quality includes the accuracy of axis alignment and the distances between point-point pairs in the fracture areas. We can observe from Table 6-2 that after several trials the trainee needs less time to finish the process. This might be explained by the trainee becoming familiar with the process after conducting more experiments. However, the improvement in alignment accuracy is not so obvious.


Figure 6-25 Photograph of system setup.

Figure 6-26 System GUI.


Table 6-2 A user’s performance in virtual fracture reduction.

6.6.5 Conclusions and discussions

A virtual robot-assisted fracture reduction system based on virtual telemanipulation using haptics is proposed in this section. Details on data pre-processing and fracture modeling are elaborated. In particular, in the data pre-processing, we propose methods for geometric approximation of the fracture and for fracture registration. One of our goals is to develop convenient tools and vivid virtual environments for preoperative planning of fracture reduction, which is expected to improve reduction performance and reduce radiation exposure in fracture reduction. In the system, a user can intuitively manipulate a robot arm through a haptic device to reposition the fractured bone. The scaled forces simulating those in a real operation can assist the user to reposition the fractured bone efficiently. During the tele-manipulation, robotic poses can be recorded to form a primitive robotic path, and the user's performance can be evaluated by examining the manipulation time and alignment accuracy. Therefore, the other goal of this system, namely, training of fracture reduction, can be achieved. In future work, soft tissue deformation will be taken into account. When a user manipulates a fractured bone, the soft tissue, such as skin and muscles, will be stretched. Though the deformation might be small, modeling it will greatly increase the system fidelity. However, it is a challenging issue since it requires real-time calculation of deformation, and the interactions between muscles and bones are complex. Another challenging issue is how to realistically simulate the elastic force and torque caused by soft tissue deformation. Since relations between displacement or rotation angle and forces or torques in intraoperative fracture reduction have not been reported in the literature so far, a simple method based on the muscle sliding-filament model is proposed in this section. Yet, this method needs to be validated by experiments in the future.

6.7 Haptic-aided robotic path planning

Path planning has been extensively studied for several decades. It plays a key role in building autonomous or semi-autonomous systems. Besides its application in robotics, path planning is used in assembly maintainability (Chang 2002), computer animation (Koga, Kondo et al. 1994) and computer numerical control (Zhu and Lee 2004). Theoretically, many solutions to automatic path planning have been proposed (Canny 1988; Latombe 1991; Hwang and Ahuja 1992; Sharir 1997; Gupta and Pobil 1998). However, very few planners have been applied in industry for robots with many DOFs, due to the high complexity and dimensionality of the configuration space (C-space). Many automatic planners are only effective in some specific

scenarios and sometimes fail due to the difficulty of finding critical configurations which are crucial for the resulting path (Amato, Bayazit et al. 1999). However, these critical configurations can be easily perceived by a human user. By exploiting human intuition, the robustness of automatic path planning can be improved. On the other hand, manual path planning, such as on-line teaching, is widely used for robotic path planning in industry. In the teaching process, an operator moves a robot to each desired position and records its path, which is used to generate a robot program that will replay the same path automatically. Though it is very simple, the most significant disadvantage of on-line teaching is that it occupies valuable production equipment. More recently, off-line teaching has been increasingly adopted in industry for robot programming due to the availability of more powerful hardware, computing methods and high-quality CAD models (Mitsi, Bouzakis et al. 2005). In this mode, an operator manipulates a virtual robot to a sequence of end-effector positions in a computer-simulated environment. Then the simulated robot controller, identical to the real one, interpolates these positions and drives the robot along the path. At the same time, collision between the robot and the environment is detected automatically. According to the collision information, the path can be modified manually or semi-automatically to produce a collision-free trajectory. In this approach, calibration must be done before the program is loaded to the robot control system for execution. The biggest advantage of off-line programming is that it does not occupy production equipment, thereby greatly reducing costs. However, in traditional off-line teaching, manipulation of a virtual robot through keyboard and mouse is a nontrivial job for a user. For example, EASY-ROB™ is a commercial product which can be used for off-line programming (EASY-ROB).

There are two modes for manipulating a robot in a virtual environment as shown in Figure 6-27. In the first mode, an operator can use the mouse to move the tool center point (TCP) of the robot: the left mouse button moves the TCP along the x-axis, the middle button along the y-axis, and the right button along the z-axis. However, using this method, it is difficult to define robot orientations. The operator must input orientation values into a dialog, and these values are obscure to an operator. In the second mode, an operator can pick one of the robot joints and use the mouse to turn it around. These two modes are both unwieldy. The robot manipulation methods in most current commercial robotic path planning products are similar.

Figure 6-27 Robot manipulation based on mouse and keyboard in EASY-ROBTM.

Figure 6-28 clearly illustrates that it is easier and more intuitive to manipulate and program a virtual robot by means of the virtual tele-operation system in comparison with traditional computer interfaces. In Figure 6-28, the user moves the

virtual robot arm as if he/she were holding a real robot arm. When collision occurs, the user can feel it and the motion path can be changed immediately. Figure 6-29 shows the schematic structure of our haptic-aided path planning system, which consists of the following parts: a virtual robot, an automatic path planner, path optimization and verification, haptic rendering, collision detection, inverse kinematics, and graphic rendering. Forces and torques computed according to collision information are sent to the user through the haptic interface. If necessary, a path generated by the path planner can be modified by the user through the haptic interface. Afterwards, path verification is used to validate the modified path.

Figure 6-28 Haptic-aided virtual tele-operation for robot path planning.



Figure 6-29 Diagram of the haptic-aided path planning system.

6.7.1 Related work

Many automatic motion planning algorithms have been proposed in the literature (Canny 1988; Latombe 1991; Hwang and Ahuja 1992; Sharir 1997; Gupta and Pobil 1998). Most of the current approaches to path planning are based on the concept of C-space (Lozano-Perez and Wesley 1979), which is the set of all possible configurations

of a robot. The high dimensionality of the C-space of many-DOF robots is the main reason for the difficulty of the problem (Hwang and Ahuja 1992). It has been shown that path planning for a 3-D linkage made of polyhedral links is a PSPACE-hard problem (Reif 1979). This analysis provides evidence that any automatic planner will run in time exponential in the number of degrees of freedom. The prohibitive complexity of automatic path planners has motivated the development of heuristic

planners (Barraquand and Latombe 1991; Gupta and Pobil 1998). However, these approaches cannot be easily extended to robots with more than 4 or 5 DOFs. Though several heuristic techniques have been proposed for many-DOF robots, they cannot guarantee performance (e.g., sometimes they fail to solve seemingly simple problems). One of the important practical motion planners for many-DOF robot was proposed in 1991 by Barraquand and Latombe, called the randomized path planner (RPP) (Barraquand and Latombe 1991). It used “down motions” to track the negated gradient of a potential field and “random motions” to escape local minima. Later, a probabilistic roadmap (PRM) method was proposed by sampling the C-space at random and connecting the samples in free space by local paths to avoid pathological cases caused by the deterministic potential field in randomized planning (Kavraki, Svestka et al. 1996). The main issue with PRM planners is the “narrow passage” problem (Hsu, Kavraki et al. 1998): if the free space contains narrow passages, the planner must pick a prohibitively large number of samples over the entire free space. Otherwise, it possibly fails to find the solution due to missing some “critical” configurations which are often easily perceived by an operator. Therefore, a method was proposed in (Amato, Bayazit et al. 1999) for enabling a human operator and an automatic motion planner (PRM) to cooperatively solve the motion planning problem. In the approach, the operator manipulated a virtual object using a haptic interface, captured configurations and passed them to the planner. Then, the planner used these configurations to generate a valid path. However, only two very simple examples, problems of flange and alpha puzzle, were used to evaluate their methods. No haptic interface was used to control a virtual robot. Other interesting work on combining haptic techniques with path planning was reported in (Zhu and Lee 2004; Galeano and Payandeh 2005; Yang and Chen 2005). 

A haptic-aided path planner was proposed in (Galeano and Payandeh 2005), where, by using a 3-DOF haptic device, an operator could manipulate an object for which a path needed to be defined and be guided by natural and artificial forces. It was claimed that the artificial force constraints could make path planning more time-efficient in contrast to traditional approaches based on computational geometry. In (Yang and Chen 2005), an off-line inspection path generation methodology for virtual coordinate measuring machines based on haptics was proposed. It was reported that the inspection path planning process became more intuitive, efficient, and user-friendly, which was attributed to the force feedback and three-dimensional positioning functions provided by the haptic device. In (Zhu and Lee 2004), techniques of 5-axis pencil-cut machining planning with a 5-DOF output haptic interface were presented. Dexel-based modeling was used for global tool interference avoidance with other components of a machining environment. However, in the 5-axis pencil-cut tool planning, the tool was only equivalent to a single link of the virtual robotic arm as presented in this thesis. Therefore, the complexity of collision avoidance in our proposed research is more demanding. It was shown by the experiments in (Zhu and Lee 2004) that the involvement of a haptic system in 5-axis tool path planning was promising.

6.7.2 Path planning

6.7.2.1 Automatic path planning

The task of path planning for robots in known environments is to find a collision-free path from an initial configuration to a goal configuration. Most reported path planning methods such as PRM (Kavraki, Svestka et al. 1996) are based on the

concept of C-space (Lozano-Perez and Wesley 1979), as explained in Figure 6-30. PRM samples the C-space at random and retains the collision-free configurations as milestones. It connects these milestones to construct a topological graph called a

roadmap, and seeks a collision-free path, which connects the initial configuration (qi) to the goal configuration (qg), as shown in Figure 6-30(a).
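The sample-connect-search cycle just described can be sketched for a 2-D point robot standing in for the C-space; this is an illustrative toy, not the planner used in the system, and the sample count, connection radius, and segment-checking resolution are arbitrary assumptions:

```python
import heapq
import math
import random
from collections import defaultdict

def prm_plan(q_init, q_goal, is_free, n_samples=300, radius=0.3, seed=0):
    """Minimal PRM sketch in the unit square. is_free(p) -> bool is the
    collision checker; edges are validated by densely sampling the straight
    segment between milestones, then the roadmap is searched with Dijkstra."""
    rng = random.Random(seed)
    nodes = [q_init, q_goal]
    while len(nodes) < n_samples + 2:          # sampling phase
        p = (rng.random(), rng.random())
        if is_free(p):
            nodes.append(p)

    def seg_free(a, b, step=0.02):
        n = max(1, int(math.dist(a, b) / step))
        return all(is_free((a[0] + (b[0] - a[0]) * t / n,
                            a[1] + (b[1] - a[1]) * t / n))
                   for t in range(n + 1))

    edges = defaultdict(list)                  # connection phase
    for i, a in enumerate(nodes):
        for j in range(i + 1, len(nodes)):
            b = nodes[j]
            d = math.dist(a, b)
            if d < radius and seg_free(a, b):
                edges[i].append((j, d))
                edges[j].append((i, d))

    dist, prev, pq = {0: 0.0}, {}, [(0.0, 0)]  # Dijkstra: node 0 -> node 1
    while pq:
        d, u = heapq.heappop(pq)
        if u == 1:
            path, v = [], 1
            while v != 0:
                path.append(nodes[v])
                v = prev[v]
            return [q_init] + path[::-1]
        if d > dist.get(u, math.inf):
            continue
        for v, w in edges[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return None                                # roadmap disconnected
```

A real planner works in the robot's joint space and checks collisions against full geometry, but the structure (sample, connect, search) is the same.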

Figure 6-30 Schematic of model space and configuration space.

The technique of delayed collision checking (Bohlin and Kavraki 2000) can greatly reduce planning time since PRM planners spend most of their time performing collision checking. A Lazy PRM planner assumes that all random configurations and connections between them are collision-free. Then it computes the shortest path in the roadmap and only checks collision for those configurations on this path.


There are two main query strategies for PRM, multi-query and single-query (Kavraki, Svestka et al. 1996). The process of multi-query is as follows: 1) randomly sample a large number of configurations, keeping any that are not in collision to create a milestone set; 2) using a local planner, attempt to connect pairs of samples that are relatively close to each other by thoroughly sampling and collision checking configurations between them to construct a roadmap; 3) to query the roadmap, first attempt to connect qi and qg to the existing graph. If that is successful, search the graph for a path from start to goal using any standard graph search method, as depicted in Figure 6-31(a). In single-query, a roadmap is constructed from qi or qg, separately or concurrently. There are two sampling strategies for a single-query planner, single-directional sampling and bi-directional sampling. In single-directional sampling, the planner expands one tree of milestones from either qi or qg until a connection is found with the other configuration. In bi-directional sampling, tree expansion is done concurrently from qi and qg, which is more efficient than single-directional sampling. In multi-query, the entire free space must be explored during pre-computation of the roadmap, while in single-query, less of the configuration space needs to be explored, making it more suitable for dynamic path planning problems. A PRM planner that is Single-query, Bi-directional, and Lazy in collision checking (SBL) is proposed in (Sanchez and Latombe 2001). SBL exploits the advantages of single-query, bi-directional sampling strategies and delayed collision checking to greatly improve query efficiency. It does not immediately check collisions of connections between milestones until the two trees can be bridged and the connections are on a possible path (Figure 6-31(b)). It is reported that SBL can cut path planning time by factors from 4

to 40 in comparison with a similar planner using a traditional collision-checking strategy.

Figure 6-31 Schematic of PRM and SBL.

Though SBL can solve most problems, it sometimes fails to obtain a satisfactory path when planning in crowded environments. This is because it is difficult to find critical configurations automatically. The path obtained from SBL-based methods cannot be directly used for robot programming without applying post-processing optimization techniques. The path generated by SBL is often irregular, as shown in Figure 6-32(a). A simple optimization technique, path smoothing, is used in SBL, as illustrated in Figure 6-32(b). It can be seen that the path after the smoothing process is much smoother, shorter and more regular. However, it is still not satisfactory. Sometimes, it is too close to obstacles, or it extends in undesirable directions. Considering planning errors, the path may not be safe in a real operation. The unsatisfactory result is mainly because the planner misses some critical configurations when randomly setting milestones in the free configuration space. These missed critical milestones are easily perceived in 3D by a human user. Hence, with these observations, a semi-automatic path planner is proposed for planning an optimized path through the cooperation of computer and human operator, as described in the next section. The major role of the operator is to select/define critical configurations. The rest of the path planning task is done by the computer automatically.

(a) Screenshot of the irregular path generated by SBL.


(b) Screenshot of the path after simple automatic optimization.

Figure 6-32 Screen snapshots of the path generated by SBL.

6.7.2.2 Semi-automatic path planning

In the proposed planning process, a user needs to provide critical configurations qc (Figure 6-33(a)) in addition to initial configuration qi and goal configuration qg.

These critical configurations are inserted manually by manipulating the virtual robot through a 6-DOF haptic interface. The final path will be constrained to these configurations. Hence, a user-preferred path is generated automatically based on these configurations. For example, two critical configurations, qc1 and qc2, are selected by a user as shown in Figure 6-33(b). First, the path planner directly connects these configurations from qi to qc1, qc2 and qg to form a path. Then collision checking of the path is performed automatically. If any collision is detected in the path from qi to qc1, then SBL is used to form a new path between them by considering qc1 as a new goal configuration. Figure 6-33(c) and (d) show the planned path generated by the semi-automatic planner by means of manually providing one critical configuration. The path in Figure 6-33(c) is not automatically optimized, and its length is much smaller than that in Figure 6-32(a) due to the insertion of a critical configuration. The path in Figure 6-33(d) is optimized using the method that will be explained in the next section. The inserted critical configuration makes the path smoother and shorter in comparison with that generated by automatic methods. If the user provides several good configurations, the planner may generate a good path without the need for subsequent optimization. That is, no milestones other than the manually provided configurations are needed in the roadmap to generate the path. This is demonstrated by Figure 6-33(e), where no optimization method is used.
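The connect-then-fallback behavior just described can be summarized by the following sketch. Here `seg_free` and `fallback_plan` stand in for the system's collision checker and the SBL planner respectively, and are assumptions of this illustration:

```python
def plan_through(waypoints, seg_free, fallback_plan):
    """Semi-automatic planning sketch: connect user-supplied critical
    configurations directly; where a straight segment collides, delegate
    that sub-problem to an automatic planner (fallback_plan)."""
    path = [waypoints[0]]
    for a, b in zip(waypoints, waypoints[1:]):
        if seg_free(a, b):
            path.append(b)             # direct, collision-free link
        else:
            sub = fallback_plan(a, b)  # e.g. SBL with b as the sub-goal
            if sub is None:
                return None            # sub-problem unsolvable
            path.extend(sub[1:])       # splice, skipping the duplicate a
    return path
```

Because each collision-prone leg becomes an independent sub-problem anchored at user-chosen configurations, the automatic planner is only invoked where it is actually needed.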

(a) Insertion of one critical configuration.


(c) The path generated by the semi-automatic planner with one critical configuration.


(d) Screenshot of the user-preferred path.

(e) Screenshot of the user-preferred path generated by the insertion of two critical configurations.

Figure 6-33 The semi-automatic path planning method.

It should be noted that the manual selection of critical configurations through virtual tele-operation is intuitive and easy using the haptic interface. By letting the user define some critical configurations, the user's knowledge of the actual working environment is incorporated into the path planning.

The performance comparisons of SBL and our method (P1, P2) are given in Table 6-3 and Table 6-4. In Table 6-3, no optimization technique is applied, while a simple automatic path-smoothing method is used for the data listed in Table 6-4. One critical configuration is provided for planner #1 (P1) and two critical configurations for planner #2 (P2). Four metrics are used to measure the performances of the planners: planning time, path length, average manipulability r, and security s. The average manipulability is used to measure motion capabilities and is defined as:

$$r = \frac{1}{n}\sum_{i=1}^{n} w_i \tag{6-21}$$

where n is the total number of configurations in the path, and wi is manipulability (Yoshikawa 1985) at configuration i, which is defined by:

$$w_i = \left|\det J_i\right| \tag{6-22}$$

for non-redundant manipulators, while for the redundant manipulators it is given by:

$$w_i = \sqrt{\det\left(J_i J_i^T\right)} \tag{6-23}$$

where Ji is the Jacobian matrix at configuration i. The manipulability r is important for characterizing the ability of a robotic arm to move its end-effector quickly in any direction of its operational space in response to given joint velocities: the bigger it is, the better the motion ability along the specified path. If the path is too close to an obstacle, even though it is collision-free, it is not safe in view of the errors of the simulation system. Therefore, a security index s is proposed (see Appendix B):

s=n

n i =1

1 di

6-24


where dᵢ is the smallest distance between the robot and the obstacles at configuration i, and n is the number of configurations on the path. By definition, the larger the security index s, the safer the robot's movement along the path. From Table 6-3, we can see that P2 is the fastest method and its path length is the shortest. In terms of planning time and path length, P1 is better than SBL, while the differences in average manipulability are not significant. When optimization is applied to both the SBL-based method and our method, the resulting data are listed in Table 6-4. The path length of SBL can be greatly reduced; however, the security s drops considerably after optimization. From Table 6-4, the path generated by the proposed method (P1) still outperforms that of SBL.

Table 6-3 Comparison of SBL and semi-automatic path planning without optimization (P1, P2).

Table 6-4 Comparison of SBL and semi-automatic path planning with optimization (P1).
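As an illustration, the four path metrics can be computed directly from per-configuration Jacobians and obstacle clearances. The sketch below (the function names and the use of NumPy are our own illustrative choices, not the planner's actual code) evaluates the average manipulability of Equations 6-21 to 6-23 and the security index of Equation 6-24:

```python
import numpy as np

def manipulability(J):
    """Yoshikawa manipulability w = sqrt(det(J J^T)), Eq. 6-23; for a
    square (non-redundant) Jacobian this equals |det J|, Eq. 6-22."""
    return float(np.sqrt(np.linalg.det(J @ J.T)))

def path_metrics(jacobians, clearances):
    """Average manipulability r (Eq. 6-21) and security index s (Eq. 6-24)
    of a path, given the Jacobian and obstacle clearance per configuration."""
    n = len(jacobians)
    r = sum(manipulability(J) for J in jacobians) / n
    s = n / sum(1.0 / d for d in clearances)  # harmonic mean of clearances
    return r, s
```

For example, five identity Jacobians give r = 1, while the clearances 1, 0.1, 1.1, 0.9, 1.2 mm give s ≈ 0.36: the single small clearance dominates s, which is exactly the behavior the security index is designed to capture.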


6.7.2.3 Path modification and verification

A robot path from the method described above must be modified and verified before the robot program is generated, since the generated path tends to be irregular. There are two path modification methods in our system: automatic optimization and manual modification. The first tries to find a shortcut between a random pair of points on the path; if the shortcut is collision-free, the path between them is replaced by a straight line segment. It then recursively checks the path until no further shortcut can be found. The second simply lets the user manually edit the generated path. If the user thinks the path is too close to obstacles or passes through undesirable poses, he/she can pull some key configurations to the desired poses through virtual tele-operation, generating a user-preferred path. During this process, the user manipulates the haptic interface to guide the robot Tool Center Point (TCP). For ease of modification, a magnetic force effect is applied through the haptic interface, as explained in section 3.1.1 of Chapter 3. When the user moves the TCP to within a distance α of a critical configuration (Figure 6-34), an attracting force is exerted on the haptic interface to pull it towards that point. The user can then redefine the critical configuration, and the critical configuration on the path is replaced by the current configuration. In the process of path verification, the virtual robot is moved automatically along the generated path while collision detection is performed at a rate of 30 Hz. If any collision occurs, the robot does not stop; instead, the collision points are marked out. After the execution, the user can easily find which key configurations lead to collisions, and path modification can be applied again until a satisfactory collision-free path is generated.

Figure 6-34 Schematic of critical configuration selection in path modification.
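The automatic shortcut optimization described above, which repeatedly replaces the sub-path between a random pair of points with a straight segment when that segment is collision-free, can be sketched as follows. This is a minimal illustration: the `collision_free` predicate and the function names are hypothetical placeholders for the system's own collision checker.

```python
import random

def smooth_path(path, collision_free, tries=100):
    """Randomized shortcut smoothing: pick two random waypoint indices and,
    if the straight segment between them is collision-free, splice it in,
    discarding the intermediate waypoints."""
    path = list(path)
    for _ in range(tries):
        if len(path) < 3:          # nothing left to shortcut
            break
        i, j = sorted(random.sample(range(len(path)), 2))
        if j - i < 2:              # adjacent waypoints, no interior to remove
            continue
        if collision_free(path[i], path[j]):
            path = path[:i + 1] + path[j:]
    return path
```

The endpoints are always preserved, and every accepted shortcut strictly shortens the waypoint list, so the loop terminates quickly in practice.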

6.7.3 Conclusions and discussions

A haptic-aided path planning system which combines manual and automatic path planning methods is presented in this section. The advantages of human instinct and the powerful computation of computers are employed to develop a semi-automatic path planner that can generate a user-preferred path and improve planning efficiency using a haptically-controlled virtual robot. The system is based on virtual tele-operation, automatic path planning and haptic rendering methods. The robot manipulation method based on virtual tele-operation provides an interactive tool for intuitively manipulating a robot arm during robotic path planning. Forces and torques, displayed based on the collisions between robotic links and obstacles, can be felt by the user in the course of path planning or critical configuration selection. These haptic cues may provide great benefits for setting robotic configurations in a complex virtual environment where interactions usually involve object-object collisions. The robustness of the path planning is improved by the provision of manual selection of critical robot configurations using a haptically-controlled virtual robot arm. Using the haptic interface, the selection or definition of critical robot configurations is intuitive, easy and fast. Normally, only a few critical robot configurations need to be manually defined in a robot operation task. Even though manual interaction is introduced early in the path planning process, it might reduce the likelihood of the potentially more troublesome manual editing of a robot program generated from automatic methods. The robot shown in the experimental case is of 6-DOF; in fact, robots with more general structures can be designed and modeled according to the methods described in this thesis, because the haptic rendering method is independent of robot structures.

6.8 Conclusions and discussions

In this chapter, we present an algorithm of virtual tele-operation based on robotic modeling methods and haptic techniques. How to physically model a multi-DOF robot is explained in detail. Two methods are proposed for 6-DOF haptic rendering in the manipulation of a virtual robot: the direct rendering and simulation-based methods presented in section 3.2 of Chapter 3. Based on the virtual tele-operation and haptic rendering methods, several haptic VM prototyping systems are presented. Currently, we focus on the master side of tele-operation, namely haptic rendering and simulation. As possible future work, a real 6-DOF ABB robot, as shown in Figure 6-35, might be integrated into our system to perform tasks such as remote metal cutting and part assembly.


Figure 6-35 An ABB® robot for the haptic tele-operation system.


CHAPTER 7. CONCLUSIONS AND FUTURE WORK

In this thesis, original haptic rendering models for major interactions in virtual manufacturing have been presented. These models have been applied to haptic-aided reverse engineering, haptic-aided virtual machining, and virtual robotic assembly.

7.1 Summary of contributions

Major contributions of this research are listed as follows:

(1) Based on the analysis of proxy-based constraint methods, a force-based snap algorithm is proposed. When a user manipulates the tip of the haptic interface across the borderline of the snap field, the algorithm avoids the "force leap" usually caused by the proxy-based constraint method. Using this algorithm, it is easier and safer for a user to select a geometric entity on a 3D model.

(2) For better fidelity and fast haptic rendering, a Back-Propagation (BP) neural network is proposed for modeling the cutting forces in the turning simulation. It provides a good modeling tool for simulations that are influenced by several parameters in a non-linear pattern. The neural network is trained on experimental data and separated from the simulation system so that real-time simulation is achieved.

(3) A simulation-based 6-DOF haptic rendering method is proposed. In this method, a direct proxy rendering method and a virtual coupling method are used to decouple the rigid-body dynamic simulation from the haptic thread in order to alleviate the computational demand. Using this method, various dynamic interactions through a haptic interface in a complex virtual environment can be accomplished.

(4) A remote 6-DOF haptic rendering algorithm is proposed in order to improve the richness of haptic interactions over a network by means of a remote virtual coupling method.

(5) A volume-based haptic rendering method is proposed for drilling simulation. The influence of volumetric resolution and erosion on drilling forces is investigated in detail. The relations between drilling forces, tool parameters and cutting conditions are also studied.

(6) A system based on a haptic interface is developed for filling holes or gaps in models scanned by optical digitizers. Two different methods are proposed: a triangular mesh-based method and a volume-based method. The first, combining automatic and manual hole-filling, makes the system flexible, robust and effective. The second uses a physics-based 3D modeling tool, FreeForm™, to fill holes and gaps in scanned models. Several examples are given to illustrate the two methods, and their strengths and weaknesses are compared based on these examples.

(7) A virtual turning and grinding simulator based on haptics is developed. In the cutting operation of the virtual turning and grinding system, a NURBS deformation algorithm is proposed for workpiece modeling, which can accelerate haptic and graphical rendering. For the grinding operation, a simple NURBS trimming technique is used. A BP neural network is used to model the cutting force based on experimental data. Using the system, a user can manipulate a virtual tool to cut or polish a virtual workpiece with force feedback as if he/she were working on a real turning machine. During operation, tool positions and their corresponding forces are recorded in order to provide data for analyzing the performance of tool paths. Vibration, friction and viscous effects felt by the user improve the simulation fidelity. The proposed system provides an intuitive way not only for the modeling of three-dimensional revolved objects but also for training in lathe machining and grinding operations.

(8) A haptic-based virtual drilling system is proposed and developed. In the system, a user can manipulate a virtual drill tool to freely drill a part with force and torque feedback. The haptic rendering of the drilling process is based on high-resolution volumetric data and multi-point collision detection. The volumetric data are generated by a voxelization method, and the volumetric model is graphically displayed using an efficient rendering method based on a local Marching Cubes algorithm. Implicit functions and a hierarchical data structure are used to speed up collision detection. The relations between force, model resolution, drill tool parameters and drilling conditions are also investigated in detail. This simulation can be used to familiarize trainees with the drilling process at the beginning of training.

(9) A haptic-based virtual tele-operation method is proposed, which provides convenient tools for offline robot programming. It is based on robotic modeling methods and haptic techniques. How to physically model a multi-DOF robot and how to interactively manipulate a virtual robot are presented in detail.

(10) Based on the haptic-based virtual tele-operation method, a prototype system for robotic path following is developed. In the path following examples, we demonstrate that haptics and virtual constraints can improve users' performance.

(11) A virtual robotic assembly system is also implemented. In virtual robotic assembly tasks, the prototype system can be used to train novices to understand the assembly process, and the rendered forces and torques help users perform the assembly tasks. The system for virtual bone fracture reduction based on haptics provides convenient tools and vivid virtual environments for preoperative planning of fracture reduction, which is expected to improve reduction performance and reduce radiation exposure.

(12) A haptic-aided robotic path planning system is developed based on the haptic-based virtual tele-operation method. In this system, the advantages of human instinct and the powerful computation of computers are exploited to develop a semi-automatic path planner that can generate a user-preferred path and improve planning efficiency. The robustness of the path planning is improved by the provision of manual selection of critical robot configurations. Using the haptic interface, the selection or definition of critical robot configurations is intuitive, easy and fast. Haptic cues may provide great benefits for setting robotic configurations in a complex virtual environment where interactions usually involve object-object collisions.

7.2 Future work

In this research, various haptic modeling methods for major interactions in virtual manufacturing are proposed. Several experimental systems are developed based on a haptic interface for haptic-aided reverse engineering, haptic-aided virtual machining, and haptic-aided tele-operation. Due to time constraints, the following aspects need further development.


(1) Improvement of haptic-aided reverse engineering

The triangular mesh-based hole-filling method is applicable to objects of small or medium size. If the number of triangles is too large, the program cannot guarantee the 1 kHz haptic update rate, thereby inducing haptic instability. One solution might be to separate the holes and boundaries from the other parts of the model: the other parts would be only graphically displayed, while the holes and boundaries would be both graphically and haptically rendered. The volume-based method can process models with a large number of triangles, but it requires model conversion, and this conversion of the model representation raises a model-quality problem: high resolution is needed for the conversion, which results in a high computational requirement.

(2) Improvement of haptic-aided virtual machining

In the turning simulation, the dynamic part is modeled by a NURBS surface of revolution defined by its control points. The surface quality is affected by the resolution of the control points, which we currently specify before the simulation starts. As the part is cut layer by layer, the profile length of the surface becomes longer, so the surface quality degenerates. A possible solution is to add control points dynamically according to the distance between adjacent control points. In the drilling force modeling, the force and torque models are derived, and the relations between drilling forces, cutting conditions and the radius of the drill head are depicted according to these models. It should be noted that validation of these models through experiments is needed in future work.

(3) Improvement of haptic-aided virtual tele-operation

Local virtual coupling and remote virtual coupling are proposed for stable 6-DOF haptic rendering in the tele-operation of a virtual robot. The advantages of virtual coupling techniques are reduced interpenetration and higher stability. The main disadvantage is that the coupling may introduce perceptible haptic artifacts when users explore complex shared virtual environments; how to reduce these artifacts needs further study. We also intend to extend our haptic rendering methods to complex shared virtual environments. This work is preliminary, because overcoming network latency and jitter and maintaining synchronization remain challenging open issues.
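For illustration, the basic idea of a virtual coupling — a spring-damper that ties the measured device pose to a simulated proxy, so that only the coupling force crosses between the haptic and simulation threads — can be sketched in one dimension as below. The gains k and b, the proxy mass, and the function names are illustrative assumptions, not the values used in our system.

```python
def coupling_force(x_device, v_device, x_proxy, v_proxy, k=500.0, b=5.0):
    """Spring-damper virtual coupling force applied to the proxy; its
    reaction is what the haptic device renders to the user."""
    return k * (x_device - x_proxy) + b * (v_device - v_proxy)

def step_proxy(x_proxy, v_proxy, f_ext, x_device, v_device, m=0.1, dt=0.001):
    """One explicit-Euler step of the proxy dynamics at the 1 kHz haptic
    rate; f_ext stands for the contact force from the rigid-body simulation."""
    f = coupling_force(x_device, v_device, x_proxy, v_proxy) + f_ext
    v_proxy = v_proxy + (f / m) * dt
    x_proxy = x_proxy + v_proxy * dt
    return x_proxy, v_proxy
```

Because the proxy can lag behind the device, the user feels the coupling spring rather than raw contact impulses; this is the source of both the stability benefit and the perceptible artifacts discussed above.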


APPENDIX A: FORCE AND TORQUE MODELING FOR DRILLING SIMULATION

The cutting power P can be estimated by the following equation (Choi and Jerard 1998):

P = kp · MRR    (A-1)

where kp is the unit power consumption and MRR is the material removal rate. The MRR is calculated simply by (Yang and Chen 2003):

MRR = MR / t    (A-2)

where MR is the material removed in one haptic cycle and t is the period of the haptic cycle (typically 1 ms). The cutting power P can also be approximated by:

P = ft · f + tt · ω    (A-3)

where ft is the thrust force, f is the feed rate, tt is the torque, and ω is the angular rotating velocity. The torque tt is calculated with:

tt = fc · d / 2    (A-4)

where fc is the cutting force and d is the diameter of the drill head. The cutting force fc is assumed to be proportional to the thrust force ft:

fc = ktc · ft    (A-5)

where ktc is a constant related to the geometrical parameters of the drill head and the workpiece material. From the above equations we obtain the thrust force ft and the torque tt:

ft = kp · MR / ((f + 0.5 · ktc · d · ω) · t)    (A-6)

tt = ktc · kp · MR · d / ((2f + ktc · d · ω) · t)    (A-7)

In the volumetric model, the removed material MR is calculated by the following equation:

MR = l³ · Σᵢ₌₁ⁿ sᵢ    (A-8)

where l is the voxel size, and sᵢ is the volumetric scalar value of the ith removed voxel. According to Figure 3-39, we rewrite Equations (A-6) and (A-7) to obtain the vector expressions of the thrust force Ft and the torque T:

Ft = (kp · l³ / ((f + 0.5 · ktc · d · ω) · t)) · Σᵢ₌₁ⁿ sᵢ · OXᵢ / |OXᵢ|    (A-9)

T = (ktc · kp · d · l³ / ((2f + ktc · d · ω) · t)) · Σᵢ₌₁ⁿ sᵢ · (OXᵢ × OO′)    (A-10)

where O is the current drill head center, Xᵢ is the ith voxel position, and OO′ is the unit vector of the drill shank axis.
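A small numerical sketch of the scalar models is given below; it evaluates Equations (A-6) to (A-8) and checks them against the identity tt = (ktc · d/2) · ft implied by Equations (A-4) and (A-5). All parameter values in the example are arbitrary placeholders, not measured machining constants.

```python
def drilling_forces(removed_voxels, l, kp, ktc, d, f, omega, t=0.001):
    """Thrust force ft (Eq. A-6) and torque tt (Eq. A-7) for one haptic
    cycle. removed_voxels: scalar values s_i of the voxels eroded this
    cycle; l: voxel size; kp: unit power; ktc: cutting/thrust force ratio;
    d: drill diameter; f: feed rate; omega: spindle angular velocity;
    t: haptic cycle period (1 ms)."""
    MR = l**3 * sum(removed_voxels)                           # Eq. (A-8)
    ft = kp * MR / ((f + 0.5 * ktc * d * omega) * t)          # Eq. (A-6)
    tt = ktc * kp * MR * d / ((2 * f + ktc * d * omega) * t)  # Eq. (A-7)
    return ft, tt

# Placeholder parameter values, for illustration only.
ft, tt = drilling_forces([1.0, 1.0, 0.5], l=0.001, kp=2.0, ktc=0.6,
                         d=0.008, f=0.002, omega=100.0)
```

Both outputs scale linearly with the removed volume MR, so the rendered force and torque grow smoothly with the amount of material eroded per cycle.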


APPENDIX B: EXPLANATION OF PATH SAFETY INDEX

Figure A-1 Schematic of path safety evaluation.

One simple method of judging whether a path is safe is to check, at every configuration, the distance between the robot and the obstacle, as shown in the figure above. If a distance dᵢ is smaller than a threshold δ, the path may be regarded as unsafe. The problem with this method is that we cannot know the exact value of δ, nor to what extent the path is unsafe. We therefore seek a quantitative method to evaluate the safety of the path. One simple method is to calculate the average distance of the path to the obstacle using the following equation:

s = (1/n) Σᵢ₌₁ⁿ dᵢ    (A-11)

where n is the number of configurations on the path. The problem with this method is that if only one point is very close to the obstacle while the others are far away from it, the path is still judged to be safe. Therefore, the following equation is proposed:

s = n / Σᵢ₌₁ⁿ (1/dᵢ)    (A-12)


Using this method, the above problem is solved. For example, as shown in Figure A-1, if the distances are 1, 0.1, 1.1, 0.9, 1.2 (mm) and the threshold δ is around 0.5 (mm), the safety indices are 0.86 based on Equation (A-11) and 0.36 based on Equation (A-12). With the first method the path would be judged safe, because the index 0.86 is bigger than δ, whereas the second method correctly judges the path to be unsafe.
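The worked example above can be checked directly; the short script below (Python, for illustration only) reproduces both indices from the five sample distances:

```python
def mean_clearance(dists):
    """Average distance to the obstacle, Eq. (A-11)."""
    return sum(dists) / len(dists)

def security_index(dists):
    """Harmonic-mean security index, Eq. (A-12)."""
    return len(dists) / sum(1.0 / d for d in dists)

dists = [1.0, 0.1, 1.1, 0.9, 1.2]  # mm, the example of Figure A-1
print(round(mean_clearance(dists), 2))   # → 0.86
print(round(security_index(dists), 2))   # → 0.36
```

The single 0.1 mm clearance barely moves the average but pulls the harmonic-mean index well below the 0.5 mm threshold, which is why Equation (A-12) flags the path as unsafe.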


LIST OF PUBLICATIONS

1. Xuejian He, Yonghua Chen, Libo Tang, Modeling of Flexible Needle for Haptic Insertion Simulation, Proceedings of the 2008 IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems (VECIMS 2008), July 14-16, 2008, Istanbul, Turkey, pp. 184-189.

2. L. Tang, Y. Chen, X. He, Compliant Needle Modeling and Steerable Insertion Simulation, Computer-Aided Design and Applications, Vol. 5, Nos. 1-4, 2008, pp. 39-46.

3. Y.H. Chen, L.L. Lian, X.J. He, Haptic Rendering of Three-dimensional Heterogeneous Fine Surface Features, Computer-Aided Design and Applications, Vol. 5, Nos. 1-4, 2008, pp. 1-16.

4. Xuejian He, Yonghua Chen, Six-Degree-of-Freedom Haptic Rendering in Virtual Teleoperation, IEEE Transactions on Instrumentation and Measurement, Vol. 57, No. 9, 2008, pp. 1866-1875.

5. Xuejian He, Yonghua Chen, Libo Tang, Haptic simulation of flexible needle insertion, Proceedings of the 2007 IEEE International Conference on Robotics and Biomimetics, December 15-18, 2007, Sanya, China, pp. 607-611.

6. Libo Tang, Yonghua Chen, Xuejian He, Magnetic Force Aided Compliant Needle Navigation and Needle Performance Analysis, Proceedings of the 2007 IEEE International Conference on Robotics and Biomimetics, December 15-18, 2007, Sanya, China, pp. 612-616.

7. L.B. Tang, Y.H. Chen, X.J. He, Multi-material Compliant Mechanism Design and Haptic Evaluation, Virtual and Physical Prototyping, Vol. 2, No. 3, September 2007, pp. 155-160.

8. X.J. He, Y.H. Chen, Bone Drilling Simulation Based on Six Degree-of-Freedom Haptic Rendering, EuroHaptics 2006 Conference, Paris, France, July 2006, pp. 147-152.

9. X.J. He, Y.H. Chen, A haptics-guided hole-filling system based on triangular mesh, Computer-Aided Design & Applications, Vol. 3, No. 6, 2006, pp. 711-718.

10. Xuejian He, Yonghua Chen, A Haptic Virtual Turning Operation System, Proceedings of the 2006 IEEE International Conference on Mechatronics and Automation, Luoyang, China, June 25-28, 2006, pp. 435-440.


REFERENCES

Adachi, Y., T. Kumano, et al. (1995). "Intermediate representation for stiff virtual objects." Proc. IEEE Virtual Reality Annual Intl. Symposium 95: 203-210.

Agus, M., A. Giachetti, et al. (2003). "Adaptive techniques for real-time haptic and visual simulation of bone dissection." Virtual Reality, 2003. Proceedings. IEEE: 102-109.

Allotta, B., F. Belmonte, et al. (1996). "Study on a mechatronic tool for drilling in the osteosynthesis of long bones: Tool/bone interaction, modeling and experiments." Mechatronics (Oxford) 6(4): 447-459.

Amato, N. M., O. B. Bayazit, et al. (1999). "Providing haptic 'hints' to automatic motion planners." Proceedings of the Phantom User's Group Workshop.

Antonishek, B., D. D. Egts, et al. (1998). "Virtual Assembly Using Two-Handed Interaction Techniques on the Virtual Workbench." Proceedings of the 1998 ASME Design Technical Conference and Computer in Engineering Conference: 13-16.

Balijepalli, A. and T. Kesavadas (2003). "A haptic based virtual grinding tool." Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2003. HAPTICS 2003. Proceedings. 11th Symposium on: 390-396.

Balijepalli, A. and T. Kesavadas (2004). "Value-addition of haptics in operator training for complex machining tasks." Journal of Computing and Information Science in Engineering 4(2): 91-97.

Barraquand, J. and J. C. Latombe (1991). "Robot Motion Planning: A Distributed Representation Approach." The International Journal of Robotics Research 10(6): 628.

Basdogan, C., C. Ho, et al. (1997). "A ray-based haptic rendering technique for displaying shape and texture of 3D objects in virtual environments." Proceedings of the ASME Dynamic Systems and Control Division 61: 77-84.

Basdogan, C., C. H. Ho, et al. (2000). "An experimental study on the role of touch in shared virtual environments." ACM Transactions on Computer-Human Interaction (TOCHI) 7(4): 443-460.

Benali-Khoudja, M., M. Hafez, et al. (2004). "Tactile interfaces: a state-of-the-art survey." Int. Symposium on Robotics.

Bohlin, R. and L. E. Kavraki (2000). "Path planning using lazy PRM." Robotics and Automation, 2000. Proceedings. ICRA '00. IEEE International Conference on 1.

Burdea, G. C. (1996). Force and touch feedback for virtual reality, John Wiley & Sons, Inc. New York, NY, USA.

Butterworth, J., A. Davidson, et al. (1992). "3DM: a three dimensional modeler using a head-mounted display." Proceedings of the 1992 symposium on Interactive 3D graphics: 135-138.

Buttolo, P., R. Oboe, et al. (1997). "Architectures for shared haptic virtual environments." Computers & Graphics 21(4): 421-429.

Canny, J. F. (1988). The Complexity of Robot Motion Planning, MIT Press.

Celniker, G. and D. Gossard (1991). "Deformable curve and surface finite-elements for free-form shape design." Proceedings of the 18th annual conference on Computer graphics and interactive techniques: 257-266.

Celniker, G. and W. Welch (1992). "Linear constraints for deformable non-uniform B-spline surfaces." Proceedings of the 1992 symposium on Interactive 3D graphics: 165-170.

Chang, B. (2002). Real-time Impulse-based Simulation of Rigid Body Systems for Haptic Display, Northwestern University.

Chen, X., N. Xu, et al. (2005). "A Virtual Environment for Collaborative Assembly." Proceedings of the Second International Conference on Embedded Software and Systems (ICESS '05).

Chen, Y. H., Y. Z. Wang, et al. (2004). "Towards a haptic virtual coordinate measuring machine." International Journal of Machine Tools and Manufacture 44(10): 1009-1017.

Chen, Y. H. and Z. Y. Yang (2007). "Haptic modeling as an integrating tool for product development." The International Journal of Advanced Manufacturing Technology 33(7): 635-642.

Chui, C. K. and M. J. Lai (2000). "Filling polygonal holes using C-1 cubic triangular spline patches." Computer Aided Geometric Design 17(4): 297-307.

Cimerman, M. and A. Kristan (2007). "Preoperative planning in pelvic and acetabular surgery: The value of advanced computerised planning modules." Injury 38(4): 442-449.

Citak, M., M. J. Gardner, et al. (2007). "Virtual 3D planning of acetabular fracture reduction." J Orthop Res.

Colgate, J. E., M. C. Stanley, et al. (1994). "Issues in the haptic display of tool use." Proceedings of the ASME Haptic Interfaces for Virtual Environment and Teleoperator Systems: 140-144.

Crison, F., A. Lecuyer, et al. (2005). "Virtual Technical Trainer: Learning How to Use Milling Machines with Multi-Sensory Feedback in Virtual Reality." Proceedings of the IEEE International Conference on Virtual Reality (VR '05).

Curless, B. and M. Levoy (1996). "A volumetric method for building complex models from range images." Proceedings of the 23rd annual conference on Computer graphics and interactive techniques: 303-312.

Dachille, F., H. Qin, et al. (2001). "A novel haptics-based interface and sculpting system for physics-based geometric design." Computer-Aided Design 33(5): 403-420.

Davis, J., S. R. Marschner, et al. (2002). "Filling holes in complex surfaces using volumetric diffusion." 3D Data Processing Visualization and Transmission, 2002. Proceedings. First International Symposium on: 428-438.

Denavit, J. and R. S. Hartenberg (1955). "A kinematic notation for lower-pair mechanisms based on matrices." Journal of Applied Mechanics 22(2): 215-221.

Dimension, F. "Delta and Omega." http://www.forcedimension.com/products.

Diolaiti, N. and C. Melchiorri (2002). "Teleoperation of a mobile robot through haptic feedback." Haptic Virtual Environments and Their Applications, IEEE International Workshop 2002 HAVE: 67-72.

EASY-ROB. "EASY-ROB." http://www.easy-rob.com/en/easy-rob.html.

Ebrahimi, M. and R. Whalley (1998). "Machine tool syntheses in virtual machining." International Journal of Materials and Product Technology (Switzerland) 13(3): 301-312.

Eriksson, M., H. Flemmer, et al. (2005). "A Haptic and Virtual Reality Skull Bone Surgery Simulator." World Haptics Conference.

Faber, P. and B. Fisher (2001). "A Buyer's Guide to Euclidean Elliptical Cylindrical and Conical Surface Fitting." Proc. British Machine Vision Conference BMVC01, Manchester: 521-530.

Fang, X. D., S. Luo, et al. (1998). "Virtual machining lab for knowledge learning and skills training." Computer Applications in Engineering Education 6(2): 89-97.

Fowler, B. and R. Bartels (1993). "Constraint-based curve manipulation." IEEE Computer Graphics and Applications 13(5): 43-49.

Frank Dachille, I. X., H. Qin, et al. (1999). "Haptic Sculpting of Dynamic Surfaces." Proceedings of the Symposium on Interactive 3D Graphics: 103-110.

Galeano, D. and S. Payandeh (2005). "Artificial and natural force constraints in haptic-aided path planning." IEEE Int. Workshop on Haptic Audio Visual Environments and their Applications.

Goertz, R. and R. Thompson (1954). "Electronically controlled manipulator." Nucleonics 12(11): 46-47.

Gomes de, S. A. and G. Zachmann (1999). "Virtual reality as a tool for verification of assembly and maintenance processes." Computers & Graphics 23(3): 389-403.

Gorczyca, F. E. (1987). Application of Metal Cutting Theory, Industrial Press Inc.

Gregory, A., A. Mascarenhas, et al. (2000). "Six degree-of-freedom haptic display of polygonal models." Proceedings of the conference on Visualization '00: 139-146.

Gupta, K. and A. P. Pobil (1998). Practical Motion Planning in Robotics: Current Approaches and Future Directions, John Wiley & Sons, Inc. New York, NY, USA.

Haption. "Virtuose 6D Desktop." http://www.haption.com/eng/html/materiel.php.

Hayward, V., O. R. Astley, et al. (2004). "Haptic Interfaces and Devices." Sensor Review 24(1): 16-29.

Hazan, E. J. and L. Joskowicz (2003). "Computer-assisted image-guided intramedullary nailing of femoral shaft fractures." Techniques in Orthopaedics 18(2): 191-200.

Herndon, K. P., A. van Dam, et al. (1994). "The challenges of 3D interaction: a CHI '94 workshop." ACM SIGCHI Bulletin 26(4): 36-43.

Herndon, K. P., R. C. Zeleznik, et al. (1992). "Interactive shadows." Proceedings of the 5th annual ACM symposium on User interface software and technology: 1-6.

Hitchcock, M. F., A. D. Baker, et al. (1994). "The role of hybrid systems theory in virtual manufacturing." Computer-Aided Control System Design, 1994. Proceedings., IEEE/IFAC Joint Symposium on: 345-350.

Hobkirk, J. A. and K. Rusiniak (1977). "Investigation of variable factors in drilling bone." J Oral Surg 35(12): 968-973.

Hsu, D., L. E. Kavraki, et al. (1998). "On Finding Narrow Passages with Probabilistic Roadmap Planners." Robotics: The Algorithmic Perspective: the Third Workshop on the Algorithmic Foundations of Robotics.

Huxley, H. (1987). "Double vision reveals the structure of muscle." New Scientist 1561: 42-45.

Hwang, Y. K. and N. Ahuja (1992). "Gross motion planning - A survey." ACM Computing Surveys (CSUR) 24(3): 219-291.
Immersion "Immersion Products " http://www.immersion.com/corporate/products/. Iwata, K., M. Onosato, et al. (1995). "A Modelling and Simulation Architecture for Virtual Manufacturing Systems." CIRP Annals-Manufacturing Technology 44(1): 399-402.


Jayaram, S., H. I. Connacher, et al. (1997). "Virtual assembly using virtual reality techniques." Computer-Aided Design 29(8): 575-584. Jayaram, S., U. Jayaram, et al. (1999). "VADE: A Virtual Assembly Design Environment." Jeng, E. and Z. Xiang (1996). "Moving cursor plane for interactive scalping." ACM Trans Graphics 15: 211-222. Joskowicz, L., C. Milgrom, et al. (1998). "FRACAS: a system for computer-aided image-guided long bone fracture surgery." Comput Aided Surg 3(6): 271-88. Jun, Y. (2005). "A piecewise hole filling algorithm in reverse engineering." Computer-Aided Design 37(2): 263-270. Kang, H., Y. S. Park, et al. (2004). "Visually and Haptically Augmented Teleoperation in D&D Tasks Using Virtual Fixtures." 10th Robotics & Remote Systems Mtg. Proceedings: 466-471. Kavraki, L. E., P. Svestka, et al. (1996). "Probabilistic roadmaps for path planning in high-dimensionalconfiguration spaces." Robotics and Automation, IEEE Transactions on 12(4): 566-580. Kim, J., H. Kim, et al. (2004). "Transatlantic touch: a study of haptic collaboration over long distance." Presence: Teleoperators and Virtual Environments 13(3): 328337. Kitware, I. "The Visualization ToolKit (VTK)." http://www.vtk.org/. Koga, Y., K. Kondo, et al. (1994). "Planning motions with intentions." Proceedings of the 21st annual conference on Computer graphics and interactive techniques: 395-408. Kosilova, A. G. and e. al. (1985). "Manufacturing Handbook." 2: 347-356. Kuehne, R. P. and J. Oliver (1995). "A virtual environment for interactive assembly planning and evaluation." Design Engineering Technical Conferences 2: 863-267. Larsen, E., S. Gottschalk, et al. (2000). "Fast proximity queries with swept sphere volumes." Proc. of IEEE Conf. on Robotics and Automation. Latombe, J. C. (1991). Robot Motion Planning, Kluwer Academic Publishers. Lee, S., G. Sukhatme, et al. (2002). "Haptic Teleoperation of a Mobile Robot: A User Study." 
Proceedings of the IEEE/RSJ Int' l Conference on Intelligent Robots and Systems: 2867-2874. Lin, E., I. Minis, et al. (1994). "Virtual Manufacturing User Workshop." Proceedings of the Virtual Manufacturing User Workshop, Lawrence Associates Inc.


Lin, W. S., B. Y. Lee, et al. (2001). "Modeling the surface roughness and cutting force for turning." Journal of Materials Processing Tech. 108(3): 286-293. Lin, Y. Z. and Y. L. Shen (2004). "Enhanced virtual machining for sculptured surfaces by integrating machine tool error models into NC machining simulation." International Journal of Machine Tools & Manufacture 44(1): 79-86. Loop, C. T. (1987). Smooth subdivision surfaces based on triangles, Dept. of Mathematics, University of Utah. Lorensen, W. E. and H. E. Cline (1987). "Marching cubes: A high resolution 3D surface construction algorithm." Proceedings of the 14th annual conference on Computer graphics and interactive techniques: 163-169. Lozano-Perez, T. and M. A. Wesley (1979). "An algorithm for planning collision-free paths among polyhedral obstacles." Communications of the ACM 22(10): 560-570. Mark, W. R., S. C. Randolph, et al. (1996). "Adding force feedback to graphics systems: issues and solutions." Proceedings of the 23rd annual conference on Computer graphics and interactive techniques: 447-452. materialise "Mimics." Mimics.html.

http://www.materialise.com/materialise/view/en/92458-

MathWorks, T. "MATLAB http://www.mathworks.com/access/helpdesk/help/techdoc/.

Documentations."

Mayr, H. and J. Heinzelreiter (1991). "Modeling and Simulation of the Robotics/NC Machining Process Using a Spatial Enumeration Representation." Fifth International Conference on Robots in Unstructured environment 42: 1594-1597. McDonnell, K. T., H. Qin, et al. (2001). "Virtual clay: a real-time sculpting system with haptic toolkits." Proceedings of the 2001 symposium on Interactive 3D graphics: 179-190. McLaughlin, M. L., G. Sukhatme, et al. (2001). Touch in Virtual Environments: Haptics and the Design of Interactive Systems, Prentice Hall PTR Upper Saddle River, NJ, USA. McNeely, W. A., K. D. Puterbaugh, et al. (1999). "Six degree-of-freedom haptic rendering using voxel sampling." Proceedings of the 26th annual conference on Computer graphics and interactive techniques: 401-408. Mellet-D' Huart, D., G. Michela, et al. (2004). "An application to training in the field of metal machining as a result of research-industry collaboration." proceedings of the Virtual Reality Conference (VRIC). Mitsi, S., K.-D. Bouzakis, et al. (2005). "Off-line programming of an industrial robot for manufacturing." Int J Adv Manuf Technol 26: 262-267. 240

Mujber, T. S., T. Szecsi, et al. (2004). "Virtual reality applications in manufacturing process simulation." Journal of Materials Processing Technology 155-156: 1834-1838.
Mukherjee, S., M. Rendsburg, et al. (2005). "Surgeon-instructed, Image-guided and Robot-assisted Long Bone Fractures Reduction." 1st International Conference on Sensing Technology.
Nelson, D. D., D. E. Johnson, et al. (2005). "Haptic rendering of surface-to-surface sculpted model interaction." International Conference on Computer Graphics and Interactive Techniques.
Novint. "Falcon." http://home.novint.com/products/novint_falcon.php.
Ortega, M., S. Redon, et al. (2006). "A Six Degree-of-Freedom God-Object Method for Haptic Display of Rigid Bodies." IEEE International Conference on Virtual Reality.
Otaduy, M. A. and M. C. Lin (2005). "Sensation preserving simplification for haptic rendering." International Conference on Computer Graphics and Interactive Techniques.
Otaduy, M. A. and M. C. Lin (2005). "Stable and responsive six-degree-of-freedom haptic manipulation using implicit integration." First Joint Eurohaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (WHC 2005): 247-256.
Otaduy, M. A. and M. C. Lin (2006). High Fidelity Haptic Rendering, Morgan & Claypool Publishers.
Peng, X., X. Chi, et al. (2003). "Bone surgery simulation with virtual reality." ASME DETC2003/CIE, Chicago, USA, September 2-6.
Pere, E., N. Langrana, et al. (1996). "Virtual mechanical assembly on a PC-based system." Proceedings of the 1996 ASME Design Engineering Technical Conferences and Computers in Engineering Conference, August, Irvine, California.
Piegl, L. (1989). "Modifying the shape of rational B-splines. Part 1: curves." Computer-Aided Design 21(10): 509-518.
Piegl, L. A. and W. Tiller (1997). The NURBS Book, Springer.
Quanser. "5 DOF Haptic Wand System." http://www.quanser.com/NET/Industrial/Systems_and_Products/Sys_5Dof_Haptic.aspx.
Rayevskaya, V. and L. L. Schumaker (2005). "Multi-sided macro-element spaces based on Clough-Tocher triangle splits with applications to hole filling." Computer Aided Geometric Design 22(1): 57-79.
Reif, J. H. (1979). "Complexity of the mover's problem and generalizations." Proceedings of the 20th IEEE Symposium on Foundations of Computer Science: 421-427.
Reinkensmeyer, D., C. Painter, et al. (2000). "An Internet-Based, Force-Feedback Rehabilitation System for Arm Movement after Brain Injury." Proceedings of CSUN's 15th Annual International Conference, "Technology and Persons with Disabilities", March 20-25.
ROBOOP (2002). "Robotics Object-Oriented Package in C++, version 1.13, Documentation, R. Gourdeau." Ecole Polytechnique de Montreal.
Rumelhart, D. E., G. E. Hinton, et al. (1986). Learning internal representations by error propagation, MIT Press, Cambridge, MA, USA.
Ruspini, D. C., K. Kolarov, et al. (1997). "The haptic display of complex graphical environments." Proceedings of the 24th annual conference on Computer graphics and interactive techniques: 345-352.
Santa-Cruz, D. and T. Ebrahimi (2002). "Coding of 3D virtual objects with NURBS." Signal Processing 82: 1581-1593.
Sachs, E., A. Roberts, et al. (1991). "3-Draw: a Tool for Designing 3D Shapes." IEEE Computer Graphics and Applications 11(6): 18-26.
Sanchez, G. and J. C. Latombe (2001). "A single-query bi-directional probabilistic roadmap planner with lazy collision checking." Int. Symp. Robotics Research.
Scheuering, M., C. Rezk-Salama, et al. (2001). "Interactive Repositioning of Bone Fracture Segments." Proceedings of the Vision Modeling and Visualization Conference 2001: 499-506.
Sharir, M. (1997). Algorithmic motion planning. Handbook of Discrete and Computational Geometry, CRC Press, Inc., Boca Raton, FL.
Sheridan, T. B. (1992). Telerobotics, Automation, and Human Supervisory Control, MIT Press.
Sherman, W. R. and A. B. Craig (2003). Understanding Virtual Reality: Interface, Application, and Design, Morgan Kaufmann.
Shukla, C., M. Vazquez, et al. (1996). "Virtual manufacturing: An overview." Computers & Industrial Engineering 31(1-2): 79-82.
Smith, R. (2004). "Open Dynamics Engine v0.5 User Guide." Retrieved December 5, 2005.
SensAble Technologies (2004). "3D Touch™ SDK OpenHaptics™ Toolkit Version 1.0 Programmer's Guide."
Thon, S., G. Gesquiere, et al. (2004). "A Low Cost Antialiased Space Filled Voxelization of Polygonal Objects." GraphiCon 2004: 71-78.
Turro, N. and O. Khatib (2001). "Haptically Augmented Teleoperation." Experimental Robotics VII.
Van der Linde, R. Q., P. Lammertse, et al. (2002). "The HapticMaster, a new high-performance haptic interface." Eurohaptics 2002: 1-5.
Varady, T., R. R. Martin, et al. (1997). "Reverse engineering of geometric models - an introduction." Computer-Aided Design 29(4): 255-268.
Wang, T. Y., G. F. Wang, et al. (2002). "Construction of a realistic scene in virtual turning based on a global illumination model and chip simulation." Journal of Materials Processing Technology 129(1-3): 524-528.
Westphal, R., T. Gosling, et al. (2006). "3D Robot Assisted Fracture Reduction." Proceedings of the 10th International Symposium on Experimental Robotics 2006.
Wiet, G. J., D. Stredney, et al. (2002). "Virtual temporal bone dissection: An interactive surgical simulator." Otolaryngology-Head and Neck Surgery 127(1): 79-83.
Wiggins, K. L. and S. Malkin (1976). "Drilling of bone." J Biomech 9(9): 553-559.
Wikipedia (2008). "Reverse engineering." http://en.wikipedia.org/wiki/Reverse_engineering.
Winkelbach, S., R. Westphal, et al. (2003). "Pose Estimation of Cylindrical Fragments for Semi-Automatic Bone Fracture Reduction." Pattern Recognition (DAGM 2003), Lecture Notes in Computer Science 2781.
Yang, Z. and Y. Chen (2005). "A reverse engineering method based on haptic volume removing." Computer-Aided Design 37(1): 45-54.
Yoshikawa, T. (1985). "Manipulability of Robotic Mechanisms." The International Journal of Robotics Research 4(2): 3.
Zhengyi, Y. and C. Yonghua (2003). "Haptic Rendering of Milling." Proceedings of Eurohaptics Conference, Dublin, Ireland: 206-217.
Zhu, W. and Y. S. Lee (2004). "Five-axis pencil-cut planning and virtual prototyping with 5-DOF haptic interface." Computer-Aided Design 36(13): 1295-1307.
Zhuozhi, L., L. Shengyi, et al. (1998). "Modeling and simulation of virtual machining process." Journal of National University of Defense Technology 20: 31-34.
Zilles, C. B. and J. K. Salisbury (1995). "A constraint-based god-object method for haptic display." Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, Human Robot Interaction, and Cooperative Robots 3: 146-151.
Zorriassatine, F., C. Wykes, et al. (2003). "A survey of virtual prototyping techniques for mechanical product development." Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 217(4): 513-530.