Industrial Engineering Research An International Journal of IE Theory and Application

Industrial Engineering Research is seeking original manuscripts reporting new developments in theory, application and results of empirical research in Industrial Engineering. The journal is devoted to promoting the development of Industrial Engineering knowledge by publishing quality articles from all countries. Interested contributors should submit original manuscripts in English (not published previously or currently under consideration by other journals) to the Editor-in-Chief. All submitted manuscripts will be peer reviewed based on their originality, accuracy, thoroughness, usefulness, and quality.

Editorial Board

EDITOR-IN-CHIEF:

C.Y. Tang, The Hong Kong Polytechnic University

ASSOCIATE EDITORS:

Alan H.S. Chan, City University of Hong Kong
Tommy K.L. Choy, The Hong Kong Polytechnic University
George Q. Huang, The University of Hong Kong
W.H. Ip, The Hong Kong Polytechnic University
H.W. Law, City University of Hong Kong
Richard H.Y. So, The Hong Kong University of Science and Technology
Fugee Tsung, The Hong Kong University of Science and Technology
Benjamin Yen, The University of Hong Kong

SECRETARY:

Gary C.P. Tsui, The Hong Kong Polytechnic University

INTERNATIONAL EDITORIAL BOARD

Jane Algee, The Institute of Industrial Engineers
Hong Chen, University of British Columbia
Edwin T. C. Cheng, The HK Polytechnic University
Min K. Chung, Pohang University of Science & Tech.
K. K. Hon, University of Liverpool
Keebom Kang, Naval Postgraduate School
Waldemar Karwowski, University of Louisville
W. S. Lau, Vocational Training Council
W. B. Lee, The Hong Kong Polytechnic University
Edmond K.K. Lo, Vocational Training Council
Y.W. Mai, The University of Sydney
K. L. Mak, The University of Hong Kong
Katta Murty, University of Michigan
M. Nagamachi, Kure National Institute of Technology
S. Nanthavanij, Thammasat University
Andrew Y. C. Nee, National University of Singapore
Hamid R. Parsaei, University of Louisville
K. V. Patri, City University of Hong Kong
B. Porter, The University of Hong Kong
E. S. Qi, Tianjin University
Gavriel Salvendy, Purdue University
Bruce W. Schmeiser, Purdue University
Elias Siores, The Ind. Research Institute Swinburne, Australia
Mitchell M. Tseng, The HK University of Science & Technology
S. K. Tso, City University of Hong Kong
M. J. Wang, National Tsing Hua University
D. J. Williams, Loughborough University
Chris H. C. Wong, The Hong Kong Polytechnic University
S. Zhang, Tongji University
Bernhard Zimolong, Ruhr-Universität Bochum

The Industrial Engineering Research is an academic journal normally published twice a year by the Institute of Industrial Engineers (Hong Kong) Ltd., G.P.O. Box 6635, Hong Kong. Copyright © by The Institute of Industrial Engineers (Hong Kong) Ltd. All rights reserved. Authors are themselves responsible for obtaining permission to reproduce copyright material from other sources. The Institute of Industrial Engineers (Hong Kong) Ltd. is not responsible for the views expressed by contributors. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the written permission of the Institute.

Industrial Engineering Research
An International Journal of IE Theory and Application
Volume 4, Number 2, September 2007

CONTENTS

62   Design of a Simulation Game in Teaching Supply Chain Dynamics
     Q. Wang, Edmond L.H. Choy, K.L. Choy and Steve Frankland

71   Optimization of Physical Distribution Problem in Logistics Management
     William Ho, Ping Ji and Pavel Albores

83   Parameterized Finite Element Method for Analysis of Heterogeneous Solids
     Y. Q. Guo, C. Y. Tang and B. Gao

93   Modification of Nano-SiO2 Particles with Silane Agent in Supercritical Carbon Dioxide
     D. Stojanović, G.D. Vuković, A.M. Orlović, P.S. Uskoković, R. Aleksić, N. Bibić, and M.D. Dramićanin

103  Reliability Comparison of Rigid Flex Printed Circuit using Various Materials and Design Build-ups
     S.Q. Huang and K.C. Yung

113  A Data Processing Algorithm for Digital 3D Motion Analysis
     C.P. Tsui, C.Y. Tang and Y.M. Wong

A Research Journal Published by the Institute of Industrial Engineers (Hong Kong)

Industrial Engineering Research, Vol. 4 (2) 62-70 (2007) © 2007 Institute of Industrial Engineers (Hong Kong)

ISSN 1027-2208

Design of a Simulation Game in Teaching Supply Chain Dynamics Q. Wang, Edmond L.H. Choy, K.L. Choy and Steve Frankland Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China.

ABSTRACT
The introductory concepts of supply chain management have been a fundamental topic in the teaching of logistics courses. However, the teaching methodology is generally plain and straightforward, and often not interesting for students. In this paper, a simulation game called the Supply Chain Dynamics Simulation Game (SCDG) is developed to aid teaching, so that students can grasp the knowledge through game manipulation. A specific teaching methodology is developed based on the game to guide students through the supply chain concepts and the related theories. Student feedback is collected through a survey for further improvement of the teaching method.

Keywords:

Supply Chain Management, Bullwhip Effect, Teaching, Education, Simulation Game

1. INTRODUCTION
Logistics is one of the four pillars of Hong Kong's economy. Having established its role as a main port connecting Asia with the rest of the world, Hong Kong has been actively improving its logistics and supply chain operations in order to better fulfill the various needs of today's ever-changing environment. Many universities in Hong Kong have established courses that major in, or are related to, logistics, so as to train and educate specialists in logistics and supply chain operations. The courses offered range from basic to advanced.


As a starting point to logistics, the introduction of supply chain management is a fundamental topic in all such courses. Obtaining a deep understanding of this topic provides a solid basis for future learning of more advanced topics. At the moment, most teaching of supply chain management is conducted through straightforward lectures (Lambert et al. 1997; Handfield & Nichols 1998) or case studies (Taylor 1997; Dornier et al. 1998). Teaching through simulation has been developing for a long time but is not widely adopted. The Beer Game is a traditional example (Sterman 1989, 1992), and many variations of it have been developed for teaching purposes (Jacobs 2000; Chen & Samroengraja 2000; Tiger et al. 2006). Other simulation games for teaching have also been developed (Mehring 2000; Campbell et al. 2000). However, these games are targeted at postgraduates or industry executives.

At present, there is no effective method for teaching introductory supply chain management to undergraduates, especially entry-level students. The main objective of this research is to develop a teaching method that is interesting, visual and easy to understand, so that it can be used in undergraduate classes for students to manipulate a supply chain, devise optimal inventory policies and learn the relevant material. The motivation comes from the lack of a suitable way to deliver such material through traditional lectures and pencil-and-paper exercises. Accordingly, a simulation game is developed. The game, the Supply Chain Dynamics Simulation Game (SCDG), provides a basic tool for learning through game manipulation. Figure 1 shows the difference between the traditional method and the proposed method. Moreover, the game is also used to demonstrate the "Bullwhip Effect" that is inherent in a supply chain (Forrester 1961; Dornier et al. 1998). In so doing, learning becomes more vivid and effective through practice. Students have a sense of enjoyment, so the material is more easily grasped.

Figure 1 – Comparison between Traditional method and Proposed method


2. THE DESIGN OF THE COURSE CONTENTS
The primary course sections to be taught are the "Supply Chain Dynamics" section of "Simulation of Industrial and Business Processes" and the "Logistics and Distribution" section of "Production Logistics". In both course sections, students are expected to establish a basic concept of the supply chain, manage a supply chain through effective inventory management by applying the knowledge learnt, and gain a new concept, the "Bullwhip Effect". To enable students to establish the concept of a supply chain and how it functions, a generic supply chain is designed for students to manipulate. As shown in Figure 2, the scenario involves three parties: Supplier, Warehouse and Retailer. Each forms an important entity along the supply chain. Inventory is kept at both the retailers and the warehouse. The retailer orders from the warehouse for inventory replenishment, while the warehouse replenishes its inventory by ordering from the supplier. The supplier manufactures the products. Customer demand fluctuates, and suitable inventory levels are necessary at both the warehouse and the retailers to maintain a satisfactory service level while at the same time maximizing net profit. Combined with appropriate forecasting of customer demand (retail orders), a specific inventory policy should be designed at the retailers (and the warehouse) so that the target is met. The "Bullwhip Effect" is taught to the students as a common phenomenon in the supply chain associated with inventory replenishment. As the courses are offered to undergraduates, the teaching should be interesting and easy to understand.

Figure 2 - Generic Supply Chain Scenario
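The dynamics this scenario is built to expose can be sketched in a few lines of code. The following toy model is not part of the SCDG itself; all parameter values, the order-up-to targets, and the moving-average forecasting rule are illustrative assumptions. It shows why order variability grows as one moves upstream from retailer to warehouse, which is exactly the "Bullwhip Effect" the game demonstrates:

```python
import random
import statistics

def simulate_chain(days=100, seed=1):
    """Toy retailer->warehouse chain illustrating the bullwhip effect.

    Each stage forecasts the demand it observes with a moving average and
    orders up to a stock target based on that forecast, which amplifies
    variability at each upstream echelon. Lead times are ignored for
    simplicity; all numbers here are illustrative, not from the paper.
    """
    random.seed(seed)
    demand = [random.randint(80, 120) for _ in range(days)]  # customer demand
    retail_orders, warehouse_orders = [], []
    r_stock, w_stock = 400, 1000
    window = 5
    for t, d in enumerate(demand):
        r_stock -= d
        # Retailer: order up to 4x the forecast demand
        forecast = statistics.mean(demand[max(0, t - window):t + 1])
        r_order = max(0, round(forecast * 4 - r_stock))
        retail_orders.append(r_order)
        r_stock += r_order
        # Warehouse: sees only retailer orders, targets 6x its own forecast
        w_stock -= r_order
        w_forecast = statistics.mean(retail_orders[max(0, t - window):t + 1])
        w_order = max(0, round(w_forecast * 6 - w_stock))
        warehouse_orders.append(w_order)
        w_stock += w_order
    return demand, retail_orders, warehouse_orders

d, r, w = simulate_chain()
print(statistics.stdev(d), statistics.stdev(r), statistics.stdev(w))
```

Running the sketch prints the standard deviations of customer demand, retailer orders and warehouse orders; the amplification from one echelon to the next is the bullwhip signature students should observe in the game.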

3. THE DESIGN OF THE SIMULATION GAME – SCDG
With the above proposed scenario, a simulation game, the SCDG, is designed to achieve our teaching target. Students will learn the supply chain through game manipulation and devise their inventory policies through experimental work. The "Bullwhip Effect" is taught to the students based on their observations during manipulation. The simulation game is designed following the steps below.

Step 1: Define the case scenario. The background of the whole story is a corporate company, Style Co. Ltd, which sells T-shirts and makes use of a central warehouse with three retailers for the distribution and sales of T-shirts. Each retailer makes a request to the warehouse when ordering stock for running the business. It is also necessary for the warehouse to place orders with their supplier when necessary


so that they can always provide stock to the retailers. The supply chain in this scenario can be local, if all three parties are located in the same region; or global, if one of the parties, say the supplier, is located overseas. In either case, the manipulation of the supply chain, in terms of its framework, is the same; however, the associated transportation and inventory costs and the delivery lead times differ.

Step 2: Define the role of students. They act as the Purchasing Manager of the company and decide upon an inventory policy (when and how much to order by both the warehouse and each of the three retailers) for a period of 100 days in order to minimize the total operating cost and maximize net profit. The goal is achieved through hands-on experiment while manipulating the supply chain in the given scenario.

Step 3: Assign the data to the scenario. The data assigned to the case scenario are shown in Table 1.

                                   Warehouse     Retailer
Purchasing Cost (per unit)         $25           $39
Ordering Cost (per order)          $150          $60
Holding Cost (per unit per year)   $2.15         $3.88
Lead time for placing an order     14 days       3 days
Initial Stock                      1000 units    200 units
Daily Demand                       -             Uniformly Distributed

Table 1 - Database of the SCDG

Step 4: Develop the simulation game. To make the game more interesting, the SCDG is developed using Macromedia Flash®. The interface of the game is shown in Figure 3.

Figure 3 - Interface of the Simulation Game


The interface is divided into four regions with the headings of Supply Chain Simulator, Cost Analysis, Descriptions, and Graphs:

(i) Supply Chain Simulator – This shows the separate entities and the process flow in a supply chain. Three parties are involved: supplier, warehouse, and retailers. Students are required to run the game for 100 days, and each day they are required to make decisions. Daily demands at the retailers are randomly generated by the simulator, and the retailers' stock levels are updated automatically. To place an order, the quantity is typed into the corresponding textbox. When an order is placed by one of the retailers to the warehouse – as the order quantity is entered into the textbox – a truck carrying the ordered quantity of goods appears, and after the lead time of three days the retailer's stock is replenished. Placing an order from the warehouse to the supplier works similarly, but the lead time is 14 days. If either the warehouse or a retailer is out of stock, a blinking warning signal appears to remind the corresponding party to replenish stock.

(ii) Cost Analysis – This shows the results after each day, including the original results and the updated results (after clicking the "OK" button).

(iii) Descriptions – These show the detailed calculations of Quantity Sold, Sales Value, Ordering Cost, Holding Cost, Gross Profit, Cost of Goods Sold and Net Profit.

(iv) Graphs – These show the changes in the total demand, the total stock level available at each retailer, the stock level at the warehouse and the net profit over the simulation period of 100 days. The graphs assist the students' observations of how the "Bullwhip Effect" manifests in the designed supply chain.
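When devising an ordering policy in this game, a natural starting point is the classic economic order quantity. The sketch below applies the EOQ formula to the retailer figures from Table 1; the mean daily demand of 50 units is an assumed value for illustration (the game draws daily demand from a uniform distribution), not a number given in the paper:

```python
import math

def eoq(annual_demand, ordering_cost, holding_cost):
    """Classic economic order quantity: Q* = sqrt(2 * D * S / H)."""
    return math.sqrt(2 * annual_demand * ordering_cost / holding_cost)

# Retailer figures from Table 1; mean daily demand of 50 units is assumed.
mean_daily_demand = 50
D = mean_daily_demand * 365            # annual demand (units)
S = 60                                 # ordering cost per order
H = 3.88                               # holding cost per unit per year
q_star = eoq(D, S, H)
reorder_point = 3 * mean_daily_demand  # lead time (3 days) x mean demand

print(round(q_star), reorder_point)    # -> 751 150
```

Students would still need to adjust such a policy by hand in the game, since the EOQ model ignores demand variability and the service-level target.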

4. TRIAL RUNNING THE SCDG IN UNDERGRADUATE CLASSES
The SCDG was trial run on two classes of undergraduate students: one class of Year 1 students studying "Production Logistics" and one class of Year 3 students studying "Simulation of Industrial & Business Processes", with 61 and 30 students respectively. The trial was conducted through two 3-hour classes. The procedure adopted for the teaching process is shown in Figure 4.


Figure 4 – Teaching Procedure based on the Simulation Game

Using the above teaching steps, students devote themselves to achieving the profit-maximization target by repeatedly trying to devise better inventory policies. During the manipulation process, they apply their knowledge and analyze the data obtained. Also, based on their observations, a concept new to them – the "Bullwhip Effect" – is taught. After their optimal inventory policy is obtained, they organize and present their findings. Therefore, besides applying and gaining knowledge, their analytical and presentation skills are improved, and a deeper understanding of the topic is achieved than would otherwise be the case. Furthermore, if other teaching methods were adopted, only part of the course contents discussed in Section 2 could be delivered.

5. DISCUSSION AND ANALYSIS
A questionnaire was given to the students at the end of the class to survey this particular teaching method. Seven questions were used to assess whether the SCDG is effective and whether any improvements can be made to the game. The questions are shown below:
1. Did you find the case study interesting and did you enjoy it?
2. Did you find that working on a realistic problem made it seem more relevant to your studies?
3. Did working in a group mean that you learned from each other?
4. Did you understand the concept of the topic better by using this computer simulation as compared with if it had been given by lecture and a pencil-and-paper exercise?
5. Considering the material you have learnt, do you think you have a deep understanding of the topic?
6. By considering this case study as an example, would you like to see more similar examples in the future?
7. Did you find the software user-friendly? If there are any areas that you think can be improved, please describe them below.


                 Q1      Q2      Q3      Q4      Q5      Q6      Q7
1 – not at all   0.0%    0.0%    2.4%    0.0%    0.0%    1.2%    0.0%
2                4.7%    7.1%    7.1%    5.9%    7.1%    2.4%    12.9%
3                29.4%   23.5%   22.4%   22.4%   35.3%   28.2%   31.7%
4                58.8%   60.0%   50.6%   49.4%   48.2%   49.4%   37.7%
5 – very much    7.1%    9.4%    17.7%   22.4%   9.4%    18.8%   17.7%

Table 2 – Results of Student Feedback from the Two Groups

Table 2 shows the students' opinions of the simulation package. The vast majority of students believe that the simulation package enabled them to gain a better understanding of the topic's concepts.

Figure 5 – Implications of the results

These results show that interactive learning tools are preferable. Figure 5 shows the implication of the results, and there is no doubt that visually aided features play an important role in the simulation package, as they catch the students' attention and enhance their learning progress.

      Q1       Q2       Q3       Q4       Q5       Q6       Q7
Q1    1
Q2    0.3929   1
Q3    0.3658   0.3332   1
Q4    0.5741   0.2797   0.2755   1
Q5    0.3989   0.2437   0.2263   0.5911   1
Q6    0.5624   0.3104   0.3503   0.5681   0.4578   1
Q7    0.3446   0.1540   0.1595   0.4044   0.3188   0.3162   1

Table 3 - Correlation Matrix among the questions

The correlation matrix (Table 3) shows that Q4 and Q5 are highly correlated (R = 0.5911), implying that most students developed a deeper understanding of the topic after experiencing the simulator. The results also show that students gain a better understanding if they are interested in the topic (as one would expect), with a correlation of 0.5741 between Q1 and Q4. These results indicate the usefulness of the simulator.

Test on the Reliability of the Experiment
H0: The experiment is reliable
H1: The experiment is not reliable
Rule of thumb on Cronbach's Alpha: the experiment is reliable if the alpha is greater than 0.7.

Cronbach's Alpha (α) = (85 × 0.3856) / (1 + (85 − 1) × 0.3856) = 0.9816
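The reliability computation above can be reproduced directly. The sketch below implements the standardized Cronbach's alpha formula that the calculation instantiates, first with the paper's own figures (n = 85, mean correlation 0.3856), then as a cross-check that treats the seven questions as the items, using the mean of the 21 off-diagonal correlations from Table 3:

```python
def standardized_alpha(n, mean_r):
    """Standardized Cronbach's alpha from the mean inter-item correlation."""
    return n * mean_r / (1 + (n - 1) * mean_r)

# Reproduce the figure reported above (n = 85, mean r = 0.3856).
print(round(standardized_alpha(85, 0.3856), 4))   # 0.9816

# Cross-check: treat the seven questions as the items, with the mean of the
# 21 off-diagonal correlations from Table 3 (listed here row by row).
lower_triangle = [
    0.3929,
    0.3658, 0.3332,
    0.5741, 0.2797, 0.2755,
    0.3989, 0.2437, 0.2263, 0.5911,
    0.5624, 0.3104, 0.3503, 0.5681, 0.4578,
    0.3446, 0.1540, 0.1595, 0.4044, 0.3188, 0.3162,
]
mean_r = sum(lower_triangle) / len(lower_triangle)
print(round(standardized_alpha(7, mean_r), 2))
```

The cross-check with seven items yields roughly 0.80, still above the 0.7 rule of thumb, though well below 0.9816; the reported value is driven by using the number of respondents as n.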


H0 is accepted at the 95% confidence level. This implies that the results are reliable and that the simulator is effective for teaching.

6. CONCLUSION
In this paper, a special method using an interactive simulation game, the SCDG, for teaching supply chain dynamics is presented. The concepts are difficult to present using traditional methods; the proposed teaching method provides a solution to this problem, and by running the SCDG the concepts become easier for students to grasp. The trial was run with two classes of students, and it was observed that students were able to utilize their knowledge and abilities to devise different inventory policies to achieve a predefined target. A questionnaire was distributed to the students, and according to the analyzed results, the proposed method is shown to be very effective and a real contribution to the learning and teaching of supply chain dynamics. The game and teaching method will continue to be adopted for teaching relevant supply chain courses, and more such simulation games are to be developed.

ACKNOWLEDGEMENT
The authors wish to thank the Learning and Teaching Committee of The Hong Kong Polytechnic University for supporting this project (Project Code: 491H).

REFERENCES

Campbell, A., Goentzel, J., and Savelsbergh, M. (2000). "Experiences with the use of supply chain management software in education", Production and Operations Management, Vol. 9, Iss. 1, pp. 66-80.

Chen, F. and Samroengraja, R. (2000). "The stationary beer game", Production and Operations Management, Vol. 9, Iss. 1, pp. 19-30.

Dornier, P-P., Ernst, R., Fender, M. and Kouvelis, P. (1998). Global Operations and Logistics – Text and Cases, John Wiley & Sons, New York.

Forrester, J. (1961). Industrial Dynamics, Productivity Press, Cambridge, Mass.

Handfield, R.B. and Nichols, E.L. (1998). Introduction to Supply Chain Management, Prentice Hall Press, New York.

Jacobs, R. (2000). "Playing the beer distribution game over the internet", Production and Operations Management, Vol. 9, Iss. 1, pp. 31-39.

Lambert, D.M., Stock, J.R., Ellram, L.M., and Stockdale, J. (1997). Fundamentals of Logistics Management, McGraw-Hill, New York.

Mehring, J.S. (2000). "A practical setting for experiential learning about supply chains: Siemens brief case game supply chain simulator", Production and Operations Management, Vol. 9, Iss. 1, pp. 56-65.

Sterman, J.D. (1989). "Modeling managerial behavior: misperceptions of feedback in a dynamic decision making experiment", Management Science, Vol. 35, No. 3, pp. 321-339.

Sterman, J.D. (1992). "Teaching takes off: flight simulators for management education", OR/MS Today, October, pp. 40-43.

Taylor, D. (1997). Global Cases in Logistics and Supply Chain Management, International Thomson Business Press, New York.

Tiger, A.A., Benco, D.C., and Fogle, C. (2006). "Teaching the importance of information, supply chain management, and modeling: the spreadsheet beer-like game", Issues in Information Systems, Vol. VII, No. 1, pp. 108-113.


Industrial Engineering Research, Vol. 4 (2) 71-82 (2007) © 2007 Institute of Industrial Engineers (Hong Kong)

ISSN 1027-2208

Optimization of Physical Distribution Problem in Logistics Management

William Ho 1,*, Ping Ji 2 and Pavel Albores 1

1 Operations and Information Management Group, Aston Business School, Aston University, Birmingham B4 7ET, United Kingdom. *E-mail: [email protected]
2 Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong

ABSTRACT
The physical distribution problem plays an important role in contemporary logistics management. Both customer satisfaction and company competitiveness can be enhanced if the distribution problem is solved optimally. The multi-depot vehicle routing problem (MDVRP) is a practical logistics distribution problem that consists of three critical issues: customer assignment, customer routing, and vehicle sequencing. According to the literature, the solution approaches for the MDVRP are not satisfactory because some unrealistic assumptions were made in the first sub-problem of the MDVRP, the customer assignment problem. To refine the approaches, the focus of this paper is confined to this sub-problem only. The paper formulates the customer assignment problem as a minimax-type integer linear programming model with the objective of minimizing the cycle time of the depots, where setup times are explicitly considered. Since the model is proven to be NP-complete, a genetic algorithm is developed to solve the problem. The efficiency and effectiveness of the genetic algorithm are illustrated by a numerical example.

Keywords: Logistics management; Physical distribution problem; Mathematical modeling; Genetic algorithm.


1. INTRODUCTION

The single-depot vehicle routing problem, or simply the VRP, has been studied extensively because it is found to be widely applicable to many real-world situations, including the logistics distribution problem. Although it has attracted so much attention, the VRP is not suitable for cases where a company has more than one depot. For this reason, the multi-depot VRP (MDVRP) is a more realistic and practical formulation of the problem.

Consider a distribution company with several depots. The number and locations of the depots are predetermined. Each depot is assumed to be large enough to store all the products ordered by the customers. A fleet of vehicles with limited capacity is used to transport the products from depots to customers. Each vehicle starts and finishes at the same depot. The location and demand of each customer are also known in advance. Each customer is visited by a vehicle exactly once. This practical distribution problem can be regarded as the MDVRP, in which there are three interrelated decisions. The decision makers first need to cluster the customers to be served by the same depot (i.e., the customer assignment problem). They then have to assign the customers of each depot to several routes (i.e., the customer routing problem) so that the vehicle capacity constraint is not violated. Finally, the decision on the delivery sequence of each route (i.e., the vehicle sequencing problem) is made. Because there is a single depot in the VRP, only the customer routing and vehicle sequencing problems arise there. Research on the MDVRP is less plentiful than on the VRP, possibly because the MDVRP is more challenging and sophisticated. According to the literature [1–9], there are two common points among the approaches adopted by previous researchers. First, due to the complexity of the problem, solving the MDVRP to optimality is extremely time-consuming; to tackle the problem efficiently, all previous researchers preferred heuristic methods to exact algorithms.
Sumichrast and Markham [1] developed a heuristic approach based on the Clarke and Wright savings method to solve the MDVRP. Renaud et al. [2] adopted a heuristic method that first constructed an initial feasible solution and then improved it using tabu search. Salhi and Sari [3] proposed a heuristic method with three levels: the first level constructed an initial feasible solution, while the second and third levels improved the routes within each depot (intra-depot) and across all depots (inter-depot), respectively. Hadjiconstantinou and Baldacci [4] used a heuristic method to solve the multi-depot period VRP, in which the customers are served over a period of time rather than one day. Su [5] proposed a dynamic vehicle control and scheduling system to solve the MDVRP, in which all control decisions were made according to the real-time status of the system, such as the location, quantity, and due date of the demand. Giosa et al. [6] designed and compared six heuristics for the multi-depot VRP with time windows (MDVRPTW), in which the vehicles must arrive at the customers before the latest arrival time, while arriving before the earliest arrival time results in waiting. Wu et al. [7] studied the


multi-depot location-routing problem (MDLRP), which is an extension of the MDVRP. The MDLRP was decomposed into the location-allocation problem and the VRP, which were then solved sequentially and iteratively using simulated annealing. The major difference between the MDLRP and the MDVRP is that the former also determines the number and locations of the depots. Similar to Wu et al. [7], Wasner and Zäpfel [8] also studied the MDLRP, for the planning of a parcel service; a heuristic method based on local search with a series of feedback loops was developed to solve the problem. Nagy and Salhi [9] presented a number of heuristic methods to solve the single-depot VRP with pickups and deliveries (VRPPD), in which customers may both receive and send products; the methods can be modified to tackle the multi-depot VRPPD (MDVRPPD). The second common point is that the MDVRP was decomposed into three individual problems (i.e., the customer assignment problem, the customer routing problem, and the vehicle sequencing problem), each of which was then solved sequentially and iteratively. Two assumptions are usually made for the customer assignment problem. First, each depot is assumed to be large enough to store all the products ordered by the customers; each customer can, therefore, be assigned to a single depot. This assumption greatly simplifies the problem. Second, the setup time of each depot is assumed to be zero. In real-world situations, however, the setup times required by various depots are not the same, because the productivity or efficiency of the depots differs. Because previous researchers have solved the customer assignment problem under these two unrealistic assumptions, we argue that the problem has not been studied thoroughly.
In this paper, the customer assignment problem, in which both the depot capacity constraint and the setup time are considered, is formulated as a minimax-type integer linear programming model. The model is different from the well-known transportation model or assignment model, and no special algorithms exist to solve it. We will show that the minimax-type mathematical model cannot be solved easily by a general exact algorithm, such as the branch-and-bound method; even when it can be solved, obtaining an optimal solution takes a long time. A genetic algorithm is therefore developed to find a near-optimal solution to the problem. Finally, we demonstrate a numerical example to illustrate the minimax-type mathematical model and the efficiency and effectiveness of the proposed genetic algorithm.

2. A MATHEMATICAL MODEL

A distribution company may have several depots with limited capacity. The number and locations of the depots are predetermined. The company receives customer orders for many distinct products every day. The location and demand of each customer are also known in advance. The schedulers have to determine which customer is assigned to which depot, and also the quantity of the product to be assigned


to the depot so that the best depot performance in terms of the cycle time can be achieved. Products ordered by a particular customer may be stored in one depot only, or in more than one depot, depending on the volume of the customer order and the capacity of the depots. For the MDVRP, the objective is generally to minimize the summation of the delivery times spent by each depot [1–9]. However, the delivery operations start at the same time in every depot, and each depot takes a different time to serve its cluster of customers: vehicles in some depots may finish their operations sooner, while others take longer to complete. The longest time among the depots dominates the time needed to deliver the products to all customers. The objective of the customer assignment problem should, therefore, be to minimize the processing time of the depot with the largest delivery time, including the depot setup time, that is, the cycle time.

Suppose that there are m depots in a logistics distribution company, and n customers are going to be served by the depots. Depot i has a maximum capacity of ci available for storing the products, while customer j has a volume requirement of dj. It requires tij time per unit to deliver if customer j is assigned to depot i. If these m depots are assigned to serve customer j, the times required by the m depots to deliver one unit of the product are not identical; the amount of tij depends on the distance between depot i and customer j. By introducing si to denote the setup time of depot i and yi to indicate whether depot i is assigned some demand, the above customer assignment problem with the objective of minimizing the cycle time can be formulated as

    Minimize  z = max_{i = 1, 2, ..., m} ( si yi + Σ_{j=1}^{n} tij xij )      (1)

subject to

    Σ_{j=1}^{n} xij ≤ ci yi    for all i                                       (2)

    Σ_{i=1}^{m} xij = dj       for all j                                       (3)

    All xij ≥ 0 and integer; all yi = 0 or 1.                                  (M1)

The decision variables x_ij indicate the number of products ordered by customer j that are assigned to depot i. The objective function (1) minimizes the processing time for the depot with the largest delivery time, including the depot setup time, that is, the cycle time; as usual, the cycle time is defined as the maximum delivery time among all the depots. Constraint set (2) reflects the limited available capacity: the amount of products assigned to each depot must be within its fixed capacity. Constraint set (3) guarantees that the products ordered by all customers will be delivered. Formulation M1 is a minimax-type integer linear programming model. The complexity of minimax-type problems was discussed by Yu and Kouvelis [10], who proved that the


Optimization of Physical Distribution Problem in Logistics Management

minimax assignment problem is NP-hard. The data structure of M1 is similar to, but not exactly the same as, that of the minimax assignment problem. In fact, M1 is more difficult than the minimax assignment problem in two respects. First, the decision variables x_ij in the minimax assignment problem must be either 0 or 1, and the number of jobs must equal the number of tasks (i.e., n = m); in M1, the decision variables can take any non-negative integer value, and n need not equal m. Second, the objective of the minimax assignment problem is to minimize the maximum x_ij only, whereas the objective of M1 is to minimize the maximum of the summation of t_ij x_ij plus the setup component s_i y_i. M1 is, therefore, a typical general integer linear programming model, and it is NP-complete because Papadimitriou proved that general integer linear programming is NP-complete [11]. To deal with the problem efficiently, a genetic algorithm (GA) is developed in this paper to yield a near-optimal integer solution.

3. A GENETIC ALGORITHM

The genetic algorithm (GA), developed by John Holland in the 1960s, is a stochastic optimization

technique. Similar to other meta-heuristics such as simulated annealing (SA) and tabu search (TS), GA can avoid getting trapped in a local optimum with the aid of the mutation operation. The basic idea of GA is to maintain a population of candidate solutions that evolves under selective pressure. Hence, it can be viewed as a class of local search based on a solution-generation mechanism operating on the attributes of a set of solutions, rather than on the attributes of a single solution as in the move-generation mechanisms of local search methods such as SA and TS [12]. In recent years, GA has been applied successfully to a wide variety of hard optimization problems, such as the traveling salesman problem and the quadratic assignment problem [13-14]. The success is mainly due to its simplicity, easy operation, and great flexibility; these are the major reasons why GA is selected as the optimization tool here.

GA starts with an initial set of random solutions, called a population. Each solution in the population is called a chromosome, which represents a point in the search space. The chromosomes evolve through successive iterations, called generations. During each generation, the chromosomes are evaluated using some measure of fitness. The fitter the chromosomes, the higher their probabilities of being selected to perform the genetic operations: crossover and mutation. In the crossover phase, the GA attempts to exchange portions of two parents (i.e., two chromosomes in the population) to generate an offspring; the crossover operation speeds up the process of reaching better solutions. In the mutation phase, the mutation operation maintains the diversity of the population to avoid entrapment in a local optimum. A new generation is formed by selecting some parents and some offspring according to their fitness values, and by rejecting others so as to keep the population size constant. After the predetermined number of generations is performed, the algorithm converges to the best chromosome, which hopefully represents the optimal solution, or at least a near-optimal solution, of the problem.


In the GA developed for the customer assignment problem, a chromosome is represented by a matrix, which itself is also a solution of the problem. So, there is no difference between chromosomes (genotypes) and phenotypes in this GA.

3.1. Initialization

The following initialization procedure is used to generate a feasible chromosome for the customer assignment problem represented by model M1.

Step 1: Select a random number k from the set π = {1, 2, ..., mn}.

Step 2: Calculate the corresponding row and column numbers i and j: i = ⌊(k − 1)/n⌋ + 1 and j = (k − 1) mod n + 1.

Step 3: Assign the available amount of units to x_ij: x_ij = min(c_i, d_j).

Step 4: Update c_i and d_j: c_i = c_i − x_ij; d_j = d_j − x_ij; and delete k from π.

Step 5: Repeat Step 1 to Step 4 until π becomes empty.

The above initialization procedure is repeated psize (population size) times to generate psize chromosomes for the problem. All chromosomes generated from the above steps are in matrix form. Because they satisfy constraint sets (2) and (3), they are feasible, though not necessarily optimal, solutions of the customer assignment problem.
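As a concrete illustration, the initialization procedure above can be sketched in Python (the function name and argument layout are ours, not from the paper, and indices are 0-based where the paper's are 1-based):

```python
import random

def initialize_chromosome(capacity, demand, m, n, rng=None):
    """Generate one feasible chromosome (an m-by-n assignment matrix) for model M1
    using the initialization procedure of Section 3.1 (0-based indices)."""
    rng = rng or random.Random()
    c = list(capacity)                      # remaining depot capacities c_i
    d = list(demand)                        # remaining customer demands d_j
    x = [[0] * n for _ in range(m)]
    pi = list(range(1, m * n + 1))          # the index set pi = {1, ..., mn}
    while pi:                               # Step 5: repeat until pi is empty
        k = pi.pop(rng.randrange(len(pi)))  # Step 1: pick a random k and delete it
        i = (k - 1) // n                    # Step 2: row (depot) index
        j = (k - 1) % n                     #         column (customer) index
        x[i][j] = min(c[i], d[j])           # Step 3: assign what is available
        c[i] -= x[i][j]                     # Step 4: update c_i and d_j
        d[j] -= x[i][j]
    return x
```

With the data of Table 1, where total capacity is not less than total demand, every chromosome produced this way satisfies constraints (2) and (3).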

3.2. Evaluation and selection

In a GA, both parent and offspring chromosomes must be evaluated by some measure of fitness. In the customer assignment problem, the objective function (1) in M1 is used to measure the fitness. Let eval(X_h) be the fitness function for chromosome X_h (h = 1, 2, …, psize); then the fitness function for the problem is

eval(X_h) = max_{i = 1, 2, …, m} ( s_i y_i + Σ_{j=1}^{n} t_ij x^h_ij )    (4)

A selection procedure is required to choose the chromosomes that undergo the genetic operations. The probability of being selected is directly proportional to the chromosome's fitness. The selection procedure is as follows:

Step 1: Calculate the total fitness of the population: F = Σ_{h=1}^{psize} eval(X_h).

Step 2: Calculate the selection probability p_h for each chromosome X_h: p_h = [F − eval(X_h)] / [F × (psize − 1)], h = 1, 2, ..., psize.

Step 3: Calculate the cumulative probability q_h for each chromosome X_h: q_h = Σ_{j=1}^{h} p_j, h = 1, 2, …, psize.

Step 4: Generate a random number r in the range (0, 1].

Step 5: If q_{h−1} < r ≤ q_h (with q_0 = 0), then chromosome X_h is selected.
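The five selection steps can be sketched in Python as follows (an illustrative implementation with names of our choosing; note that because the objective is minimized, a smaller eval(X_h) yields a larger selection probability):

```python
import random

def select_index(evals, rng=None):
    """Roulette-wheel selection of Section 3.2 for a minimization problem:
    chromosome h is selected with probability p_h = (F - eval(X_h)) / (F*(psize-1)),
    so smaller (fitter) objective values are favoured."""
    rng = rng or random.Random()
    psize = len(evals)
    F = sum(evals)                                    # Step 1: total fitness
    p = [(F - e) / (F * (psize - 1)) for e in evals]  # Step 2: probabilities (sum to 1)
    r = rng.random()                                  # Step 4: r in [0, 1)
    q = 0.0
    for h, ph in enumerate(p):                        # Steps 3 and 5: cumulative q_h
        q += ph
        if r <= q:
            return h
    return psize - 1                                  # guard against round-off
```

Over many draws, the chromosome with the smallest eval(X_h) is returned most often.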

3.3. Genetic operations

The genetic search progress is driven by two essential genetic operations: the crossover

operator, which exploits better solutions, and the mutation operator, which explores a wider search space. Because the chromosomes are represented in matrix form rather than in binary or path form, general genetic operations, such as order crossover and inversion mutation, cannot be applied; tailor-made crossover and mutation operators are developed instead. The number of chromosomes selected to undergo the crossover and mutation operators depends on the crossover rate and the mutation rate, which are set by the GA user. Let crossno and mut denote the number of chromosomes selected to undergo crossover and mutation, respectively; then crossno = round(cr × psize) and mut = round(mr × psize), where cr is the crossover rate and mr is the mutation rate. Because a pair of chromosomes is required for the crossover operation, the number of pairs of chromosomes, denoted cross, is an integer: cross = crossno / 2 if crossno is even, and cross = (crossno − 1) / 2 if crossno is odd.

There is no firm guideline for selecting the crossover rate and the mutation rate. A higher crossover rate allows exploration of more of the solution space and reduces the chance of settling at a false optimum; but if the rate is too high, much computation time is wasted exploring unpromising regions of the solution space. On the other hand, if the mutation rate is too low, many genes that would have been useful are never tried out; if it is too high, there is much random perturbation, the offspring lose their resemblance to their parents, and the algorithm loses the ability to learn from the history of the search [14].

Crossover operation:

Step 1: Implement the selection procedure in Section 3.2 (Step 1 to Step 5) twice to choose a pair of chromosomes, X1 = (x^1_ij) and X2 = (x^2_ij), from the population to perform the crossover operation.

Step 2: Create two m × n temporary matrices, D = (d_ij) and R = (r_ij), as follows: d_ij = ⌊(x^1_ij + x^2_ij) / 2⌋ and r_ij = (x^1_ij + x^2_ij) mod 2.


Step 3: Divide matrix R into two matrices R1 = (r^1_ij) and R2 = (r^2_ij) so that R = R1 + R2 and

Σ_{j=1}^{n} r^1_ij = Σ_{j=1}^{n} r^2_ij = (1/2) Σ_{j=1}^{n} r_ij    for i = 1, 2, ..., m;

Σ_{i=1}^{m} r^1_ij = Σ_{i=1}^{m} r^2_ij = (1/2) Σ_{i=1}^{m} r_ij    for j = 1, 2, ..., n.

Step 4: Produce two offspring, X1' and X2', as follows: X1' = D + R1 and X2' = D + R2.

The offspring generated from the crossover operation are still feasible to M1. Note that x^1_ij + x^2_ij = 2d_ij + r_ij, where r_ij = 0 if (x^1_ij + x^2_ij) is even and r_ij = 1 if it is odd. On the other hand, for any column j, Σ_{i=1}^{m} x^1_ij = Σ_{i=1}^{m} x^2_ij = d_j; therefore 2Σ_{i=1}^{m} x^1_ij = 2Σ_{i=1}^{m} d_ij + Σ_{i=1}^{m} r_ij, that is, Σ_{i=1}^{m} x^1_ij = Σ_{i=1}^{m} d_ij + (1/2)Σ_{i=1}^{m} r_ij. It can therefore be concluded that each offspring (X1' or X2') is feasible to M1. Moreover, Σ_{i=1}^{m} r_ij (j = 1, 2, …, n) must be an even number: because both Σ_{i=1}^{m} x^1_ij and Σ_{i=1}^{m} d_ij are integers, (1/2)Σ_{i=1}^{m} r_ij has to be an integer.
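A simplified Python sketch of Steps 2 to 4 is given below. It halves each column of R between R1 and R2, which is always possible because the column sums of R are even, as just shown; this preserves the demand constraints (3) exactly, although a full implementation would also balance the rows of R so that the capacity constraints (2) are respected. The function name is illustrative.

```python
def crossover(X1, X2):
    """Matrix crossover of Section 3.3 (column-balancing sketch):
    D = floor((X1+X2)/2), R = (X1+X2) mod 2, and the 1-entries of each
    column of R are split evenly between R1 and R2."""
    m, n = len(X1), len(X1[0])
    D = [[(X1[i][j] + X2[i][j]) // 2 for j in range(n)] for i in range(m)]
    R1 = [[0] * n for _ in range(m)]
    R2 = [[0] * n for _ in range(m)]
    for j in range(n):
        ones = [i for i in range(m) if (X1[i][j] + X2[i][j]) % 2 == 1]
        half = len(ones) // 2        # len(ones) is even, as shown in the text
        for i in ones[:half]:
            R1[i][j] = 1
        for i in ones[half:]:
            R2[i][j] = 1
    C1 = [[D[i][j] + R1[i][j] for j in range(n)] for i in range(m)]  # X1' = D + R1
    C2 = [[D[i][j] + R2[i][j] for j in range(n)] for i in range(m)]  # X2' = D + R2
    return C1, C2
```

For example, crossing the two solutions of Tables 2 and 3 yields offspring whose column sums still equal the customer demands.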

Mutation operation:

Step 1: Implement the selection procedure to select a chromosome.

Step 2: Extract a submatrix Y from the parent matrix by randomly selecting m rows and n columns.

Step 3: Reallocate the submatrix Y: use the initialization procedure in Section 3.1 (Step 1 to Step 5) to assign new values to the submatrix so that all constraints remain satisfied.

Step 4: Create an offspring by replacing the appropriate elements of the parent matrix with the new elements from the reallocated submatrix Y.
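The mutation steps can be sketched as below (Python, with names of our choosing). Reallocating the submatrix with the greedy procedure of Section 3.1, while giving it exactly the capacity and demand that the block currently consumes, keeps the row and column sums of the full matrix unchanged, so the offspring remains feasible:

```python
import random

def mutate(X, rows, cols, rng=None):
    """Mutation of Section 3.3 (sketch): reallocate the submatrix of X on the
    chosen rows and columns with the greedy procedure of Section 3.1."""
    rng = rng or random.Random()
    sub_c = {i: sum(X[i][j] for j in cols) for i in rows}  # capacity used by the block
    sub_d = {j: sum(X[i][j] for i in rows) for j in cols}  # demand met by the block
    cells = [(i, j) for i in rows for j in cols]
    rng.shuffle(cells)                                     # random visiting order
    Y = [row[:] for row in X]                              # offspring = copy of parent
    for i, j in cells:
        Y[i][j] = min(sub_c[i], sub_d[j])                  # greedy refill
        sub_c[i] -= Y[i][j]
        sub_d[j] -= Y[i][j]
    return Y
```

Applied, for instance, to the solution of Table 2, the mutated matrix keeps every row and column total of the parent.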

3.4. Algorithm

The procedure of the GA for the customer assignment problem is as follows:

Step 1: Set the GA parameters: the population size (psize), the number of iterations (itno), the crossover rate (cr), and the mutation rate (mr).

Step 2: Generate psize initial chromosomes using the initialization procedure discussed in Section 3.1.

Step 3: Evaluate the fitness value eval(X_h) for all chromosomes in the population, as addressed in Section 3.2.

Step 4: Follow the selection procedure in Section 3.2 to select chromosomes to perform the crossover operation in Section 3.3.

Step 5: Follow the selection procedure to select chromosomes to perform the mutation operation in Section 3.3.

Step 6: Compare all offspring, including the chromosomes generated from both crossover and mutation operations, with the chromosomes in the population by the fitness values obtained from Eq. (4). Retain the best psize chromosomes in the population.

Step 7: Determine the best chromosome, that is, min{eval(X_h), h = 1, 2, …, psize}, for each generation. Repeat Step 4 to Step 7 until itno iterations are performed.
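The whole loop can be condensed into a self-contained Python sketch. For brevity it uses mutation only (the crossover operator of Section 3.3 is omitted) and elitist survivor selection, so it illustrates the control flow rather than reproducing the paper's exact algorithm; all function and parameter names are ours.

```python
import random

def cycle_time(X, setup, t):
    # objective (1): max over depots of s_i*y_i + sum_j t_ij*x_ij
    return max((setup[i] if any(X[i]) else 0) +
               sum(tij * xij for tij, xij in zip(t[i], X[i]))
               for i in range(len(X)))

def greedy_fill(X, rows, cols, c, d, rng):
    # Section 3.1 greedy: visit the cells in random order, assign min(c_i, d_j)
    cells = [(i, j) for i in rows for j in cols]
    rng.shuffle(cells)
    for i, j in cells:
        X[i][j] = min(c[i], d[j])
        c[i] -= X[i][j]
        d[j] -= X[i][j]

def run_ga(cap, dem, setup, t, psize=25, itno=300, mr=0.3, rng=None):
    rng = rng or random.Random(0)
    m, n = len(cap), len(dem)
    pop = []
    for _ in range(psize):                    # Step 2: initial population
        X = [[0] * n for _ in range(m)]
        greedy_fill(X, range(m), range(n), list(cap), list(dem), rng)
        pop.append(X)
    for _ in range(itno):                     # Steps 4-7 (mutation only here)
        for _ in range(round(mr * psize)):
            Y = [row[:] for row in rng.choice(pop)]
            cols = rng.sample(range(n), 3)    # submatrix: all rows, 3 random columns
            sub_c = [sum(Y[i][j] for j in cols) for i in range(m)]
            sub_d = {j: sum(Y[i][j] for i in range(m)) for j in cols}
            greedy_fill(Y, range(m), cols, sub_c, sub_d, rng)
            pop.append(Y)
        pop = sorted(pop, key=lambda X: cycle_time(X, setup, t))[:psize]  # Step 6
    return pop[0], cycle_time(pop[0], setup, t)  # Step 7: best chromosome
```

Any solution this sketch returns satisfies constraints (2) and (3) by construction, since both initialization and mutation preserve feasibility.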

4. NUMERICAL EXAMPLE

The scenario in Table 1 is used to illustrate how the GA works. The minimax-type integer linear programming model for the customer assignment problem is formulated as

Minimize  max{ 3000y_1 + 2x_11 + x_12 + 4x_13 + 3x_14 + 3x_15 + 7x_16 + 6x_17;
               2500y_2 + 4x_21 + 2x_22 + 4x_23 + 3x_24 + x_25 + 4x_26 + 3x_27;
               2000y_3 + 8x_31 + 6x_32 + 5x_33 + 4x_34 + 3x_35 + x_36 + x_37 }

subject to

x_11 + x_12 + x_13 + x_14 + x_15 + x_16 + x_17 ≤ 25,000y_1
x_21 + x_22 + x_23 + x_24 + x_25 + x_26 + x_27 ≤ 20,000y_2
x_31 + x_32 + x_33 + x_34 + x_35 + x_36 + x_37 ≤ 18,000y_3
x_11 + x_21 + x_31 = 12,000
x_12 + x_22 + x_32 = 9,000
x_13 + x_23 + x_33 = 10,000
x_14 + x_24 + x_34 = 8,000
x_15 + x_25 + x_35 = 6,000
x_16 + x_26 + x_36 = 11,000
x_17 + x_27 + x_37 = 7,000

All x_ij are non-negative integers; all y_i ∈ {0, 1}.    (M2)

In the GA, the parameters are set as follows: psize = 25, itno = 300, cr = 0.4, and mr = 0.3. So, cross = 5, and mut = 8. Here, a relatively high mutation rate is applied so that more new genes can be introduced into the population. Fig. 1 shows the best cycle time obtained at each iteration. After 300 iterations, the best solution is shown in Table 2 with the cycle time of 49,000.


From Fig. 1, it is noticed that the objective value drops sharply during the first 25 iterations. Because the population size is small (only 25), the GA can produce only some not-so-good chromosomes at the beginning; later, it quickly generates good offspring from the highly fit parents. This phenomenon is called rapid convergence. After the objective reaches its best value (i.e., 49,000), the improvement rate decreases quickly. The best solution in Table 2 is not optimal; the optimal solution, shown in Table 3, has a cycle time of 48,000. Although the GA cannot generate the optimal solution, the best solution obtained has an error of only about 2%, which is acceptable in real-world situations.

Table 1 Numerical data of the customer assignment problem

Depot i | Delivery time t_ij per unit to customer j | Capacity c_i | Setup time s_i
        |    1    2    3    4    5    6    7        |              |
   1    |    2    1    4    3    3    7    6        |   25,000     |   3,000
   2    |    4    2    4    3    1    4    3        |   20,000     |   2,500
   3    |    8    6    5    4    3    1    1        |   18,000     |   2,000

Number of products ordered d_j: 12,000  9,000  10,000  8,000  6,000  11,000  7,000

Fig. 1. The best cycle time obtained at each iteration (cycle time axis: 49,000 to 57,000; iteration number axis: 0 to 300).


Table 2 The best solution after 300 iterations

x_11 = 12,000   x_12 = 9,000   x_13 = 0       x_14 = 4,000   x_15 = 0       x_16 = 0        x_17 = 0
x_21 = 0        x_22 = 0       x_23 = 4,500   x_24 = 4,000   x_25 = 6,000   x_26 = 0        x_27 = 3,500
x_31 = 0        x_32 = 0       x_33 = 5,500   x_34 = 0       x_35 = 0       x_36 = 11,000   x_37 = 3,500

Table 3 The optimal solution

x_11 = 12,000   x_12 = 9,000   x_13 = 3,000   x_14 = 0       x_15 = 0       x_16 = 0        x_17 = 0
x_21 = 0        x_22 = 0       x_23 = 500     x_24 = 8,000   x_25 = 6,000   x_26 = 0        x_27 = 4,500
x_31 = 0        x_32 = 0       x_33 = 6,500   x_34 = 0       x_35 = 0       x_36 = 11,000   x_37 = 2,500
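The reported cycle times can be checked directly by evaluating objective (1) with the Table 1 data for the two tabulated solutions (a quick Python arithmetic check; since every depot is used in both solutions, each setup time is always counted):

```python
# Delivery times t_ij and setup times s_i from Table 1
setup = [3000, 2500, 2000]
t = [[2, 1, 4, 3, 3, 7, 6],
     [4, 2, 4, 3, 1, 4, 3],
     [8, 6, 5, 4, 3, 1, 1]]

def cycle_time(X):
    # objective (1): max over depots of s_i + sum_j t_ij * x_ij
    return max(setup[i] + sum(tij * xij for tij, xij in zip(t[i], X[i]))
               for i in range(3))

best = [[12000, 9000, 0, 4000, 0, 0, 0],        # Table 2
        [0, 0, 4500, 4000, 6000, 0, 3500],
        [0, 0, 5500, 0, 0, 11000, 3500]]
optimal = [[12000, 9000, 3000, 0, 0, 0, 0],     # Table 3
           [0, 0, 500, 8000, 6000, 0, 4500],
           [0, 0, 6500, 0, 0, 11000, 2500]]
print(cycle_time(best), cycle_time(optimal))    # prints 49000 48000
```

For the Table 2 solution, the depot completion times are 48,000, 49,000, and 44,000, so the cycle time is 49,000; for Table 3 all three depots finish at exactly 48,000.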

5. CONCLUSIONS

This paper studied only the customer assignment problem of the MDVRP, because the approaches proposed by previous researchers rest on some unrealistic assumptions about it. To refine these approaches, a minimax-type integer linear programming model was formulated for the problem, with the objective of minimizing the processing time of the depot with the maximum delivery and setup times, that is, the cycle time. Because the model is NP-complete, a genetic algorithm was developed to yield a near-optimal solution. A numerical example was presented to illustrate both the mathematical model and the efficiency of the genetic algorithm. Although the genetic algorithm did not generate the optimal solution of the example, the error of the best solution obtained is small and acceptable.

REFERENCES

[1] Sumichrast RT, Markham IS. A heuristic and lower bound for a multi-depot routing problem. Computers & Operations Research 1995; 22: 1047-1056.
[2] Renaud J, Laporte G, Boctor FF. A tabu search heuristic for the multi-depot vehicle routing problem. Computers & Operations Research 1996; 23: 229-235.
[3] Salhi S, Sari M. A multi-level composite heuristic for the multi-depot vehicle fleet mix problem. European Journal of Operational Research 1997; 103: 95-112.
[4] Hadjiconstantinou E, Baldacci R. A multi-depot period vehicle routing problem arising in the utilities sector. Journal of the Operational Research Society 1998; 49: 1239-1248.
[5] Su CT. Dynamic vehicle control and scheduling of a multi-depot physical distribution system. Integrated Manufacturing Systems 1999; 10: 56-65.
[6] Giosa ID, Tansini IL, Viera IO. New assignment algorithms for the multi-depot vehicle routing problem. Journal of the Operational Research Society 2002; 53: 977-984.
[7] Wu TH, Low C, Bai JW. Heuristic solutions to multi-depot location-routing problem. Computers & Operations Research 2002; 29: 1393-1415.
[8] Wasner M, Zäpfel G. An integrated multi-depot hub-location vehicle routing model for network planning of parcel service. International Journal of Production Economics 2004; 90: 403-419.
[9] Nagy G, Salhi S. Heuristic algorithms for the single and multiple depot vehicle routing problems with pickups and deliveries. European Journal of Operational Research 2005; 162: 126-141.
[10] Yu G, Kouvelis P. Complexity results for a class of min-max problems with robust optimization applications. In: Pardalos PM, editor. Complexity in Numerical Optimization. Singapore: World Scientific Publishing; 1993. p. 501-511.
[11] Papadimitriou CH. On the complexity of integer programming. Journal of the Association for Computing Machinery 1981; 28: 765-768.
[12] Osman IH, Kelly JP. Meta-heuristics: theory & applications. Boston: Kluwer Academic Publishers; 1996.
[13] Goldberg DE. Genetic algorithms in search, optimization and machine learning. New York: Addison-Wesley; 1989.
[14] Gen M, Cheng R. Genetic algorithms and engineering design. New York: Wiley; 1997.


Industrial Engineering Research, Vol. 4 (2) 83-92 (2007) © 2007 Institute of Industrial Engineers (Hong Kong)

ISSN 1027-2208

Parameterized Finite Element Method for Analysis of Heterogeneous Solids

Y. Q. Guo¹,*, C. Y. Tang¹ and B. Gao²

¹ Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, P. R. China
² Department of Prosthodontics, College of Stomatology, The Fourth Military Medical University, Xi'an, 710032, P. R. China
* Corresponding author. E-mail: [email protected]

ABSTRACT

Titanium/hydroxyapatite (Ti/HAp) biocomposites, which have been used for prostheses and implants in orthopedics, are heterogeneous. A mathematical description of these materials is very useful for developing new biomedical devices. In this paper, a parameterized finite element method capable of describing spatially varying material properties has been developed based on the commercial finite element code ABAQUS. Using user-defined material subroutines, the effects of heterogeneity on stress concentration and load-bearing capability have also been analyzed. The computational results indicate that the method can produce useful information for predicting the stress distribution and the failure progress of heterogeneous solids.

Keywords: Parameterized Finite Element Method; Heterogeneous Material; Progressive Failure

1. INTRODUCTION

Titanium/hydroxyapatite (Ti/HAp) biocomposites, which can be produced using laser rapid freeforming technology (Wang and Shaw, 2005), have been used for prostheses and implants in orthopedics and dental care. These materials possess the mechanical benefits of metallic strength together with the bioactivity of HAp. In most Ti/HAp biocomposites, the HAp particles are distributed randomly in the Ti matrix. In other words, for multiphase composites, the constituents are disordered at the microscopic


Guo, Tang and Gao

level. Conventional approaches to the analysis of heterogeneous materials include the use of methods of homogenization, generally based on the assumption that the microstructure is periodic (Yu and Tang, 2007). However, the assumption of periodicity in the microstructure is rarely valid for most heterogeneous materials. Moreover, the standard averaging approaches that give effective or apparent properties of the heterogeneous material cannot describe local micromechanical effects that may be the critical factors in determining the damage evolution and the fatigue life of structural components.

Recently, numerical simulation (Schlangen and Mier, 1992; Blair and Cook, 1998) has become one of the most important approaches for taking heterogeneities into account and providing increased insight into the internal features of the response of heterogeneous solids subjected to applied load. In the past few decades, many methods have been proposed and much progress has been made in modeling heterogeneous materials. In general, these methods may be classified into two groups: one comprises the finite element and boundary element methods based on the self-consistent model; the other is the lattice finite element method based on statistical analysis. Both groups of methods have been applied successfully to model heterogeneous materials. Schlangen and Mier (1992) modeled the brittle failure process of concrete-like materials using a simplified lattice model. Frantziskonis (1997) analyzed the influence of heterogeneity on the displacement field near the structure and obtained an analytical result. Using statistical theory, Blair and Cook (1998) investigated the nonlinear mechanical behaviors and the microscopic heterogeneities of the stress field caused by geometry and size effects. Tang and Kaiser (1998) simulated the progressive failure of rock using a heterogeneous finite element model. Using the R-T2D code, the fracture mechanism and process of a ceramic fixed partial denture framework were simulated under static loading (Kou et al., 2007). Tang and his team have also simulated the deformation behavior of aluminum matrix composites in laser forming (Liu and Tang, 2005). To simulate the elastic modulus of dental composites, a novel method has been developed with a CAD-based finite element modeling technique (Chan and Tang, 2006). Tang and Zhang (2006) have developed a thermo-elasto-plastic graded finite element method for modeling functionally graded materials.

For a heterogeneous material, a mechanical property at a material point, such as Young's modulus E(r), may be defined as a random field function of the spatial coordinate r. Therefore, a parameterized finite element method is introduced in this paper to describe the spatially varying material properties. A simple linear elastic constitutive relation and the maximum principal stress failure criterion are applied to describe the mechanical properties of the microscopically homogeneous phases. To simulate the progressive failure, the element elimination method is used: material points that satisfy a pre-defined failure criterion are removed from the finite element model. This


developed technique can also be extended to the fracture analysis of functionally graded materials by applying an appropriate field function.

2. THE PARAMETERIZED FINITE ELEMENT METHOD

Suppose a heterogeneous solid is represented by a multiphase model, in which the elastic property of each phase is governed by a parameter ξ. In this investigation, the heterogeneity is due to the random spatial distribution of material phases of the same size but different mechanical properties. Therefore, the spatial coordinate r of a microscopic phase is taken as the parameter, such that

E(ξ) = E(r)    (1)

The microscopic ith phase is represented by the ith finite element. If there are N phases in the representative volume element (RVE), the corresponding N finite elements of the same size are meshed. The element number (i) of the ith phase's spatial coordinate r_i is chosen as the seed of the random function used for generating a random number array {w_1, w_2, …, w_n} with w_j ∈ (0, 1). The material property Ω(i), following a normal distribution with probability density function

f(x) = [1 / (S√(2π))] exp[−(x − M)² / (2S²)],

can be generated from

Ω(i) = M + S (Σ_{j=1}^{n} w_j − n/2) / √(n/12)    (2)

where n is the size of the array {w_1, w_2, …, w_n}. Only when n is large enough is the distribution of the variable Ω(i) generated by equation (2) normal; according to Xu (1995), sufficient accuracy is obtained if n is fixed to 12 in equation (2). M and S are the mean value and the standard deviation of the variable Ω(i), respectively. As illustrated by Fig. 1, Young's modulus of the phase at spatial coordinate r_i is given by

E(r)|_{r = r_i} = Ω(i)    (3)

Using this method, a 3D heterogeneous solid with Young’s modulus varying with spatial coordinate

r_i, as shown in Fig. 2, can be modeled. The brighter the gray scale, the higher the value of Young's modulus of the element. With the mean value and the standard deviation of Young's modulus taken as 6 GPa and 0.6 GPa respectively, the volume fraction distribution of Young's modulus E(r) generated by this method is shown in Fig. 3; it is noted that the distribution is normal.
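Equation (2) with n = 12 (where √(n/12) = 1) can be sketched in Python. The use of the element number as the seed follows Fig. 1, but the function name and code are our illustration, not the authors' FORTRAN implementation:

```python
import math
import random

def modulus(element_no, M=6.0, S=0.6, n=12):
    """Equation (2): sum n uniform (0,1) variates seeded by the element number,
    then shift and scale to an approximately normal variable with mean M and
    standard deviation S (here M = 6 GPa, S = 0.6 GPa, as in Fig. 3)."""
    rng = random.Random(element_no)     # the element number acts as the seed
    w = [rng.random() for _ in range(n)]
    return M + S * (sum(w) - n / 2) / math.sqrt(n / 12)
```

Sampling many elements reproduces the normal histogram of Fig. 3 to within sampling error.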


Fig. 1 Assignment of Young's modulus E(r)|_{r = r_i} for a heterogeneous solid. (Flowchart: choose the element number (i) at spatial coordinate r_i as a seed for generating an array {w_1, w_2, …, w_n} using a random function; generate the random variable Ω(i) following a normal distribution by equation (2); set E(r)|_{r = r_i} = Ω(i); then increment i and repeat.)

Fig. 2 Three-dimensional spatial random distribution of Young's modulus E(r)

Fig. 3 Statistical distribution of Young's modulus (volume fraction (%) versus Young's modulus (GPa); mean value 6 GPa, standard deviation 0.6 GPa)


3. CONSTITUTIVE EQUATIONS OF EACH PHASE

A lattice method is used to mesh the RVE (Schlangen and Mier, 1992): the RVE is discretized into elements of the same size and shape. For each element, the material properties are homogeneous; however, the material properties at different spatial coordinates are distributed randomly throughout the RVE following a normal distribution.

For a brittle heterogeneous solid, the constitutive equation of each phase may be described by Hooke's law. When the maximum principal stress of a phase exceeds the corresponding ultimate strength, the phase fails and offers no further resistance to deformation. The constitutive equation and the failure criterion can be expressed as follows:

σ(r) = D(r) ε(r)    if σ_I(r) < σ_b(r);
σ(r) = 0            if σ_I(r) ≥ σ_b(r).    (4)

where σ(r), ε(r), and D(r) are the stress tensor, the strain tensor, and the elastic constitutive matrix at the spatial coordinate r, respectively; σ_I(r) is the maximum principal stress, and σ_b(r) represents the corresponding ultimate strength. D(r) may be defined as

                 E(r)           ⎡ 2[1−ν(r)]   2ν(r)       2ν(r)       0         0         0       ⎤
D(r) = ─────────────────────── ⎢ 2ν(r)       2[1−ν(r)]   2ν(r)       0         0         0       ⎥
        2[1+ν(r)][1−2ν(r)]     ⎢ 2ν(r)       2ν(r)       2[1−ν(r)]   0         0         0       ⎥    (5)
                               ⎢ 0           0           0           1−2ν(r)   0         0       ⎥
                               ⎢ 0           0           0           0         1−2ν(r)   0       ⎥
                               ⎣ 0           0           0           0         0         1−2ν(r) ⎦

where E(r) and ν(r) are Young's modulus and Poisson's ratio at the spatial coordinate r, respectively.

Equations (4) and (5) have been implemented in the VUMAT subroutine of ABAQUS, and in the subroutine, material points that satisfy the failure criterion will be eliminated from the model. ABAQUS/Explicit will hence pass zero stress and strain increments for all deleted material points. Once a material point has been flagged as deleted, it cannot be reactivated.
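The stress update of equation (4) with irreversible deletion can be illustrated with a small Python/NumPy sketch. This shows the logic only, not the actual VUMAT code (which is written in Fortran); the function name and state-flag convention are ours:

```python
import numpy as np

def update_stress(stress, failed, sigma_b):
    """Equation (4): if a material point has already failed, or its maximum
    principal stress reaches the ultimate strength sigma_b, zero its stress
    and flag it as deleted; otherwise leave the elastic stress unchanged."""
    if failed or np.linalg.eigvalsh(stress).max() >= sigma_b:
        return np.zeros((3, 3)), True   # deleted points never reactivate
    return stress, False
```

As in the subroutine, a point that has once been flagged as failed keeps returning zero stress on every subsequent call.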


4. COMPUTATIONAL RESULTS AND DISCUSSIONS

4.1 THE INFLUENCE OF HETEROGENEITY ON STRESS CONCENTRATION

A heterogeneous distribution of the stress and deformation fields usually results from the heterogeneous distribution of material properties. A solid is often weakened by localized stress, the so-called stress concentration. A heterogeneous structure may fail when the local maximum principal stress exceeds the material strength. The stress concentration factor (SCF), which is the ratio of the maximum local stress σ_max to the average stress σ_0 (σ_0 = (1/A) ∫_A σ dA), has been calculated using the parameterized finite element method. In the calculation, Poisson's ratio ν(r) is fixed at 0.17, and the mean value of Young's modulus E(r) is 6.0 GPa; the standard deviation of E(r) ranges from 0.30 GPa to 1.8 GPa. The coefficient of variation (COV), defined as the ratio of the standard deviation to the mean value, is introduced to describe the degree of heterogeneity.

The RVE shown in Fig. 2, with dimensions 1 mm × 1 mm × 1 mm, is discretized into 10 × 10 × 10 C3D8R elements. Uniaxial tension along the y-axis of the heterogeneous solid is simulated by the proposed method. Fig. 4 shows the normalized stress σ_22/σ_0 for the cases when COV is 0.05 and 0.30. It is apparent that the distribution of the stress field is random, and the higher the COV, the larger the dispersion of the stress field.

Fig. 4 The influence of heterogeneity on the normalized stress field σ_22/σ_0: (a) lower heterogeneity, COV = 0.05; (b) higher heterogeneity, COV = 0.30.


The computational results of SCF are listed in Table 1. The relation between SCF and COV is plotted in Fig. 5. It reveals that SCF increases linearly with COV. The relation between SCF and COV can be approximated by the function: SCF = 2.19 × COV + 1 .
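The fitted line can be approximately reproduced from the tabulated values with a least-squares fit that forces the intercept SCF(0) = 1 (the paper does not state its fitting method, so the coefficient we obtain differs slightly in the third digit):

```python
# SCF values from Table 1 and the corresponding COV levels
cov = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30]
scf = [1.119, 1.232, 1.337, 1.447, 1.544, 1.651]

# least-squares slope of (SCF - 1) on COV with the intercept fixed at 1
slope = sum(c * (s - 1) for c, s in zip(cov, scf)) / sum(c * c for c in cov)
print(round(slope, 3))   # close to the reported coefficient 2.19
```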

Table 1 The influence of heterogeneity on the stress concentration factor

Case   COV    Mean value (GPa)   Standard deviation (GPa)   Stress concentration factor
 1     0.05        6.0                  0.30                        1.119
 2     0.10        6.0                  0.60                        1.232
 3     0.15        6.0                  0.90                        1.337
 4     0.20        6.0                  1.20                        1.447
 5     0.25        6.0                  1.50                        1.544
 6     0.30        6.0                  1.80                        1.651

Fig. 5 SCF versus COV of Young's modulus (the parameterized FEM results and the fitted function SCF = 2.19 × COV + 1)

4.2 THE INFLUENCE OF HETEROGENEITY ON LOAD-BEARING CAPABILITY

Damage evolution in brittle materials is a complex process in which heterogeneity plays an important role. Heterogeneity may imply that the exact failure mode is highly dependent on the spatial distribution of initial imperfections. In this section, Poisson's ratio ν(r) and the ultimate strength σ_b(r) are fixed at 0.17 and 3.0 MPa, respectively. The mean value of Young's modulus E(r) is 6.0 GPa, and the COV of

E(r) varies from 0.05 to 0.30.


Fig. 6 Stress-strain curves of materials with different heterogeneity (macroscopic stress Σ_22 (MPa) versus macroscopic strain E_22, for COV = 0.05, 0.10, 0.15, 0.20, 0.25, and 0.30)

Under uniaxial tension, the macroscopic stress-strain relation is illustrated in Fig. 6. The influence of heterogeneity on the elastic deformation of the heterogeneous structure is found to be negligible. When the value of COV is less than 0.20, the stress-strain curves are linear before the onset of failure. However, when COV is greater than 0.20, there are some fluctuations in the stress-strain curves after some elements exceed their stress thresholds, and then the major failure occurs. The relation between the apparent failure stress and the COV of E(r) is depicted in Fig. 7. It reveals that the load-bearing capability of a heterogeneous material decreases with increasing heterogeneity. These behaviors are commonly found in brittle materials (Tang and Kaiser, 1998).

Fig. 7 Apparent failure stress (MPa) versus COV of E(r)


5. CONCLUSION

A theoretical framework for simulating heterogeneous materials has been described, and a parameterized finite element method has been developed. The method is implemented and validated in the commercial software ABAQUS. Using the proposed method, the uniaxial tensile behavior of a heterogeneous brittle solid is simulated. The simulation results agree with the common mechanical behaviors of brittle solids.

ACKNOWLEDGEMENT

The authors would like to acknowledge the support of the Research Grants Council of Hong Kong (PolyU 5276/06E).

REFERENCES

Wang, J. W. and Shaw, L. L. 2005, Rheological and extrusion behavior of dental porcelain slurries for rapid prototyping applications, Materials Science and Engineering A, vol. 397, pp. 314-321.

Yu, W. B. and Tang, T. 2007, Variational asymptotic method for unit cell homogenization of periodically heterogeneous materials, International Journal of Solids and Structures, vol. 44, pp. 3738-3755.

Schlangen, E. and Mier, J. G. M. 1992, Experimental and numerical analysis of micromechanisms of fracture of cement-based composites, Cement and Concrete Composites, vol. 14, no. 2, pp. 105-118.

Blair, S. C. and Cook, N. G. W. 1998, Analysis of compressive fracture in rock using statistical techniques: Part I. A non-linear rule-based model, International Journal of Rock Mechanics and Mining Sciences, vol. 35, no. 7, pp. 837-848.

Frantziskonis, G., Renaudin, P. and Breysse, D. 1997, Heterogeneous solids: Part I: Analytical and numerical 1-D results on boundary effects, Eur J Mech, A/Solids, vol. 16, no. 3, pp. 409-423.

Tang, C. A. and Kaiser, P. K. 1998, Numerical simulation of cumulative damage and seismic energy release during brittle rock failure - Part I: fundamentals, International Journal of Rock Mechanics and Mining Sciences, vol. 35, no. 2, pp. 113-121.

Kou, W., Kou, S. Q., Liu, H. Y. and Sjogren, G. 2007, Numerical modeling of the fracture process in a three-unit all-ceramic fixed partial denture, Dental Materials, doi:10.1016/j.dental.2006.06.039.

Liu, F. R., Chan, K. C. and Tang, C. Y. 2005, Theoretical analysis of deformation behavior of aluminium matrix composites in laser forming, Materials Science and Engineering A, vol. 396, pp. 172-180.



Chan, Y. P., Tang, C. Y. and Chow, C. L. 2006, CAD-based finite element analysis of dental composites using face-centred cubic model, Industrial Engineering Research, vol. 3, no. 2, pp. 101-110.

Tang, C. Y., Zhang, G. and Tsui, C. P. 2006, Graded finite elements for mechanistic analysis of heterogeneous structures, Industrial Engineering Research, vol. 3, no. 1, pp. 62-70.

Xu, S. L. 1995, FORTRAN Common Algorithm and Program Library (Second Edition), Tsinghua University Press, Beijing.


Industrial Engineering Research, Vol. 4 (2) 93-102 (2007) © 2007 Institute of Industrial Engineers (Hong Kong)

ISSN 1027-2208

Modification of Nano-SiO2 Particles with Silane Agent in Supercritical Carbon Dioxide

D. Stojanović1, G.D. Vuković1, A.M. Orlović1, P.S. Uskoković1*, R. Aleksić1, N. Bibić2, and M.D. Dramićanin2

1 Faculty of Technology and Metallurgy, University of Belgrade, Karnegijeva 4, Belgrade, Serbia
2 Institute of Nuclear Sciences Vinča, Belgrade, Serbia
*Corresponding author email: [email protected]

ABSTRACT

Organic modification of the surface of nano-SiO2 particles in a CO2 medium was performed, where CO2 in two different phases, supercritical and liquid, was used as the antisolvent and the silane coupling agent γ-methacryloxypropyltrimethoxysilane was the modification reagent. The results were compared with a modification carried out in a conventional organic solvent. A considerable enhancement of the dispersion and deagglomeration of the nanosilica particles was achieved using supercritical CO2. The quantity of silane coupling agent bonded on the particle surface, obtained from thermogravimetric analyses, reached a maximum with the conventional method, but analysis of transmission electron microscopy micrographs and dynamic light scattering results shows a greater decrease in the average agglomerate size and significantly enhanced dispersion when the silane coupling agent is applied in supercritical, or even liquid, CO2.

Keywords: Nano-SiO2, Supercritical carbon dioxide, Deagglomeration, Coating


Stojanović, Vuković, Orlović, Uskoković, Aleksić, Bibić and Dramićanin

1. INTRODUCTION

Nano-SiO2 particles have many special characteristics and are widely used in the fields of composite materials, biomaterials, sensors, etc., but they agglomerate easily because of their small dimensions and high specific surface area. The surface of nano-SiO2 particles should be chemically modified to achieve better dispersion and deagglomeration as well as better functionality. The chemical modification of the particle surface occurs between hydroxyl groups of the inorganic oxide particles [1,2] and modification agents [3,4]. Silanization is a well-known method for modifying the chemical and physical properties of solid surfaces [5,6], and it is also used as a method for enhancing filler dispersion in organic fluids, reducing agglomeration, and promoting compatibility and bonding with organic matrices. There are a number of studies investigating particle coating or encapsulation using supercritical CO2 (scCO2). Cao et al. [7] used liquid CO2 (lqCO2) and scCO2 as solvents for the surface treatment of silica substrates and silica gels for microelectronic applications. Loste et al. [8] used scCO2 to coat hydroxyapatite and titanium particles with silane coupling agents in order to obtain orthopaedic implant materials. A hydrophilic nano-SiO2 particle surface was changed into a hydrophobic one by modification with scCO2 and a titanate coupling agent [9]. The supercritical antisolvent (SAS) process is based on decreasing the solvent power of a polar liquid solvent in which the substrate is dissolved, by saturating it with carbon dioxide under supercritical conditions, causing the substrate to precipitate or recrystallize [10]. An important characteristic of the SAS process is the complete removal of the organic solvent by simple extraction with pure CO2.

Supercritical CO2 is an ideal processing medium because of its relatively mild critical conditions (Tc = 304.1 K, Pc = 7.38 MPa); moreover, CO2 is non-toxic, inert, relatively inexpensive and recyclable. Supercritical CO2 is a widely used solvent suitable for the treatment of complex nanomaterial surfaces. The low viscosity and the absence of surface tension in supercritical fluids allow the complete wetting of substrates with intricate geometries, including the internal surface of agglomerates [8]. The aim of this work was to develop a technique for treating ultrafine particles in order to modify their surface characteristics using SAS with CO2 in different phases. The surface of nano-SiO2 particles was modified with a silane coupling agent in scCO2, in lqCO2 and by a conventional method (CV method). The modification state, structure and size of the modified particles were analyzed by transmission electron microscopy (TEM), Fourier transform infrared (FTIR) spectroscopy,


Dispersion and deagglomeration of nano-SiO2 particles ......

thermogravimetric analysis (TGA) and dynamic light scattering (DLS). The results obtained by the scCO2, lqCO2 and CV methods were compared.

2. EXPERIMENTAL

Silica nanoparticles (SiO2 powder with an average primary particle diameter of about 7 nm and a specific surface of 380 ± 30 m2/g, Degussa Aerosil 380) were commercially available and used as received. The organosilane γ-methacryloxypropyltrimethoxysilane (Dynasylan® MEMO, Hüls, Germany [11]), with molecular formula C10H20O5Si, was used as a coating agent for the silica surface modification. Surface silanization of the inorganic particles was performed by the scCO2 method, the lqCO2 method and the wet CV method.

Surface modifications of the nano-SiO2 particles using CO2 as an antisolvent (in the supercritical and liquid states) were carried out in an Autoclave Engineers Screening System, shown in Fig. 1. This apparatus is designed for small batch research runs using CO2 as the supercritical or liquid medium, with a maximum allowable working pressure of 41.3 MPa at 511 K. Liquid CO2 is supplied from a CO2 cylinder (1) through a siphon tube. The liquid CO2 is cooled in a cryostat (2) between the cylinder outlet and the pump to prevent vaporization. The pump (3) is a liquid metering pump with a maximum output pressure of 41.3 MPa and an adjustable flow rate from 38 to 380 ml/h. The CO2 is pumped into the system until the required pressure is reached. Back pressure regulators are used to set the system pressure (in the extractor). The CO2 flows through the extractor vessel (4) (300 cm3), which is equipped with an agitator. The phase of the CO2 in the extractor can be adjusted by changing the thermodynamic state parameters. The CO2 flows out of the extractor through the regulation valve and is vented to the atmosphere.

The nano-SiO2 particles were dispersed in absolute ethanol and treated ultrasonically for 15 min. The particle humidity was approximately 2.5%, which is important for the initiation of the silanization reaction. The surface modification of the silica particles was performed with MEMO silane in absolute ethanol, added directly to the colloidal sol of silica particles and absolute ethanol. The mass ratio of added MEMO silane to nano-SiO2 particles was 1.2:1, which is necessary for the formation of a silane monomolecular layer around the silica particles [11]. Subsequently, the extractor was closed and filled with CO2 under constant agitation. This was followed by increasing the temperature and pressure to the scheduled values: (80°C, 16 MPa) for the scCO2 method and (22°C, 8 MPa) for the lqCO2 method. The silanization reaction was initiated by increasing the pressure and temperature to the desired values. The reaction time was 320 min, until the ethanol was completely dissolved in the CO2. Finally, the temperature and pressure were decreased to atmospheric values during the careful CO2 release



from the system. Then the extractor was opened and the modified particles were taken out and dried in the oven at 110°C for 3 h.
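The two process conditions above follow directly from the critical point of CO2 quoted in the introduction (Tc = 304.1 K, Pc = 7.38 MPa). A minimal check, assuming the simple rule that CO2 is supercritical only when both temperature and pressure exceed their critical values (below Tc, CO2 at a pressure above its saturation pressure is liquid):

```python
TC_K = 304.1   # critical temperature of CO2, K
PC_MPA = 7.38  # critical pressure of CO2, MPa

def is_supercritical(t_kelvin, p_mpa):
    """True when both T and P exceed the critical point of CO2."""
    return t_kelvin >= TC_K and p_mpa >= PC_MPA

sc_run = is_supercritical(80 + 273.15, 16.0)  # scCO2 method: 80 degC, 16 MPa
lq_run = is_supercritical(22 + 273.15, 8.0)   # lqCO2 method: 22 degC, 8 MPa
```

The scCO2 run (80°C, 16 MPa) sits well inside the supercritical region, while the lqCO2 run (22°C, 8 MPa) lies below Tc at a pressure above saturation, so the CO2 there is liquid.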

Figure 1. Schematic presentation of the experimental system: (1) CO2 storage tank; (2) cryostat; (3) high pressure liquid pump; (4) extractor vessel

The wet CV method was performed as follows. The nano-SiO2, previously dried for 2 h at 110°C, was dispersed in absolute ethanol/DI water (95:5, v/v) and treated ultrasonically for 15 min. The colloidal silica sol was added under magnetic stirring to a pre-hydrolyzed solution of the silane coupling agent (2 wt.% MEMO) at room temperature for 10 min, and the ultrasonic treatment was continued for 1 h. The pH of this solution was controlled at 4.5 with acetic acid (2.5%), which decreases the tendency of the MEMO silane to homocondense and increases the condensation of the silane with the nanoparticles. The solvent was then distilled off, and the treated nano-SiO2 was dried at room temperature and then in an oven at 110°C for 3 h.

The state of deagglomeration of the modified and unmodified silica powders was analyzed by transmission electron microscopy (TEM; Philips EM 400 microscope at 120 kV). The particle size distribution was obtained using a dynamic light scattering (DLS) system (Brookhaven Instruments Light Scattering System) equipped with a BI-200SM goniometer, a BI-9000 AT correlator, a temperature controller and an argon-ion laser (Coherent INOVA 70C). Thermogravimetric analysis (TGA) of the silica powders, the MEMO silane and the modified particles was performed on a TA Instruments SDT Q600 in a nitrogen environment from 20°C to 800°C at a heating rate of 20°C/min with an initial sample mass of about 5 mg, in order to evaluate the quantity of MEMO silane bonded to the particle surface. FTIR spectra were obtained in transmission mode between 400 and 4000 cm-1, using a BOMEM spectrophotometer (Hartmann & Braun, MB-series, Baptiste, Canada), to detect the presence of silane groups on the surface of the nano-SiO2 particles.



3. RESULTS AND DISCUSSION

The nanosilica powder, with primary particles of 7 nm average diameter, tends to form larger agglomerates because of its high specific surface area. The TEM images in Figs. 2a-c illustrate the degree of dispersion and deagglomeration of the nanosilica powder, with the following trend: unmodified nanosilica < conventional treatment with MEMO silane < scCO2 treatment with MEMO silane. Fig. 2c shows the formation of smaller agglomerates, which consist of a few primary particles and have an overall diameter of about 70 nm.


Figure 2. TEM images showing the better dispersion of the nano-SiO2 particles using silane as the coupling agent: (a) unmodified nano-SiO2; (b) nano-SiO2/MEMO by the CV method; (c) nano-SiO2/MEMO by the scCO2 method

The particle size distribution was obtained from diffusion coefficient measurements of the particles by the DLS method (quasi-elastic light scattering). The particle size distributions and mean diameter values calculated using suitable software are shown in Figs. 3a-d. The results for the unmodified silica powder show a bimodal distribution, with smaller agglomerates of 300-500 nm as well as larger agglomerates of 1.5-2.5 µm. The powders treated by the CV method and the lqCO2 method show very narrow particle size distributions with mean values of about 360 and 330 nm, respectively. The scCO2 method leads to a bimodal distribution in which agglomerates of about 360 nm as well as smaller agglomerates, or coated groups of a few primary particles, of 75-90 nm appear, with an average diameter of 266.5 nm. The DLS and TEM analyses show that the scCO2 drying method is more effective in enhancing the dispersion and deagglomeration of the nanosilica particle fillers than the lqCO2 and CV methods.
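DLS does not measure size directly: it measures the diffusion coefficient and converts it to a hydrodynamic diameter through the Stokes-Einstein relation, d = kB·T / (3πηD). A sketch of that conversion follows; the temperature, solvent viscosity and diffusion coefficient below are illustrative values, not data from this study.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter(diff_coeff, temp_k=298.15, viscosity_pa_s=1.0e-3):
    """Stokes-Einstein relation used in DLS: d = kB*T / (3*pi*eta*D).
    diff_coeff in m^2/s, viscosity in Pa*s; returns the diameter in m."""
    return K_B * temp_k / (3 * math.pi * viscosity_pa_s * diff_coeff)

# A diffusion coefficient of 1.8e-12 m^2/s in a 1.0 mPa*s solvent at 25 degC
# corresponds to a hydrodynamic diameter of roughly 240 nm.
d_h = hydrodynamic_diameter(1.8e-12)
```

Slower diffusion (smaller D) maps to larger apparent diameter, which is why large agglomerates dominate the intensity-weighted distributions in Fig. 3.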



Figure 3. DLS results: (a) unmodified nano-SiO2 (dmean = 1068.3 nm); (b) nano-SiO2/MEMO by the CV method (dmean = 361.7 nm); (c) nano-SiO2/MEMO by the lqCO2 method (dmean = 336.2 nm); (d) nano-SiO2/MEMO by the scCO2 method (dmean = 266.5 nm)

The FTIR spectra of MEMO silane are shown in Fig. 4a. The peaks at 2840-2945 cm-1 result from carbon-hydrogen stretching vibrations. The strong peak at 1722 cm-1 is the carbonyl C=O stretching mode, while the band at 1638 cm-1 is the vinyl C=C stretching mode. The bands at 1087 and 817 cm-1 result from Si-O-CH3 asymmetric and symmetric stretching vibrations, respectively. The spectrum of the unmodified SiO2 sample (Fig. 4b) shows characteristic peaks at 1637, 2921 and 3446 cm-1. The modified samples (Figs. 4b,c) reveal characteristic peaks at 1720 and 1735 cm-1 (C=O stretching mode) and 1637 cm-1 (vinyl stretching mode) for the scCO2, CV and lqCO2 methods, respectively, which indicate the presence of silane groups on the surface of the nano-SiO2.

The TG curve of the MEMO silane is shown in Fig. 5a. From the point corresponding to 110°C, the sample weight decreases significantly up to a temperature of 197°C, and the weight loss in this temperature range is 90.65%. At higher temperatures the weight is almost constant, and at 800°C the remaining silane weight is 1.65%. The TGA curves of the modified and unmodified SiO2 samples are shown in Fig. 5b. The weight of the unmodified SiO2 sample decreases slowly with increasing temperature. The water absorbed on the particle surface desorbs completely at 150°C, and the weight loss above 150°C is due to the dehydration of the hydroxyl groups on the surface [9]. The remaining weight of the unmodified SiO2 particles is 96.66% at 800°C. The TG curves for the lqCO2, scCO2 and CV methods show the decomposition temperature at which significant weight loss of the modified particles occurs. The decomposition temperatures corresponding to the curves



for the lqCO2, scCO2 and CV methods are 256°C, 256°C and 227°C, respectively. It is assumed that the physisorbed silane molecules and the remaining water or ethanol molecules desorb completely below these temperatures, and that the weight loss above the decomposition temperatures is due to the chemisorbed molecules on the SiO2 particle surface [9]. The remaining weights at 800°C corresponding to the TG curves for the lqCO2, scCO2 and CV methods are 90.64%, 82.70% and 76.67%, respectively. The decomposition temperatures of the modified particles are higher than the decomposition temperature of MEMO silane, and therefore the MEMO silane has been chemically bonded to the SiO2 particles [9]. The lqCO2 and scCO2 treated samples are thermally more stable than the CV treated sample because of their higher decomposition temperatures [8].


Figure 4. (a) FTIR transmission spectra of neat MEMO; (b) FTIR transmission spectra of unmodified SiO2 (trace-SiO2) and nano-SiO2/MEMO in scCO2 method (trace-scCO2); (c) FTIR transmission spectra of nano-SiO2/MEMO in CV method (trace-CV method), nano-SiO2/MEMO in lqCO2 method (trace-lqCO2) and nano-SiO2/MEMO in scCO2 method (trace-scCO2)




Figure 5. (a) TG trace of MEMO silane. (b) TG traces of unmodified SiO2 (trace-SiO2), nano-SiO2/MEMO by the lqCO2 method (trace-liquid CO2), nano-SiO2/MEMO by the scCO2 method (trace-scCO2) and nano-SiO2/MEMO by the CV method (trace-CV method)

The quantity of MEMO silane bound to the particle surface (physisorbed and chemisorbed) can be estimated from the weights of the samples at 800°C (Tables 1 and 2) [9,12].

Table 1. TG analysis of MEMO and unmodified nano-SiO2

Sample                 MEMO    Pure SiO2
Weight at 800°C [%]    1.65    96.66

Table 2. TG analysis of modified nano-SiO2

Sample                                  SiO2/MEMO CV method   SiO2/MEMO scCO2   SiO2/MEMO lqCO2
Weight at 800°C [%]                     76.67                 82.70             90.64
Total amount of MEMO silane [%]         26.65                 17.22             6.76
Total amount of MEMO silane [µmol/m2]   2.83                  1.83              0.717

The amount of MEMO silane necessary for monomolecular layer formation (for parallel orientation) is 3.0 µmol/m2 [13], which is similar to the experimentally obtained amount of bonded MEMO silane (2.83 µmol/m2) for the CV method. The amounts for the scCO2 and lqCO2 methods are 1.83 µmol/m2 and 0.717 µmol/m2, respectively. The experimentally obtained amounts for the CV, scCO2 and lqCO2 methods thus correspond to 94.3%, 61.0% and 23.9%, respectively, of the amount required for monomolecular layer formation.
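The tabulated values are mutually consistent under a simple two-component mass balance: the 800°C residue of a modified sample is a weighted average of the residues of pure SiO2 (96.66%) and pure MEMO (1.65%). The sketch below reproduces the amounts in Table 2 from that balance; the mass-balance interpretation itself and the molar mass of MEMO (about 248.4 g/mol for C10H20O5Si) are my assumptions, not stated in the paper.

```python
R_MEMO, R_SIO2 = 1.65, 96.66          # residual weights at 800 degC, %
residues = {"CV": 76.67, "scCO2": 82.70, "lqCO2": 90.64}

MW_MEMO = 248.35   # g/mol for C10H20O5Si (assumed molar mass)
SSA = 380.0        # m^2/g, specific surface of the Aerosil 380 powder
MONOLAYER = 3.0    # umol/m^2 required for a monolayer [13]

results = {}
for method, r_mod in residues.items():
    # Mass balance r_mod = (m_s*R_SIO2 + m_m*R_MEMO) / (m_s + m_m),
    # solved for the MEMO-to-SiO2 mass ratio m_m/m_s.
    memo_per_sio2 = (R_SIO2 - r_mod) / (r_mod - R_MEMO)
    umol_per_m2 = memo_per_sio2 / MW_MEMO / SSA * 1e6
    results[method] = (100 * memo_per_sio2, umol_per_m2,
                       umol_per_m2 / MONOLAYER)
```

Running this recovers the table: about 26.6%, 17.2% and 6.8% MEMO relative to SiO2, i.e. roughly 2.8, 1.8 and 0.72 µmol/m², or about 94%, 61% and 24% of a monolayer for the CV, scCO2 and lqCO2 methods.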

4. CONCLUSIONS

Deagglomeration and enhanced dispersion of nano-SiO2 particles were achieved by means of scCO2, in comparison with the lqCO2 method, the CV method and the unmodified nanoparticles. A chemical reaction occurred between the MEMO silane coupling agent and the active groups on the nano-SiO2 surface. The highest amount of MEMO silane was bonded to the particles by the CV method. Conversely, the silane bonded by the scCO2 and lqCO2 methods was better arranged on the nanoparticle surface because



of the formation of covalent or self-assembled structures. The low processing temperature and the generally higher solubility of silane in lqCO2 than in scCO2 led to the smallest amount of surface-bonded MEMO silane in the latter case; nevertheless, the mean particle diameter obtained was smallest after the scCO2 treatment. These results show that CO2, in the supercritical as well as the liquid state, has a high solvating power like an organic solvent, while its physical properties (low viscosity and surface tension, rapid osmosis into micro-porous materials, and a high diffusion coefficient), similar to those of gases, enable very good and uniform wetting of the nano-SiO2 particle surface. The supercritical process provides a very effective approach to functionalizing inorganic nanoparticles because of the enhanced diffusivity of the functional molecules in the agglomerated interparticle voids.

ACKNOWLEDGEMENTS

The authors are grateful to the Ministry of Science and Environmental Protection, Republic of Serbia, for financial support through the project EUREKA E!3524.

REFERENCES

1. Schindler, P.W. (1981). Surface complexes at oxide-water interfaces, in: M.A. Anderson, A.J. Rubin (Eds.), Adsorption of Inorganics at Solid-Liquid Interfaces, Ann Arbor Science Publishers Inc., Michigan.

2. Yermakov, Yu.L., Kuznetsov, B.N., Zakharov, V.A. (1981). Catalysis by Supported Complexes, Elsevier Scientific Publishing Company, Amsterdam.

3. Csogor, Z., Nacken, M., Sameti, M., Lehr, C.M., Schmidt, H. (2003). "Modified silica particles for gene delivery", Materials Science and Engineering, Vol. 23, pp. 93-97.

4. Wang, H., Zhang, X.H., Wu, S.K. (2003). "A study on photophysical behaviour of silica gel nanoparticles modified by organic molecule in different mediums", Acta Chimica Sinica, Vol. 61, pp. 1921-1929.

5. Matijašević, J., Hassler, N., Reiter, G. and Fringeli, U.P. "In situ ATR FTIR monitoring of the formation of functionalized monolayers on germanium substrate: I. From 7-Octenyltrichlorosilane to 7-Carboxylsilane", Journal of the American Chemical Society, to be published.

6. Plueddemann, E.P. (1991). Silane Coupling Agents, 2nd ed., Plenum Press, New York.

7. Cao, C., Fadeev, A.Y., McCarthy, T.J. (2001). "Reactions of organosilanes with silica surfaces in carbon dioxide", Langmuir, Vol. 17, pp. 757-761.

8. Loste, E., Fraile, J., Fanovich, M.A., Woerlee, G.F., Domingo, C. (2004). "Anhydrous supercritical carbon dioxide method for the controlled silanization of inorganic nanoparticles", Advanced Materials, Vol. 16, pp. 739-744.

9. Wang, Z.W., Wang, T.J., Wang, Z.W., Jin, Y. (2006). "Organic modification of nano-SiO2 particles in supercritical CO2", Journal of Supercritical Fluids, Vol. 37, pp. 125-130.

10. Jung, J., Perrut, M. (2001). "Particle design using supercritical fluids: Literature and patent survey", Journal of Supercritical Fluids, Vol. 20, pp. 179-219.

11. Dynasylan® Adhesion Promoters, Hüls, Technical Information.

12. Wang, Z.W., Wang, T.J., Wang, Z.W., Jin, Y. (2006). "The adsorption and reaction of a titanate coupling reagent on the surfaces of different nanoparticles in supercritical CO2", Journal of Colloid and Interface Science, Vol. 304, pp. 152-159.

13. Posthumus, W., Magusin, P.C.M.M., Brokken-Zijp, J.C.M., Tinnemans, A.H.A., Van der Linde, R. (2004). "Surface modification of oxidic nanoparticles using 3-methacryloxypropyltrimethoxysilane", Journal of Colloid and Interface Science, Vol. 269, pp. 109-116.


Industrial Engineering Research, Vol. 4 (2) 103-112 (2007) © 2007 Institute of Industrial Engineers (Hong Kong)

ISSN 1027-2208

Reliability Comparison of Rigid Flex Printed Circuit using Various Materials and Design Build-ups

S.Q. Huang and K.C. Yung
Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hung Hom, Hong Kong, China

ABSTRACT

Rigid Flex Printed Circuits (RFPC) have been widely used not only in military and aerospace applications but also in commercial electronic products because of their light weight, small volume and high reliability. Mismatch of the coefficient of thermal expansion causes reliability problems in thermal environments, and different materials and build-ups result in different reliabilities. In this study, RFPC built with three different bonding materials and three types of build-up were fabricated, and reliability tests were conducted. Weibull analysis was used to compare the reliability, and a numerical model of the PTH barrel copper explains the experimental results. A material with a high glass transition temperature (Tg) and a low coefficient of thermal expansion (CTE) is preferred in RFPC manufacture, and using build-up TypeⅢ can improve the reliability of RFPC.

Keywords: Rigid Flex Printed Circuit (RFPC), CTE, Reliability

1. INTRODUCTION

Rigid Flex Printed Circuits (RFPC) offer tremendous benefits to electronic products since they reduce weight and volume and simplify the assembly process, while increasing the reliability of the interconnect and saving costs. RFPC have historically been used in aerospace and military applications, which require lower weight and smaller volume as well as more reliable performance. Driven by the trend of miniaturization, RFPC are also widely used in consumer electronic products, such as laptop computers, bending and sliding cell phones, digital cameras, medical devices, etc.


Huang and Yung

As RFPC are hybrid circuits, the combination of rigid and flexible materials brings technological and process challenges. Problems have been encountered because of the relatively high coefficient of thermal expansion (CTE) of typically used insulator materials, such as acrylic adhesives, in the construction of rigid flex circuits. When RFPC built with these materials are subjected to high temperatures in thermal stress testing, as in solder reflow and the like, the flexible material expands at a much higher rate than the other materials. Typically, the rate of expansion of the acrylic adhesive is about 30 percent, while that of copper is about 4 percent. This mismatch of z-axis CTEs in RFPC can induce stresses on the plated through hole (PTH) interconnections during cyclical thermal stress testing. Once the tensile stress caused by z-axis expansion exceeds the elastic limit of the plated copper, failures such as lifted pads and barrel copper cracks occur. Using different types of material or special build-ups may reduce the reliability impact caused by the CTE mismatch. In important early studies, material considerations [1-3] for rigid flex circuits were explored and various build-up methods [4-6] were demonstrated to improve the reliability of RFPC. However, research on the effect of different kinds of materials and various build-ups on reliability is still inadequate. This paper presents the impact on reliability of rigid flex printed circuit boards using various materials and different types of build-ups. Rigid flex circuit boards were fabricated and reliability tests conducted. The influence of material and build-up on RFPC is determined by a comparison of the reliability test results. The findings provide guidelines for RFPC designers and manufacturers.
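The mismatch argument above can be made concrete with a first-order sketch. The 30% and 4% expansion figures are the ones quoted in the text; the elastic strain limit of plated copper used here is a hypothetical illustrative value, not data from this study.

```python
# Expansion over a solder-reflow temperature excursion (figures from the text).
acrylic_expansion = 0.30   # acrylic adhesive: about 30 percent
copper_expansion = 0.04    # copper: about 4 percent

# The PTH barrel is anchored through the board, so to first order the copper
# must accommodate the difference as imposed tensile strain.
mismatch_strain = acrylic_expansion - copper_expansion

# Hypothetical elastic limit of plated copper (illustrative only).
copper_elastic_limit = 0.003
barrel_at_risk = mismatch_strain > copper_elastic_limit
```

Even a small fraction of this 26% mismatch, transmitted into the barrel, exceeds the elastic range of the plating, which is why lifted pads and barrel cracks appear under thermal cycling.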

2. EXPERIMENTAL

2.1 Materials and build-ups

Two types of flexible laminate and three types of bonding material are utilized and tested in this study. The adhesiveless flexible copper clad laminate (FCCL) is a high performance laminate and is more suitable for sophisticated RFPC applications [7-8]. The CTE of adhesiveless laminates is lower than that of adhesive based FCCL, since the traditional bonding adhesive is eliminated by using direct sputtering or casting technology. In this study, adhesiveless and adhesive laminates are both used to build RFPC and their performance is compared. The other, and most significant, aspect affecting RFPC reliability is the mismatched CTE of the rigid and flex materials, especially that of the bonding sheet used to combine the rigid and flexible circuits. Acrylic adhesive is the most common bonding sheet for flexible printed circuits; however, its CTE is the highest. To reduce the expansion in a thermal environment, low CTE bonding sheets (modified acrylic adhesive with woven glass fabric, epoxy adhesive, and no-flow prepreg) are tried and evaluated.


Reliability Comparison of Rigid Flex Printed Circuit using Various Materials……

Three types of build-up, named typeⅠ, typeⅡ and typeⅢ, are depicted in Figure 1, Figure 2 and Figure 3, respectively. TypeⅠ is a typical six-layer rigid flex build-up in which all areas contain adhesive; in this study, an epoxy based adhesive is utilized. TypeⅡ uses glass fabric impregnated adhesive (GFIA) and no-flow prepreg (1080) as bonding materials instead of the traditional acrylic or epoxy adhesive. Adhesive based laminates are used in both of these types, whereas adhesiveless based FCCL is used in build-up typeⅢ. Another difference between typeⅡ and typeⅢ is the coverlayer: in build-up typeⅡ, the coverlayer extends throughout the whole panel, whereas in typeⅢ the coverlayer does not extend into the rigid section of the RFPC. Table 1 presents the matrix of the constructions of the various materials and build-ups.


Figure 1 Lay-up construction of Type I

Figure 2 Lay-up construction of Type II

The CTE of the bonding materials is critical for this study. Published data on the thermo-mechanical properties can be obtained but are inadequate for each material used in the build-up. Therefore, for a better understanding and comparison of the performance of each material used, the CTE value must be identified. A separate experiment is therefore required to measure the glass transition temperature (Tg) and CTE of each bonding material.

Figure 3 Lay-up construction of Type III

2.2 Samples

RFPC with a six-layer build-up structure were fabricated by the sequential lamination and subtractive method. The FCCL is imaged and etched to form a circuit pattern, and a coverlayer is then laminated to provide electrical insulation and environmental protection. The coverlayer surface was scrubbed with pumice slurry to increase adhesion. The etched rigid board and pre-cut bonding sheet were then applied to the flexible board and laminated with a high temperature vacuum laminator. The excellent chemical resistance of the adhesive to typical chemical cleaning solutions makes it hard for



the traditional chemical solutions to remove smears resulting from the hole drilling process. Therefore, plasma etching was used as a desmearing process with optimized parameters according to previous research [9].

To measure the CTE and Tg of each bonding material, several sheets of pure epoxy based adhesive, glass fabric impregnated adhesive (GFIA) and no-flow prepreg (1080) were laminated and completely cured. The pressed adhesives were baked at 105 oC for 1 hour before testing, in order to remove moisture. The specimens were cut into 6.35 mm x 6.35 mm sections and were tested in a Thermo-mechanical Analyzer (TMA).

Table 1 Matrix of materials and build-up constructions

Specimen (ABC)   Buildup (A)   FCCL                 Bonding sheet material type (B)   qty (C)   Coverlay in rigid area
1E1              typeⅠ (1)     Adhesive based       Epoxy adhesive (E)                1         Yes
2M1              typeⅡ (2)     Adhesive based       Modified acrylic (M)              1         Yes
2M2                                                 Modified acrylic (M)              2
2N1                                                 No flow prepreg (N)               1
2N2                                                 No flow prepreg (N)               2
3M1              typeⅢ (3)     Adhesiveless based   Modified acrylic (M)              1         No
3M2                                                 Modified acrylic (M)              2
3N1                                                 No flow prepreg (N)               1
3N2                                                 No flow prepreg (N)               2

2.3 Reliability tests

2.3.1 Thermal Shock Test

Thermal shock testing aims to determine the physical endurance of RFPC under sudden changes of temperature. The specimen is exposed to a series of high and low temperature excursions to cause physical fatigue. In this study, a cycling test was employed at -65 oC/125 oC, with a 15 minute dwell at each temperature, in a Climats Spirale thermal shock test chamber. The terminal resistance was recorded every 5 cycles, up to 1000 cycles. A specimen was classified as failed when the resistance value varied by more than +/-10%.

2.3.2 Cross-section observation

Cross-sections were examined to find and analyze PTH failures after the cyclic fatigue thermal shock test.

2.3.3 TMA Test

Thermal Mechanical Analysis (TMA) was used to determine the glass transition temperature and the thermal expansion of the dielectric materials used in the RFPC. A Perkin Elmer TMA 7 was used to measure the Tg and CTE of each bonding material used in the experiments.
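The ±10% resistance criterion from the thermal shock test (section 2.3.1) can be expressed as a small helper; the resistance readings below are hypothetical, for illustration only.

```python
def has_failed(r_initial, r_measured, tol=0.10):
    """A specimen fails when its terminal resistance drifts by more
    than +/-10% from the initial value."""
    return abs(r_measured - r_initial) / r_initial > tol

# Resistance logged every 5 cycles (ohms, hypothetical values).
readings = [1.00, 1.02, 1.05, 1.08, 1.13]
first_fail_index = next(
    (i for i, r in enumerate(readings) if has_failed(readings[0], r)), None)
```

Scanning the log this way identifies the first reading outside the band, and the corresponding cycle count (5 x index) is what feeds the cycles-to-failure comparison in section 3.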


Reliability Comparison of Rigid Flex Printed Circuit using Various Materials……

3. RESULTS AND DISCUSSION

3.1 Test results

The cycle numbers at which the failure mode was observed for each of the four specimens are presented in Table 2, and the average cycles are depicted in Figure 4. Type Ⅲ with no-flow prepreg can withstand over 1000 cycles without failure, whereas failure was observed at less than 100 cycles when a modified acrylic adhesive was used with the Type Ⅲ build-up. No-flow prepreg also showed better performance than modified acrylic adhesive in the Type Ⅱ construction. Figure 5 presents the TMA results for each bonding material used in the experiments. No-flow prepreg has the highest Tg and the lowest CTE of the three, and these properties can improve the reliability of RFPC in high temperature environments. Glass fabric impregnated adhesive, a modified acrylic, still shows a very low Tg and a high CTE, but performs better than pure acrylic adhesive.

Figure 4 Cycle numbers of failure in thermal shock tests

Figure 5 Tg and CTE test results for bonding material

On the basis of the above experimental results, the dielectric material and the type of build-up play prominent roles in the reliability of RFPC. Type Ⅲ with no-flow prepreg, the combination of an adhesiveless base laminate and a low CTE bonding sheet, showed the best performance in this study. The low Tg and high CTE of modified acrylic, however, resulted in a very short lifetime even when the adhesiveless FCCL (Type Ⅲ) was used as the base core material. The differences in life resulting from the bonding sheets are also exhibited in the Type Ⅱ construction. In summary, adhesiveless FCCL performs better than adhesive based laminates, and high Tg, low CTE bonding sheets give a longer thermal cycle life than low Tg, high CTE bonding sheets. Since a single sheet of bonding material expands less than two sheets, the RFPCs using a single sheet of adhesive exhibited better performance than those using a two layer adhesive.


Huang and Yung

Table 2 Results of thermal shock test

Type    | Specimen | Cycle for specimen 1 | Cycle for specimen 2 | Cycle for specimen 3 | Cycle for specimen 4 | Average cycle
Type Ⅰ  | 1E1      | 85                   | 130                  | 155                  | 170                  | 135
Type Ⅱ  | 2M1      | 45                   | 55                   | 80                   | 120                  | 75
Type Ⅱ  | 2M2      | 30                   | 45                   | 70                   | 75                   | 55
Type Ⅱ  | 2N1      | 145                  | 170                  | 220                  | 225                  | 190
Type Ⅱ  | 2N2      | 60                   | 75                   | 110                  | 195                  | 110
Type Ⅲ  | 3M1      | 55                   | 90                   | 110                  | 125                  | 95
Type Ⅲ  | 3M2      | 30                   | 45                   | 50                   | 90                   | 55
Type Ⅲ  | 3N1      | 1000                 | 1000                 | 1000                 | 1000                 | 1000
Type Ⅲ  | 3N2      | 1000                 | 1000                 | 1000                 | 1000                 | 1000

3.2 Weibull analysis

Weibull analysis is the most popular method of analyzing and predicting failures. In this study, the thermal cycles (time to failure) were recorded for four specimens of each construction. Plotting failure data on Weibull paper requires an estimate of the cumulative percent failure F(t_i) for each observed failure time t_i. Since the sample size in this study is small, Bernard's approximation is utilized to estimate F(t_i):

F(t_i) ≈ (i − 0.3) / (n + 0.4)     (1)

where i denotes the rank of an observation when the data are sorted in ascending order, and n is the

sample size. Figure 6 exhibits the Weibull probability plot of each specimen's reliability. As cases with over 1000 cycles were not recorded as failures, the data for specimens 3N1 and 3N2 cannot be plotted in the Weibull analysis. The predicted cycle numbers at a probability of failure of 63.2%, called the characteristic life, are 207, 151.4, 127.8, 107.9, 86.1, 63.7 and 63.3 for specimens 2N1, 1E1, 2N2, 3M1, 2M1, 3M2 and 2M2, respectively.

Figure 6 Weibull plot of each specimen

The reliability of RFPC is decreasing from

the no-flow prepreg base to the epoxy base and to the modified acrylic based adhesive, while the CTE increases in the same order. Although the reliability of RFPC depends on many factors, the Weibull analysis indicates that the CTE of the bonding material is the most effective predictor of RFPC reliability. RFPCs built with lower CTE bonding adhesives exhibited higher reliability than those with a higher CTE bonding material.
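As an illustrative sketch (not part of the original study), Bernard's approximation in Equation 1 can be combined with a least-squares fit of the linearized Weibull CDF to estimate the characteristic life from a small failure sample. The data below are the 2N1 failure cycles from Table 2; the function names are ours.

```python
import math

def bernard_ranks(n):
    """Median-rank estimates F(t_i) = (i - 0.3) / (n + 0.4), i = 1..n (Eq. 1)."""
    return [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]

def weibull_fit(cycles):
    """Least-squares fit of ln(-ln(1 - F)) = beta*ln(t) - beta*ln(eta).

    Returns (beta, eta): the Weibull shape parameter and the
    characteristic life (the 63.2% failure probability point)."""
    t = sorted(cycles)
    F = bernard_ranks(len(t))
    x = [math.log(ti) for ti in t]
    y = [math.log(-math.log(1.0 - Fi)) for Fi in F]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    eta = math.exp(mx - my / beta)   # regression line passes through the means
    return beta, eta

# Failure cycles of specimen 2N1 (Table 2)
beta, eta = weibull_fit([145, 170, 220, 225])
print(f"shape beta = {beta:.2f}, characteristic life eta = {eta:.1f} cycles")
```

For 2N1 this fit gives a characteristic life of about 207 cycles, consistent with the value quoted above.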


3.3 PTH barrel stress model

PTH barrel failure was observed in the cross-sections of the failed RFPCs after thermal cycling testing, as Figures 7 and 8 show. To analyze and simulate the PTH barrel failure, considerable research using analytical modeling [10-16] of the stresses and strains in the plated hole during thermal excursions has been carried out. The best-known PTH reliability models for cyclic fatigue are the Coffin-Manson model and the Engelmaier model, which are described in IPC technical reports [13, 14]. These two models are used here to discuss the experimental results.

Figure 7 Micro-section of failed sample 2M2

Figure 8 Micro-section of failed sample 2N1

The Coffin-Manson model aims to compute the PTH barrel stress induced by the thermal loading. This stress model is based on a uniform tensile stress distribution along the PTH copper barrel. Both linear elastic and linear plastic stress models are considered, expressed as Equations 2 and 3 [14]:

σ = (α_E − α_Cu) ΔT · A_E E_E E_Cu / (A_E E_E + A_Cu E_Cu),  for σ ≤ S_Y     (2)

σ = [(α_E − α_Cu) ΔT + S_Y (E_Cu − E′_Cu) / (E_Cu E′_Cu)] · A_E E_E E′_Cu / (A_E E_E + A_Cu E′_Cu),  for σ > S_Y     (3)

A_E ≅ (π/4) [(h + d)² − d²]     (4)

A_Cu ≅ (π/4) [d² − (d − 2t)²]     (5)

where σ = PTH barrel stress; S_Y = PTH barrel copper yield strength; E = subscript denoting epoxy; Cu = subscript denoting PTH barrel copper; α = coefficient of thermal expansion; ΔT = temperature range of thermal cycling; A_E = influence area of PCB; A_Cu = cross-sectional area of the PTH copper barrel; E = modulus of elasticity; E′ = modulus of plasticity; h = thickness of PCB; d = drilled PTH diameter; t = thickness of the PTH copper barrel.

The PTH schematic and design parameters are shown in Figure 9. The stress can be computed by the above equations; however, the strains are more important than the stresses in evaluating the cyclic fatigue life. Therefore, a model for reliability prediction was developed based on the stress model. The fatigue life can be predicted by iteratively solving Equation 6 [14].
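To make Equations 2-5 concrete, the following sketch evaluates the barrel stress, switching to the plastic branch when the elastic estimate exceeds the copper yield strength. All numerical values are illustrative assumptions, not measurements from this study.

```python
import math

def pth_barrel_stress(alpha_E, alpha_Cu, dT, E_E, E_Cu, Ep_Cu, S_Y, h, d, t):
    """PTH barrel stress per Equations 2-5 (Ep_Cu is the modulus of plasticity)."""
    A_E = math.pi / 4 * ((h + d) ** 2 - d ** 2)        # Eq. 4: influence area of PCB
    A_Cu = math.pi / 4 * (d ** 2 - (d - 2 * t) ** 2)   # Eq. 5: barrel cross-section
    mismatch = (alpha_E - alpha_Cu) * dT               # thermal strain mismatch
    sigma = mismatch * A_E * E_E * E_Cu / (A_E * E_E + A_Cu * E_Cu)   # Eq. 2
    if sigma > S_Y:                                    # beyond yield: use Eq. 3
        sigma = ((mismatch + S_Y * (E_Cu - Ep_Cu) / (E_Cu * Ep_Cu))
                 * A_E * E_E * Ep_Cu / (A_E * E_E + A_Cu * Ep_Cu))
    return sigma

# Assumed inputs: CTE in 1/degC, moduli and strength in MPa, lengths in mm;
# dT = 190 degC spans the -65/+125 degC thermal shock excursion.
sigma = pth_barrel_stress(alpha_E=60e-6, alpha_Cu=17e-6, dT=190,
                          E_E=17000, E_Cu=82000, Ep_Cu=1400,
                          S_Y=172, h=1.6, d=0.3, t=0.025)
print(f"PTH barrel stress = {sigma:.0f} MPa")
```

With these assumed values the elastic estimate exceeds the yield strength, so the plastic branch applies and the stress lands just above S_Y.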


D_f^0.75 N_f^−0.6 + 0.9 (S_u / E_Cu) [exp(D_f) / 0.36]^(0.1785 log(10^5 / N_f)) − Δε = 0     (6)

Δε = σ / E_Cu,  for σ ≤ S_Y     (7)

Δε = S_Y / E_Cu + (σ − S_Y) / E′_Cu,  for σ > S_Y     (8)

where N_f = expected mean fatigue life (cycles to failure); D_f = fracture strain (fatigue ductility) of PTH barrel copper; S_u = tensile strength of PTH barrel copper; Δε = total cyclic strain range.

To simplify the calculation, a graphical solution, as an alternative to Equation 6, is shown in Figure 10. From this Manson-Coffin plot, it is easy to predict the cyclic life for a given cyclic strain range of copper with the assumed properties. The model can therefore be used to explain the experimental results, even though, in reality, the construction of RFPC is more complicated. A higher CTE mismatch results in higher stress (Equations 2 and 3), higher stress increases the strain range (Equations 7 and 8), and a higher strain range means a lower cyclic life according to Equation 6 or the graphical prediction (Figure 10). Therefore a higher CTE of the bonding material results in lower reliability, and no-flow prepreg is preferred in this study. These equations can also be used to find the relationship between cyclic life and layer thickness as well as the thickness of the plated hole. Both a higher dielectric thickness (h) and a lower plating thickness (t) result in higher stress (through Equations 4 and 5), which eventually reduces the cyclic life. This explains why the reliability of RFPC using a single sheet of bonding material is better than that using two sheets of bonding material in this experiment.
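Equations 6-8 can be sketched numerically as follows; since the left side of Equation 6 decreases monotonically in N_f, a simple bisection on a log scale suffices. The material values in the example are assumptions for illustration, not measured properties from this study.

```python
import math

def strain_range(sigma, E_Cu, Ep_Cu, S_Y):
    """Total cyclic strain range, Equations 7 and 8."""
    if sigma <= S_Y:
        return sigma / E_Cu
    return S_Y / E_Cu + (sigma - S_Y) / Ep_Cu

def fatigue_life(d_eps, D_f, S_u, E_Cu):
    """Iteratively solve Equation 6 for N_f (log-scale bisection)."""
    def g(N):   # left-hand side of Eq. 6; strictly decreasing in N
        return (D_f ** 0.75 * N ** -0.6
                + 0.9 * (S_u / E_Cu)
                * (math.exp(D_f) / 0.36) ** (0.1785 * math.log10(1e5 / N))
                - d_eps)
    lo, hi = 1.0, 1e7
    for _ in range(200):
        mid = math.sqrt(lo * hi)   # geometric midpoint
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return mid

# Assumed copper properties: D_f = 0.3, S_u = 275 MPa, E_Cu = 82000 MPa,
# E'_Cu = 1400 MPa, S_Y = 172 MPa, and a post-yield barrel stress of 180 MPa.
d_eps = strain_range(180.0, 82000, 1400, 172)
Nf = fatigue_life(d_eps, D_f=0.3, S_u=275, E_Cu=82000)
print(f"strain range = {d_eps:.4f}, predicted mean fatigue life = {Nf:.0f} cycles")
```

As the text argues, raising the strain range (for example via a higher CTE bonding sheet) lowers the predicted life.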

Figure 10 Manson-Coffin cyclic strain vs. fatigue life plot for electrodeposited copper

Figure 9 Schematic and Design Parameters of PTH

Three types of bonding material are used in this study; however, the Tg and CTE vary for each material. To compare these materials, it is necessary to use the concept of an effective CTE [15], which is defined as:

Effective CTE (ppm/°C) = CTE1 × [(Tg − Tr) / (T − Tr)] + CTE2 × [(T − Tg) / (T − Tr)]     (9)

where T = maximum temperature (250 °C); Tr = reference temperature (25 °C); Tg = glass transition


temperature; CTE1 = coefficient of thermal expansion below Tg; CTE2 = coefficient of thermal expansion above Tg. The effective CTE concept takes the Tg into consideration and is therefore more useful for comparing materials than the CTE alone. Table 3 shows the effective CTE of each bonding material used in this study, and the ranking is consistent with the experimental results. A lower effective CTE results in better reliability; therefore, to produce more reliable RFPC, high Tg and low CTE materials are preferred.

Table 3 Effective CTE of bonding materials

Bonding material  | Tg (°C) | CTE below Tg (ppm/°C) | CTE above Tg (ppm/°C) | Effective CTE (ppm/°C)
Epoxy adhesive    | 46      | 168                   | 348                   | 331
Modified acrylic  | 50.8    | 285                   | 397                   | 384
No flow prepreg   | 128.8   | 55.2                  | 332.4                 | 205
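Equation 9 is easy to check against Table 3; the short sketch below (our own, with the table's values hard-coded) reproduces the effective CTE column to within rounding.

```python
def effective_cte(cte1, cte2, tg, t_max=250.0, t_ref=25.0):
    """Equation 9: CTE below/above Tg weighted by the fraction of the
    temperature range spent in each regime."""
    span = t_max - t_ref
    return cte1 * (tg - t_ref) / span + cte2 * (t_max - tg) / span

# Tg in degC, CTE values in ppm/degC (Table 3)
for name, tg, cte1, cte2 in [("Epoxy adhesive", 46.0, 168.0, 348.0),
                             ("Modified acrylic", 50.8, 285.0, 397.0),
                             ("No flow prepreg", 128.8, 55.2, 332.4)]:
    print(f"{name}: effective CTE = {effective_cte(cte1, cte2, tg):.1f} ppm/degC")
```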

4. CONCLUSION

A study of the effect of materials and build-ups on RFPC reliability was conducted. Six-layer RFPC specimens using three different bonding materials and three build-ups were fabricated. Reliability studies were implemented and the thermal properties of each bonding material (epoxy, modified acrylic and no-flow prepreg) were measured. No-flow prepreg exhibited the highest Tg and the lowest CTE. The Type Ⅲ build-up with no-flow prepreg showed sufficient reliability of more than 1000 cycles, whereas all the other combinations exhibited thermal cycle lives from about 50 to 200 cycles. Weibull analysis was implemented to compare the reliability of each specimen. The prediction from the Weibull plot shows the Type Ⅲ build-up with no-flow prepreg to be the most reliable. A numerical model of the PTH barrel stress was used to explain the experimental results. Based on the experimental results, the data analysis and the PTH model, it can be concluded that: (1) high Tg and low CTE materials result in high reliability of RFPC; (2) the Type Ⅲ build-up is preferred, as it eliminates adhesive as much as possible in order to reduce the stress caused by adhesive expansion in a thermal environment; (3) reducing the thickness of the dielectric and increasing the PTH barrel copper thickness can improve the reliability of RFPC.

ACKNOWLEDGEMENT The work described in this study is supported by a Central Research Grant from The Hong Kong Polytechnic University (RGNW).


REFERENCES
[1] Keating, J., Larmouth, R. and Bartlett, G. (1999), "Redefining the Cost/Performance Curve for Rigid Flex Circuits", IPC Printed Circuits Expo, March 14-18.
[2] Shepler, T.H. (1988), "Material Considerations For Flex-Rigid Multilayers", 2nd International SAMPE Electronics Conference, June 14-16, pp. 491-502.
[3] Tsunashima, E. et al. (1990), "An Aramid/Epoxy Substrates Increased the Facility of Rigid-flexible Substrates for High Performance Use", Proceedings of the Electronic Components Conference, Vol. 2, pp. 901-906.
[4] Rapala-Virtanen, T. and Jokela, T. (2005), "New materials and build-up constructions for advanced rigid-flex PCB applications", Circuit World, Vol. 31, No. 4, pp. 21-24.
[5] Wille, M. (2006), "Evolution of a wiring concept - 30 years of flex-rigid circuit board production", Circuit World, Vol. 32, No. 2, pp. 12-17.
[6] Shorrock, B. and Netting, K. (2001), "Focus on Multilayer Circuits", Printed Circuit Europe, 4th Quarter, pp. 47-51.
[7] Nakao, T. (1989), "The adhesiveless polyimide laminate as a circuitry material", Proceedings of the Sixth IEEE/CHMT International Electronic Manufacturing Technology Symposium (Japan IEMT Symposium), 26-28 April 1989, pp. 132-135.
[8] Fronz, V. (1991), "Adhesiveless Flexible Copper-polyimide Thin Film Laminates", Circuit World, Vol. 17, No. 4, pp. 15-18.
[9] Yung, K.C., Wang, J., Huang, S.Q., Lee, C.P. and Yue, T.M. (2006), "Modeling the Etching Rate and Uniformity of Plasma-aided Manufacturing Using Statistical Experimental Design", Materials and Manufacturing Processes, Vol. 21, pp. 899-906.
[10] Oien, M.A. (1976), "A Simple Model for the Thermo-Mechanical Deformations of Plated-Through-Holes in Multilayer Printed Wiring Boards", Proceedings of the 14th Annual IEEE Reliability Physics Symposium, pp. 121-128.
[11] Hagge, J.K. (1980), "Strain-Induced Failures in Plated-Through-Holes", Proceedings of the Printed Circuit World Conference, 1980.
[12] Gray, F. (1987), "Reliability, Thermal, and Thermo-Mechanical Characteristics of Polymer on Metal Multilayer Boards", Proceedings of the Printed Circuit World IV Conference, Tokyo, 1987.
[13] IPC (1988), IPC-TR-579: Round Robin Reliability Evaluation of Small Diameter Plated Through Holes in Printed Wiring Boards, IPC Technical Report, September 1988.
[14] Engelmaier, W. (1986), IPC-TR-484: Results of IPC Copper Foil Ductility Round Robin Study, IPC Technical Report, April 1986.
[15] Iannuzzelli, R. (1991), "Predicting plated-through-hole reliability in high temperature manufacturing processes", Proceedings of the 41st Electronic Components and Technology Conference, 11-16 May 1991, pp. 410-421.
[16] Xie, J.S. et al. (2006), "A PTH Reliability Model Considering Barrel Stress Distributions and Multiple PTHs in a PWB", Proceedings of the 44th Annual IEEE International Reliability Physics Symposium, March 2006, pp. 256-265.


Industrial Engineering Research, Vol. 4 (2) 113-122 (2007) © 2007 Institute of Industrial Engineers (Hong Kong)

ISSN 1027-2208

A Data Processing Algorithm for Digital 3D Motion Analysis

C.P. Tsui 1*, C.Y. Tang 1 and Y.M. Wong 2

1 Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong.
2 Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong.

ABSTRACT

Motion analysis systems have been widely used in tracking and analyzing human body motion, both for visualizing a new engineering design and for the ergonomic analysis of an existing design. In this paper, a data processing algorithm is proposed to enhance the noise immunity of a motion analysis system so that a smooth trajectory of a captured object can be generated. The algorithm takes a proportional-integral-derivative (PID) approach. The integral process deals with removing unwanted vibration from the raw data and determining the displacement curve of the captured object using the quadratic spline interpolation method, while the derivative process is used to obtain other motion outputs such as the velocity of the object. To illustrate the applicability of the proposed algorithm, a simple pendulum experiment was carried out as an illustrative example. After implementing the PID data processing algorithm, the estimated velocity curve of the simple pendulum was found to have high noise immunity compared with the raw data.

Keywords: Motion analysis system, data processing algorithm, spline curve

1. INTRODUCTION

Motion analysis systems can be effective in creating realistic dynamic human animation for movie production, and in recording quantitative data of human movement for ergonomic, biomechanical or medical research and evaluation (Durward et al 1999; Richards 1999). For the optical-type motion


analysis system, reflective markers are placed on the tested subjects, and at least three synchronized video cameras are required, each equipped with an infra-red source aligned to illuminate that camera's field of view. Additional computerized hardware and software are required for each camera view in order to calculate the 3D coordinate position of each marker (Ehara et al 1995).

Although motion analysis systems have become popular for various applications over the past couple of decades and the technologies have been advancing, there are still a few noise sources associated with these systems, including false reflections from the adjacent environment and camera combination effects that cause discontinuities in the reflective markers' trajectory data (Brown and Stanhope 1995; Ehara et al 1997). To enhance the noise immunity of motion analysis systems, data filtering could be adopted, but selection of a suitable cut-off frequency is required (Elliott et al 2007). This calls for an algorithm that increases the noise immunity of the motion analysis system without the need to select a filtering frequency. Although the polynomial interpolation method can be used to obtain the trend of a data set, the complex motion trajectory of a captured object may not be well approximated by this method. The quadratic spline interpolation method is preferred to the polynomial interpolation method because complex motion curve shapes can be represented by a number of piecewise low-degree polynomial curves (Bartels et al 1987).

The present study originates from the concept of the proportional-integral-derivative (PID) algorithm (Liptak 1995), a closed-loop feedback mechanism used in industrial control systems. The proposed data processing algorithm also consists of three processes, proportional, integral and derivative, which operate in sequence from the proportional to the derivative process. Unlike the traditional algorithm, the output is not the summation of these three terms. The objective of the proposed algorithm is to enhance the noise immunity of the motion analysis system, so that smooth trajectories of displacement and velocity of the captured object can be determined. An example is provided to illustrate the applicability of the proposed algorithm.

2. PID DATA PROCESSING ALGORITHM

In order to process the data captured by a 3D motion analysis system, an algorithm which consists of three processes: proportional, integral and derivative as shown in Figure 1 is proposed. The position of a point captured by an optical or electronic means will be digitized in terms of x, y, z coordinates. These data will then be scaled by a proportional factor depending on the unit system chosen.


Image → Proportional → Integral → Derivative → Motion data output

Figure 1 Three processes of the PID algorithm

When the raw data have large variance, the integral process is required to remove the unwanted vibration. Firstly, the raw displacement data are divided into a number of regions with a time interval Δt, where Δt is equal to t_{i+1} − t_i, such that i = 0, 1, 2, …, n and t_0 = 0. The displacement function u(t) within each time interval Δt is integrated and divided by Δt to obtain a mean displacement value u_{t_i + Δt/2}, as shown in Figure 2, which, for simplicity, can be determined by using a central moving average method:

u_{t_i + Δt/2} = (1/Δt) ∫_{t_i}^{t_{i+1}} u(t) dt     (1)

Therefore, if there are n time intervals, there will also be n mean values, with coordinates (t_0 + Δt/2, u_{t_0 + Δt/2}), (t_1 + Δt/2, u_{t_1 + Δt/2}), …, (t_{n−1} + Δt/2, u_{t_{n−1} + Δt/2}).
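As a sketch of this averaging step (our own illustration, not code from the paper), each window of width Δt is reduced to one point at its midpoint:

```python
def interval_means(times, disp, dt):
    """Approximate Eq. 1 on sampled data: average u(t) over each window of
    width dt and return the midpoint coordinates (t_i + dt/2, mean)."""
    pts = []
    t0 = times[0]
    while t0 + dt <= times[-1] + 1e-12:
        window = [u for t, u in zip(times, disp) if t0 <= t < t0 + dt]
        if window:
            pts.append((t0 + dt / 2, sum(window) / len(window)))
        t0 += dt
    return pts

# Noisy ramp sampled at 60 frames/s for 2 s
ts = [i / 60 for i in range(121)]
us = [t + 0.01 * (-1) ** i for i, t in enumerate(ts)]
means = interval_means(ts, us, dt=0.1)
print(len(means), "mean points")
```

The alternating ±0.01 noise averages out within each window, leaving points that track the underlying ramp.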

Figure 2. Spline-fitted curves of displacement u versus time t


Now, the following n + 1 quadratic splines, as shown in Figure 2, are fitted through the previously determined n mean values as well as the initial data point (t_0, u_0) and the final data point (t_n, u_n) of the data set.

u(t) = a_0 t² + b_0 t + c_0,   t_0 < t < t_0 + Δt/2     (2)

u(t) = a_1 t² + b_1 t + c_1,   t_0 + Δt/2 < t < t_1 + Δt/2     (3)

u(t) = a_2 t² + b_2 t + c_2,   t_1 + Δt/2 < t < t_2 + Δt/2     (4)

………

u(t) = a_{n−1} t² + b_{n−1} t + c_{n−1},   t_{n−2} + Δt/2 < t < t_{n−1} + Δt/2     (5)

u(t) = a_n t² + b_n t + c_n,   t_{n−1} + Δt/2 < t < t_n     (6)

where a_i, b_i and c_i are the coefficients of these equations and are the unknowns to be determined. As each quadratic spline passes through two consecutive data points, this condition produces the following 2(n + 1) equations.

u(t_0) = a_0 t_0² + b_0 t_0 + c_0     (7)

u(t_0 + Δt/2) = a_0 (t_0 + Δt/2)² + b_0 (t_0 + Δt/2) + c_0     (8)

………

u(t_{n−1} + Δt/2) = a_n (t_{n−1} + Δt/2)² + b_n (t_{n−1} + Δt/2) + c_n     (9)

u(t_n) = a_n t_n² + b_n t_n + c_n     (10)

The first derivatives of two consecutive quadratic splines are continuous at their interior point. For the first interior point, differentiating Eqs.(2) and (3) gives

2a_0 t + b_0 = 2a_1 t + b_1     (11)

Similarly, at the other interior points, the following relations can be found:

2a_1 t + b_1 = 2a_2 t + b_2     (12)

…………

2a_{n−1} t + b_{n−1} = 2a_n t + b_n     (13)


Therefore, there are n such equations, and the total number of equations becomes 3n + 2 (i.e. n + 2(n + 1)). Assuming that the first spline is linear, that is, a_0 = 0, solving the resulting 3n + 3 equations with 3n + 3 unknowns (i.e. a_i, b_i, c_i, where i = 0, 1, 2, …, n) determines the splines at different times t.
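The 3n + 3 system above can be assembled and solved directly; the following sketch (our own illustration) uses NumPy and forces a_0 = 0 as in the text. Segment j spans two consecutive fitted points.

```python
import numpy as np

def quadratic_splines(x, u):
    """Fit quadratic splines u(t) = a_j t^2 + b_j t + c_j through the points
    (x[0], u[0]), ..., (x[-1], u[-1]), with first-derivative continuity at
    interior knots and the first spline forced linear (a_0 = 0).

    Returns an (m, 3) array of rows (a_j, b_j, c_j), m = len(x) - 1 segments."""
    m = len(x) - 1
    A = np.zeros((3 * m, 3 * m))
    rhs = np.zeros(3 * m)
    row = 0
    for j in range(m):                    # each segment passes through its endpoints
        for xv, uv in ((x[j], u[j]), (x[j + 1], u[j + 1])):
            A[row, 3 * j: 3 * j + 3] = [xv ** 2, xv, 1.0]
            rhs[row] = uv
            row += 1
    for j in range(1, m):                 # derivative continuity at interior knots
        A[row, 3 * (j - 1): 3 * j] = [2 * x[j], 1.0, 0.0]
        A[row, 3 * j: 3 * j + 3] = [-2 * x[j], -1.0, 0.0]
        row += 1
    A[row, 0] = 1.0                       # a_0 = 0: the first spline is linear
    return np.linalg.solve(A, rhs).reshape(m, 3)

coeffs = quadratic_splines([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 9.0])
print(coeffs)
```

The velocity of Equation 14 then follows from each row as v(t) = 2 a_j t + b_j.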

For the last stage of the algorithm, each spline is then differentiated to determine the velocity of the captured point on the object at different times t:

v(t) = du(t)/dt = 2a_i t + b_i     (14)

which holds piecewise over t_0 < t < t_0 + Δt/2 (spline 0), over t_i + Δt/2 < t < t_{i+1} + Δt/2 for i = 0, 1, 2, …, n − 2 (spline i + 1), and over t_{n−1} + Δt/2 < t < t_n (spline n).

3. ILLUSTRATIVE EXAMPLE

In order to verify the proposed algorithm, a simple pendulum experiment was carried out, and the proposed algorithm was implemented to determine the displacement and velocity of the captured steel ball over one cycle.

3.1 Experimental Details

The setup for the simple pendulum experiment is shown in Figure 3. A 78-cm tall wooden rack, held down by 10 kg barbell weights at the top, was placed in the centre of a laboratory equipped with a Vicon 3D system (Model 370, Oxford Metrics Ltd., UK). A steel ball with a weight of 228.5 g and a radius of 2 cm was attached to an inelastic string 61 cm long. The other end of the string was attached to a fixed point on the rack to form a simple pendulum. The steel ball was covered with a reflective material (Scotchlite, 3M, USA) so that it could be recognized by the 3D system.

Figure 3. Setup for the simple pendulum experiment

Figure 4. Schematic setup for the simple pendulum experiment


Under the six cameras of the Vicon system, the simple pendulum was started manually and the angle of swing was kept at 10 degrees in each direction (i.e. θ_0 = 10°). Using the system at a sample rate of 60 frames per second, the x, y and z coordinates of the steel ball, as shown in Figures 3 and 4, were captured. Assuming that the swinging of the ball occurred in the x-z plane only, the incremental displacement Δu of the steel ball from time t_i to t_{i+1} was calculated by the following simple expression:

Δu_{t_{i+1}} = √[(x_{t_{i+1}} − x_{t_i})² + (z_{t_{i+1}} − z_{t_i})²]

where i = 0 to p − 2, in which p is the total number of coordinate points. Δu is taken to be positive when the steel ball moves from left to right, and negative in the opposite direction.
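A minimal sketch of this computation (our own, with hypothetical coordinate lists) applies the expression frame by frame and attaches the sign convention:

```python
import math

def incremental_displacements(xs, zs):
    """Frame-to-frame displacement in the x-z plane, signed positive when x
    increases (left-to-right motion), per the expression above."""
    out = []
    for i in range(len(xs) - 1):
        step = math.hypot(xs[i + 1] - xs[i], zs[i + 1] - zs[i])
        out.append(step if xs[i + 1] >= xs[i] else -step)
    return out

print(incremental_displacements([0.0, 3.0, 1.0], [0.0, 4.0, 1.0]))
```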

3.2 Implementation details and results

Following the simple pendulum experiment, the experimental incremental displacement of the steel ball for one cycle is plotted in Figure 5. In order to obtain a result with high noise immunity, the proposed algorithm was then applied.

Firstly, the displacement of the ball at each time was determined by summing the corresponding incremental displacements Δu up to that time, and the resulting data were then divided into a number of data sets. In order to determine the time interval for each data set, a ten-to-one rule could be used. The time interval for each data set should not be greater than 0.15 seconds, and was selected to be 0.08 ~ 0.11 seconds in order to obtain a mean value at each turning point. Secondly, the central moving average method was used to determine a mean displacement value u_{Δt/2} for each data set. For example, if there are m + 1 data points in a data set over the interval [t_0, t_m], the mean of this data set can be determined by

u_{Δt/2} = (1/(m + 1)) Σ_{i=0}^{m} u_{t_i}

The mean values of the other data sets can be evaluated similarly. Thus, the original large amount of displacement data with high variance could be represented by a small number of data points (in this case, 18 points), as shown in Figure 6.


Figure 5. Experimental incremental displacement Δu of the steel ball at different times

Implementing the "integral" process of the proposed algorithm, all the data points shown in Figure 6 were connected using 17 spline curves, giving the form shown in Figure 7. For example, the equations of spline curves 1 and 2 are u(t) = a_0 t² + b_0 t + c_0 and u(t) = a_1 t² + b_1 t + c_1 respectively, where the coefficients of these equations are listed in Table 1.

By applying the "derivative" part of the algorithm, all segments of the curve in Figure 7 were differentiated to evaluate the velocity of the ball at different times. It can be observed from Figure 8 that the large fluctuations in the velocities of the ball have been removed by using the PID algorithm. Moreover, the estimated velocities of the ball agree very well with the theoretical results, indicating that the experimental work was carried out correctly. The theoretical velocity v_T of the steel ball in a simple pendulum test is given by:

v_T = L ω θ_0 sin ωt

where ω is the angular frequency of the simple pendulum and L is the length of the string attached to the steel ball.
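The paper does not quote ω explicitly; for a small-angle simple pendulum it is √(g/L), so a quick sketch (our own) of the theoretical curve with the stated L = 0.61 m and θ_0 = 10° is:

```python
import math

def pendulum_velocity(t, L=0.61, theta0=math.radians(10.0), g=9.81):
    """Small-angle theoretical velocity v_T = L*omega*theta0*sin(omega*t),
    assuming omega = sqrt(g/L)."""
    omega = math.sqrt(g / L)
    return L * omega * theta0 * math.sin(omega * t)

# Peak speed over two seconds of motion
peak = max(abs(pendulum_velocity(i * 0.001)) for i in range(2000))
print(f"peak theoretical speed = {peak:.3f} m/s")
```

The resulting peak of roughly 0.43 m/s is of the same order as the velocity axis of Figure 8.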


Figure 6. Mean displacement value u_{Δt/2} of the steel ball at each selected time interval

Table 1. Coefficients of the 17 quadratic spline curves shown in Figure 7

i   | 0       | 1       | 2       | 3       | 4       | 5       | 6       | 7       | 8
a_i | 0       | 1.0175  | 0.4064  | 0.5822  | -0.0988 | 0.211   | -0.7416 | -0.3358 | -1.1973
b_i | 0.04    | -0.0282 | 0.1144  | 0.0383  | 0.4695  | 0.2218  | 1.1425  | 0.6692  | 1.8465
c_i | -0.1048 | -0.1036 | -0.1119 | -0.1037 | -0.172  | -0.1224 | -0.3449 | -0.2069 | -0.6091

i   | 9       | 10      | 11      | 12      | 13      | 14      | 15      | 16
a_i | -0.4475 | -1.0853 | -0.1117 | -0.4437 | 0.4742  | 0.0723  | 1.1324  | -1.3611
b_i | 0.6719  | 1.7966  | -0.1161 | 0.5977  | -1.5442 | -0.5259 | -3.4411 | 3.9976
c_i | -0.1491 | -0.6467 | 0.2947  | -0.089  | 1.1604  | 0.5155  | 2.5198  | -3.0283


Figure 7. The 17 spline-fitted curves for the displacement of the steel ball for one cycle

Figure 8. Comparison among the PID-processed, experimental and theoretical velocities of the steel ball for one cycle


4. CONCLUSION

A PID data processing algorithm has been proposed to enhance the noise immunity of a digital motion analysis system so that a smooth trajectory of the captured object can be produced in the form of spline-fitted displacement curves. Through the simple pendulum experiment, the algorithm was successfully implemented to illustrate its applicability. With the integral process of the algorithm, the unwanted vibration in the raw displacement data of the steel ball could be eliminated, so that only a set of mean values for the corresponding time intervals was generated using the central moving average method. Thus, the displacement of the simple pendulum could be represented by a smooth curve in which the mean-value data points were connected by spline curves. The velocities of the simple pendulum computed from the raw data without data processing fluctuate significantly with time. By using the derivative process of the algorithm, the estimated velocity curve of the simple pendulum has high noise immunity compared with the raw data. The proposed algorithm may also be applied to determine more complex displacement curve shapes of captured moving objects.

ACKNOWLEDGEMENT The authors would like to thank Prof. Gabriel Ng from the Department of Rehabilitation Sciences of The Hong Kong Polytechnic University for providing us with the Vicon motion analysis system and his advice.

REFERENCES
Bartels, R.H., Beatty, J.C. and Barsky, B.A. (1987) An Introduction to Splines for Use in Computer Graphics and Geometric Modeling. Morgan Kaufmann.
Brown, M. and Stanhope, S. (1995) "Preventing implementation and tracking error becoming clinical judgment errors in functional movement analysis", Gait & Posture, Vol. 3, p. 88.
Durward, B.R., Baer, G.D. and Rowe, P.J. (1999) Functional Human Movement: Measurement and Analysis. Butterworth-Heinemann, Oxford.
Ehara, Y., Fujimoto, H., Miyazaki, S., Tanaka, S. and Yamamoto, S. (1995) "Comparison of the performance of 3D camera systems I", Gait & Posture, Vol. 3, pp. 166-169.
Elliott, B.C., Alderson, J.A. and Denver, E.R. (2007) "System and modelling errors in motion analysis: implications for the measurement of the elbow angle in cricket bowling", Journal of Biomechanics, in press.
Liptak, B. (1995) Instrument Engineers' Handbook: Process Control. Chilton Book Company, Radnor, Pennsylvania.
Richards, J. (1999) "The measurement of human motion: a comparison of commercially available systems", Human Movement Science, Vol. 18, pp. 589-602.


Notes for Contributors Pre-review Manuscript Preparation The manuscript must be submitted in English, by E-mail or in triplicate by mail. Both paper format or electronic format should be as follows. Type on one side of A4 size (210mm x 297 mm) paper, double-spaced, with 25.4 mm margins left and right, top and bottom. Follow the standard composition given below and begin each component on a new page, with the page number typed in the upper, right hand corner of each page. I.

Title page: Page 1 should include: (a) the title of the article (Bold 14 point type size, Times New Roman); (b) the author’s full name [first name, middle initial(s), surname] (Capital 12 point type size, Times New Roman); (c) affiliations [department (if any), institution, city, state or country where the research is done] (Italic 10 point type size, Times New Roman); (d) acknowledgment of grant support and individuals who have directly helped in the study (10 point type size, Times New Roman). Abstract and Keywords: Page 2 should include the title of the article followed by an abstract not exceeding 200 words. The abstract should state the purpose of the study, basic procedures, most important findings, and principal conclusions, with an emphasis on the new aspects of the study. On a line below the abstract include no more than 5 keywords relating to the main topics of the paper for indexing. Main Text: The paper should be reasonably subdivided into sections and, if necessary, into subsections. Writing should be concise and in one column format throughout. Illustrations should be prepared on separate sheets and submitted with the manuscript. Figure legends, table headings, and table footnotes should be typed on separate sheets and appended to the manuscript. References: All sources cited in the text should be included in the reference list. A reference should include author’s name(s), year of publication, title, journal (or book and publisher), volume, issue number and page numbers. Journal Reference: Courtney, B.J. and Wong, H.J. 1993, Effects of cognitive load on a single-target detection task, Perceptual and Motor Skills, vol. 77, no. 3, pp. 515-533. Book Reference: Lo, H.H. and Law, H.C. 1996, Modern Engineering Systems, City Publishers, Hong Kong. Tables: Tables should be of a reasonable size. Each table and every column should be provided with an explanatory heading, with units of measure clearly indicated. 
The same data should not be reproduced in both tables and figures. Footnotes to a table should be indicated by superscript, lower case letters. Tables and illustrations (along with their footnotes or captions) should be completely intelligible without reference to the text. Figures: All graphs, photographs, diagrams and drawings should be referred to as Figures, and numbered consecutively in Arabic numerals. II. Where to Mail Pre-review Manuscript All manuscripts will be anonymously reviewed to evaluate the suitability and originality of the contents for publication. Please submit your manuscript by mail or email to the Editor-in-Chief, Dr. C.Y. Tang, Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China. E-mail: [email protected]. III. Post-review Manuscript Preparation and Reprints: Please provide complete postal address, E-mail address, telephone and fax numbers. The manuscript should be revised with reference to the reviewers’ suggestions. Before publication the author will receive page proofs of their article. After publication, copies of the article as well as five complimentary copies of the issue in which the article appears will be sent to the article’s principal or sole author. Fees: Submission Fee: HK$500 / US$80

Page Charges: HK$50 per page / US$8 per page

Subscriptions The Industrial Engineering Research is normally published two times a year. Annual Subscription Fee: HK$200 per copy / US$40 per copy Subscribers outside Hong Kong are requested to pay by bank draft payable to the Institute of Industrial Engineers (Hong Kong) Ltd. All correspondence regarding subscriptions should be sent to: The Secretary, The Industrial Engineering Research, The Institute of Industrial Engineers (Hong Kong) Ltd., G.P.O. Box 6635, Hong Kong.
