MOVING INTEGRATED PRODUCT DEVELOPMENT TO SERVICE CLOUDS IN THE GLOBAL ECONOMY
Advances in Transdisciplinary Engineering

Advances in Transdisciplinary Engineering (ATDE) is a peer-reviewed book series covering developments in the key application areas of product quality, production efficiency and overall customer satisfaction. ATDE will focus on theoretical, experimental and case-history-based research, and its application in engineering practice. The series will include proceedings and edited volumes of interest to researchers in academia, as well as professional engineers working in industry.

Editor-in-Chief
Josip Stjepandić, PROSTEP AG, Darmstadt, Germany

Advisory Board
Richard Curran, TU Delft, The Netherlands
Mike Sobolewski, TTU, Texas, USA
Jianzhong Cha, Beijing Jiaotong University, China
Shuo-Yan Chou, Taiwan Tech, Taiwan, China
Amy Trappey, NTUT, Taiwan, China
Cees Bil, RMIT University, Australia
John Mo, RMIT University, Australia
Kazuo Hiekata, University of Tokyo, Japan
Milton Borsato, Federal University of Technology, Paraná-Curitiba, Brazil
Parisa Ghodous, University of Lyon, France
Rajkumar Roy, Cranfield University, UK
Wim J.C. Verhagen, TU Delft, The Netherlands
Wensheng Xu, Beijing Jiaotong University, China
Volume 1
ISSN 2352-751X (print) ISSN 2352-7528 (online)
Moving Integrated Product Development to Service Clouds in the Global Economy

Proceedings of the 21st ISPE Inc. International Conference on Concurrent Engineering, September 8–11, 2014

Edited by

Jianzhong Cha, Beijing Jiaotong University, China
Shuo-Yan Chou, Taiwan Tech, Taiwan, China
Josip Stjepandić, PROSTEP AG, Germany
Richard Curran, TU Delft, The Netherlands
and
Wensheng Xu, Beijing Jiaotong University, China

Amsterdam • Berlin • Tokyo • Washington, DC
© 2014 The authors and IOS Press. This book is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.

ISBN 978-1-61499-439-8 (print)
ISBN 978-1-61499-440-4 (online)
Library of Congress Control Number: 2014949683

Publisher
IOS Press BV
Nieuwe Hemweg 6B
1013 BG Amsterdam
Netherlands
fax: +31 20 687 0019
e-mail: [email protected]

Distributor in the USA and Canada
IOS Press, Inc.
4502 Rachael Manor Drive
Fairfax, VA 22032
USA
fax: +1 703 323 3668
e-mail: [email protected]
LEGAL NOTICE
The publisher is not responsible for the use which might be made of the following information.

PRINTED IN THE NETHERLANDS
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License.
Preface

This book of proceedings contains the papers peer-reviewed and accepted for the 21st ISPE Inc. International Conference on Concurrent Engineering, held at Beijing Jiaotong University, China, September 8–11, 2014. It is the first volume of the newly introduced series "Advances in Transdisciplinary Engineering", which will publish the proceedings of the CE conference series. The CE conference series is organized annually by the International Society for Productivity Enhancement (ISPE Inc.) and constitutes an important forum for international scientific exchange on concurrent and collaborative enterprise engineering. These international conferences attract a significant number of researchers, industry experts and students, as well as government representatives, who are interested in the recent advances in concurrent engineering research and applications. Developed in the 1980s, the CE approach is based on the concept that different phases of a product life cycle should be conducted concurrently and initiated as early as possible within the Product Creation Process (PCP), including the implications within the extended enterprise. The main goal of CE is to increase the efficiency of the PCP and to reduce errors in the late phases, while taking full-lifecycle and through-life operational considerations into account. In the past decades, CE has become the substantive basic methodology in many industries (automotive, aerospace, machinery, shipbuilding, consumer goods, process industry, environmental engineering) and has also been adopted in the development of new services and service support. The initial basic CE concepts have matured and become the foundations of many new ideas, methodologies, initiatives, approaches and tools. Generally, the current CE focus concentrates on enterprise collaboration and its many different elements, from integrating people and processes to very specific complete multi-, inter- and transdisciplinary solutions.
Current research on CE is again driven by many factors, such as increased customer demands, globalization, (international) collaboration and environmental strategies. The successful application of CE in the past also opens the perspective for future applications, such as overcoming natural catastrophes and sustainable mobility concepts with electric vehicles. The CE2014 Organizing Committee identified 27 thematic areas within CE and launched a Call for Papers accordingly, with submissions received from all continents of the world. The conference is entitled "Moving Integrated Product Development to Service Clouds in the Global Economy". This title reflects the variety of processes and methods which influence modern product creation. Finally, the submissions as well as invited talks were collated into 12 streams led by outstanding researchers and practitioners. The proceedings contain 88 peer-reviewed papers by authors from 22 countries and two invited keynote papers. These papers range from the theoretical and conceptual to the strongly pragmatic, including industrial best practice. The involvement of more than 10 companies from many industries in the presented papers gives special importance to this conference.
This book on 'Moving Integrated Product Development to Service Clouds in the Global Economy' is directed at three constituencies: researchers, design practitioners, and educators. Researchers will benefit from the latest research results and knowledge in product creation processes and related methodologies. Engineering professionals and practitioners will learn from the current state of the art in concurrent engineering practice, new approaches, methods, tools and their applications. The educators in the CE community disseminate and learn from the latest advances and methodologies for engineering curricula, and the community also encourages young educators to bring new ideas into the field. Part 1 of the proceedings comprises the keynotes, while Part 2, entitled "Product Lifecycle Management (PLM)", provides an advanced overview of new research and development in product lifecycle management from many industry sectors. Part 3 contains the most contributions and outlines the importance of Knowledge-Based Engineering (KBE) within CE: which kinds of methods to develop, and what the general approach in the product creation process is for capturing and using this knowledge. In Part 4, a variety of cloud approaches in manufacturing and service is highlighted, while Part 5 addresses novel 3D printing applications. Part 6 focuses on design methods in the context of CE, while Part 7 addresses specific educational methods and achievements within CE. Part 8 illustrates a number of key topics in the simulation of complex systems, and Part 9 deals with the broad variety of use cases for the systems engineering of complex products. Finally, Part 10 highlights various aspects of services as innovation and science with manifold applications, Part 11 recalls sustainability as a basic requirement of engineering activity, and Part 12 comprises recent research on open innovation in CE, together with some associated strategic directives.
We acknowledge the high-quality contributions of all authors to this book, and the work of the members of the International Program Committee who assisted with the blind triple peer review of the original papers submitted and presented at the conference. You are sincerely invited to consider all of the contributions made by this year's participants through the CE2014 papers collated into this book of proceedings, in the hope that they will further inspire your work and your ideas on new approaches to sustainable product development in a multidisciplinary environment within the ISPE Inc. community.

Jianzhong Cha, General Chair, Beijing Jiaotong University, China
Shuo-Yan Chou, Co-General Chair, Taiwan Tech, Taiwan, China
Josip Stjepandić, Program Chair, PROSTEP AG, Germany
Richard Curran, Co-Program Chair, TU Delft, The Netherlands
Wensheng Xu, Secretary General, Beijing Jiaotong University, China
Conference Organization

Program Committee

General Chairs:
Jianzhong Cha, Beijing Jiaotong University
Shuo-Yan Chou, Taiwan Tech

Program Chairs:
Josip Stjepandić, PROSTEP AG
Richard Curran, TU Delft
Liping Fang, Ryerson Polytechnic University

Local Chairs:
Jianyong Li, Beijing Jiaotong University
Zhiming Liu, Beijing Jiaotong University

ISPE Steering Committee

Richard Curran, TU Delft, The Netherlands (ISPE Inc. President)
Mike Sobolewski, TTU, Texas, USA (ISPE Inc. Vice President)
Georg Rock, Trier University of Applied Science, Germany
Essam Shehab, Cranfield University, UK
Jianzhong Cha, Beijing Jiaotong University, China
Shuo-Yan Chou, Taiwan Tech, Taiwan, China
Josip Stjepandić, PROSTEP AG, Germany
Amy Trappey, NTUT, Taiwan, China
Shuichi Fukuda, Stanford University, USA
Cees Bil, RMIT University, Australia
Chun-Hsien Chen, Nanyang Technological University, Singapore
Eric Simmon, NIST, USA
Fredrik Elgh, Jönköping University, Sweden
John Mo, RMIT University, Australia
Jerzy Pokojski, SIMR, Poland
Kazuo Hiekata, University of Tokyo, Japan
Milton Borsato, Federal University of Technology, Paraná-Curitiba, Brazil
Parisa Ghodous, University of Lyon, France
Ricardo Gonçalves, UNINOVA, Portugal
Geilson Loureiro, INPE, Brazil
Ahmed Al-Ashaab, Cranfield University, UK
Nel Wognum, Wageningen University, The Netherlands
Rajkumar Roy, Cranfield University, UK
International Program Committee

Carlos Agostinho, UNINOVA, Portugal
Ahmed Al-Ashaab, Cranfield University, UK
Mushtak Al-Atabi, Taylor's University, Malaysia
Ronald Beckett, University of Western Sydney, Australia
Alain Biahmou, EDAG GmbH & Co. KGaA, Germany
Cees Bil, RMIT, Australia
Volker Böß, University of Hannover, Germany
Milton Borsato, Federal University of Technology – Paraná, Brazil
Osíris Canciglieri Junior, Pontifical Catholic University of Paraná, Brazil
Jianzhong Cha, Beijing Jiaotong University, China
Chun-Hsien Chen, NTU, Singapore
Xin Chen, Guangdong University of Technology, China
Ming-Chuan Chiu, National Tsing Hua University, Taiwan, China
Shuo-Yan Chou, Taiwan Tech, Taipei, China
Adina Georgeta Cretan, "Nicolae Titulescu" University of Bucharest, Romania
Richard Curran, TU Delft, The Netherlands
Evelina Dineva, German Aerospace Center (DLR), Germany
Jože Duhovnik, University of Ljubljana, Slovenia
Mingcheng E, Beijing Jiaotong University, China
Fredrik Elgh, Jönköping University, Sweden
Daniela Faas, Harvard University, USA
Liping Fang, Ryerson Polytechnic University, Canada
Catarina Ferreira Da Silva, LIRIS, University of Lyon 1, France
Alain-Jérôme Fougères, Université de Technologie de Belfort-Montbéliard, France
Shuichi Fukuda, Stanford University, USA
Giuliani Garbi, College Anhanguera of São José, Brazil
Parisa Ghodous, University of Lyon, France
Ricardo Goncalves, UNINOVA, Portugal
Kazuo Hiekata, University of Tokyo, Japan
Masato Inoue, Meiji University, Japan
Teruaki Ito, University of Tokushima, Japan
Zengqiang Jiang, Beijing Jiaotong University, China
Roger Jiao, Georgia Tech, USA
Joel Johansson, Jönköping University, Sweden
Leonid Kamalov, Ulyanovsk State Technical University, Russia
Milan Kljajin, University of Osijek, Croatia
Ben Koo, Tsinghua University, China
Pisut Koomsap, Asian Institute of Technology, Thailand
Bohu Li, Beihang University, China
Nan Li, Beijing Technology and Business University, China
Yuanyuan Liu, Shanghai University, China
Geilson Loureiro, INPE, Brazil
Wenfeng Lu, National University of Singapore, Singapore
Zoran Lulić, University of Zagreb, Croatia
Nils Macke, ZF AG, Germany
Ivan Mahalec, University of Zagreb, Croatia
Ming Mao, China North Vehicle Research Institute, China
Nozomu Mishima, AIST, Japan
Maria Lucia Miyake Okumura, Pontifical Catholic University of Paraná, Brazil
John Mo, RMIT University, Australia
Egon Ostrosi, Université de Technologie de Belfort-Montbéliard, France
João Adalberto Pereira, COPEL Companhia Paranaense de Energia, Brazil
Margherita Peruzzini, Università Politecnica delle Marche, Italy
Jerzy Pokojski, SIMR, Poland
Jianjun Qin, Beijing University of Construction Engineering and Architecture, China
Georg Rock, Trier University of Applied Science, Germany
Rajkumar Roy, Cranfield University, UK
Joao Sarraipa, UNINOVA, Portugal
Essam Shehab, Cranfield University, UK
Gang Shen, Huazhong University of Science and Technology, China
Jianjun Shi, Georgia Tech, USA
Pekka Siltanen, VTT, Finland
Eric Simmon, NIST, USA
Michael Sobolewski, TTU, USA
Jing Ge Song, Beijing Jiaotong University, China
Josip Stjepandić, PROSTEP AG, Germany
Jingyu Sun, The University of Tokyo, Japan
Goran Šagi, University of Zagreb, Croatia
Blaženko Šegmanović, ThyssenKrupp AG, Germany
Derrick Tate, Xi'an Jiaotong-Liverpool University, China
Jože Tavčar, University of Ljubljana, Slovenia
Amy Trappey, National Tsing Hua University, Taiwan, China
Wim Verhagen, TU Delft, The Netherlands
Nel Wognum, Wageningen University, The Netherlands
Shijing Wu, Wuhan University, China
Wensheng Xu, Beijing Jiaotong University, China
Xun Xu, University of Auckland, New Zealand
Guofu Yin, Sichuan University, China
Xiaojia Zhao, TU Delft, The Netherlands
Yongmin Zhong, RMIT University, Australia
Xiaomin Zhu, Beijing Jiaotong University, China

Organizers

International Society for Productivity Enhancement
Beijing Jiaotong University
Past Concurrent Engineering conferences

2013: Melbourne, Australia
2012: Trier, Germany
2011: Boston, USA
2010: Cracow, Poland
2009: Taipei, Taiwan
2008: Belfast, UK
2007: São José dos Campos, Brazil
2006: Antibes-Juan les Pins, France
2005: Dallas, USA
2004: Beijing, China
2003: Madeira, Portugal
2002: Cranfield, UK
2001: Anaheim, USA
2000: Lyon, France
1999: Bath, UK
1998: Tokyo, Japan
1997: Rochester, USA
1996: Toronto, Canada
1995: McLean, USA
1994: Pittsburgh, USA
Sponsors

International Society for Productivity Enhancement
Beijing Jiaotong University
Chinese Academy of Engineering
National Natural Science Foundation of China
Chinese Mechanical Engineering Society
IOS Press
PROSTEP AG
Contents

Preface
Jianzhong Cha, Shuo-Yan Chou, Josip Stjepandić, Richard Curran and Wensheng Xu v

Conference Organization vii

Sponsors xi
Part I Keynotes

Unifying Front-End and Back-End Federated Services for Integrated Product Development
Michael Sobolewski 3

In a Network-Centric World
John C. Hsu 17

Smart Cloud Manufacturing (Cloud Manufacturing 2.0) – A New Paradigm and Approach of Smart Manufacturing
Bo Hu Li, Lin Zhang and Xudong Chai 26

Breakthrough Innovation in Higher Education
Stephen Zhi-Yang Lu 27

Concurrent Engineering with Internet of Things: An Extreme Learning Approach
Benjamin Koo, Jianzhong Cha and Shuo-Yan Chou 28

Network-Centric Manufacturing: Making It Happen
Ram D. Sriram 29
Part II Product Lifecycle Management

Product Development Model for Application in R&D Projects of the Brazilian Electricity Sector
João Adalberto Pereira, Osíris Canciglieri Júnior and Ana Maria Antunes Guimarães 33

A Proposal on a Remote Recycling System for Small-Sized E-Waste
Nozomu Mishima, Kenta Torihara, Kiyoshi Hirose and Mitsutaka Matsumoto 46

A Value Creation Based Business Model for Customized Product Service System Design
Yu-Ting Chen and Ming-Chuan Chiu 54

Composite Aircraft Components Maintenance Cost Analysis
Xiaojia Zhao, Massoud Urdu, Wim J.C. Verhagen and Richard Curran 64

Assessing the Requirements and Viability of Distributed Electric Vehicle Supply
John P.T. Mo 74

A Model for Storing and Presenting Design Procedures in a Distributed Service-Oriented Environment
Oleg Kozintsev, Alexander Pokhilko, Leonid Kamalov, Ivan Gorbachev and Denis Tsygankov 84

Life Cycle Costing for Alternative Fuels
Tim Conroy and Cees Bil 92

Case Studies for Concurrent Engineering Concept in Shipbuilding Industry
Kazuo Hiekata and Matthias Grau 102

The Sources and Methods of Engineering Design Requirement
Xuemeng Li, Zhinan Zhang and Saeema Ahmed-Kristensen 112

A Method for Identifying Product Improvement Opportunities Through Warranty Data
Marcio R. Bueno and Milton Borsato 122

A Closed-Loop PLM Model for Lifecycle Management of Complex Product
Wei Guo, Qing Zheng, Bin Zuo and Hong-yu Shao 132
Part III Knowledge-Based Engineering

A Knowledge-Based Approach for Facilitating Design of Curved Shell Plates' Manufacturing Plans
Jingyu Sun, Kazuo Hiekata, Hiroyuki Yamato, Norito Nakagaki and Akiyoshi Sugawara 143

Using Patent Co-Citation Approach to Explore Blu-Ray Technology Classifications
Yu-Hui Wang, Pin-Chen Kuo and Tzu-Han Chow 153

A Multi-Agent Approach to the Maximum Weight Matching Problem
Gang Shen and Yun Zhang 162

Model of Product Definition for Meeting the RoHS Directive
José Altair Ribeiro Dos Santos and Milton Borsato 172

Application of Knowledge-Based Engineering in the Automobile Panel Die Design
Jiafu Wen, Wei Guo and Zhenhai Wang 182

Knowledge Object – a Concept for Task Modelling Supporting Design Automation
Fredrik Elgh and Joel Johansson 192

Design Rationale Management – A Proposed Cloud Solution
Joel Johansson, Morteza Poorkiany and Fredrik Elgh 204

Semantic Modeling of Dynamic Extended Companies
Kellyn Crhis Teixeira and Milton Borsato 215

Human Expertise as the Critical Challenge in Participative Multidisciplinary Design Optimization – An Empirical Approach
Evelina Dineva, Arne Bachmann, Uwe Knodt and Björn Nagel 223

Word Segmentation Algorithm on Procedure Blueprint
Jianbin Liu, Duo Yao and LiRui Yao 233

Differentiated Contribution of Context and Domain Knowledge to Products Lines Development
German Urrego-Giraldo and Gloria Lucía Giraldo G. 239
Part IV Cloud Manufacturing and Service Clouds

A Hierarchical Method for Coupling Analysis of Design Services
Nan Li, Wensheng Xu and Jianzhong Cha 251

Intelligent Utilization of Digital Manufacturing Data in Modern Product Emergence Processes
Regina Wallis, Josip Stjepandić, Stefan Rulhoff, Frank Stromberger and Jochen Deuse 261

A Computing Resource Selection Approach Based on Genetic Algorithm for Inter-Cloud Workload Migration
Tahereh Nodehi, Sudeep Ghimire and Ricardo Jardim-Goncalves 271

Research on Software Resource Sharing Management in Collaborative Design Environment Based on Remote Virtual Desktop
Wensheng Xu, Nan Li, Hong Tang and Jianzhong Cha 278

A Lean Manufacturing Implementation Strategy and Its Model for Numerical Control Job Shop Under Single-Piece and Small-Batch Production Environment
Ao Bai, Ping Xia and Liang Zeng 287

Uncertainties in Cloud Manufacturing
Yaser Yadekar, Essam Shehab and Jorn Mehnen 297

Service-Oriented Architecture for Cloud Application Development
Hind Benfenatki, Gavin Kemp, Catarina Ferreira Da Silva, Aïcha-Nabila Benharkat and Parisa Ghodous 307

Extending BPMN for Configurable Process Modeling
Hongyan Zhang, Weilun Han and Chun Ouyang 317
Part V 3D Printing

3D Printing, a New Digital Manufacturing Mode
Chagen Hu and Guofu Yin 333

Combining 3D Printing and Electrospinning for the Fabrication of a Bioabsorbable Poly-p-dioxanone Stent
Yuanyuan Liu, Ke Xiang, Yu Li, Haiping Chen and Qingxi Hu 343

Optimization of Process Parameters for Biological 3D Printing Forming Based on BP Neural Network and Genetic Algorithm
Zhenglong Jiang, Yuanyuan Liu, Haiping Chen and Qingxi Hu 351

Preparation and Evaluation of Physical/Chemical Gradient Bone Scaffold by 3D Bio-Printing Technology
Yanan Zhang, Yuanyuan Liu, Haiping Chen, Zhenglong Jiang and Qingxi Hu 359

Opportunities and Challenges of Industrial Design Brought by 3D Printing Technology
Na Qi, Xun Zhang and Guofu Yin 369
Part VI Design Methods

Design for Assembly in Series Production by Using Data Mining Methods
Ralf Kretschmer, Stefan Rulhoff and Josip Stjepandić 379

The Analysis of Axial Slippage of the Sleeve in Circuit Breaker Operating Mechanism
Wu Shijing, Zhang Haibo, Zhang Zenglei and Zhao Wenqiang 389

Product Development Supported by MFF Application
Ivan Vidović, Mirko Karakašić, Milan Kljajin, Jožef Duhovnik and Željko Hočenski 397

City-Product Service System: a Multi-Scale Intelligent Engineering Design Approach
ZaiFang Zhang, Egon Ostrosi, Alain-Jérôme Fougères, Jean-Bernard Bluntzer, Yuan Liu, Fabien Pfaender and MonZen Tzen 405

Modularity: New Trends for Product Platform Strategy Support in Concurrent Engineering
Egon Ostrosi, Josip Stjepandić, Shuichi Fukuda and Martin Kurth 414

Managing Fluctuating Requirements by Platforms Defined in the Interface Between Technology and Product Development
Samuel Andrè, Roland Stolt, Fredrik Elgh, Joel Johansson and Morteza Poorkiany 424

Intelligent Engineering Design of Complex City: A Co-Evolution Model
Bin He, Egon Ostrosi, Fabien Pfaender, Alain-Jérôme Fougères, Denis Choulier, Bruno Bachimont and MonZen Tzen 434

A Closed-Loop Based Framework for Design Requirement Management
Zhinan Zhang, Xuemeng Li and Zelin Liu 444
Part VII Concurrent Engineering Education

Tools and Methods Stimulate Virtual Team Co-operation at Concurrent Engineering
Jože Tavčar and Jožef Duhovnik 457

Educating for Transcultural Design
Derrick Tate 467

Framework of Concurrent Design Facility for Aerospace Engineering Education Based on Cloud Computing
Dajun Xu, Cees Bil and Guobiao Cai 477

Experience with Master Theses Ran as Projects
Jean Pierre Tollenboom 485
Part VIII Simulation of Complex Systems

Simulation on the Combustion System Work Process for Internal Combustion Engine by Using KIVA-3V
Yan Shi, Yongfeng Liu, Xiaoshe Jia, Pucheng Pei, Yong Lu and Li Yi 497

A Concurrent Simulation Framework of Power Plant for Online Fuel Analysis Based on GRNN Neural Network
Jingge Song, Xiaochao Ma, Boshu He, Linbo Yan, Xiaohui Pei and Chaojun Wang 506

Stability Analysis and Optimal Design of Super-Power Hydraulic Operating System
Zenglei Zhang, Shijing Wu, Wenqiang Zhao, Qiwei Lai and Jicai Hu 515

Energy Utilization Modeling and Simulation for Pulp and Paper Manufacturing Processes
Yitao Liu and Roger J. Jiao 525
Part IX Systems Engineering

Shared Management of Product Portfolio
Giuliani Paulineli Garbi and Geilson Loureiro 537

Towards Self-Evolutionary Cyber Physical Systems
Sudeep Ghimire, Fernando Luis-Ferreira, Ricardo Jardim-Goncalves and Tahereh Nodehi 547

An FPGA Based Architecture for Concurrent System Design Applied to Human-Robot Interaction Applications
Lin Zhang, Peter Slaets and Herman Bruyninckx 555

Set-Based Concurrent Engineering for Early Phases in Platform Development
Christoffer Levandowski, Dag Raudberget and Hans Johannesson 564

A Requirements Engineering Methodology for Technological Innovations Assessment
Elsa Marcelino-Jesus, Joao Sarraipa, Carlos Agostinho and Ricardo Jardim-Goncalves 577

Standardized Approach to ECAD/MCAD Collaboration
Christian Emmer, Arnulf Fröhlich, Volker Jäkel and Josip Stjepandić 587

Interoperability of Simulation Applications for Dynamic Network Enterprises Based on Cloud Computing – Aeronautics Application
Anaïs Ottino, Parisa Ghodous, Hamid Ladjal, Behzad Shariat and Nicolas Figay 597

Configuration Optimization of Additive Manufacturing Based Supply Chain Using Simulation Approach
Yi-Hsuan Lin, Yu-Ting Chen and Ming-Chuan Chiu 607

SysML-Based Model Driven Discrete-Event Simulation
Yitao Liu, Prashanth Irudayaraj, Feng Zhou, Roger J. Jiao and Joseph N. Goodman 617

Sensors and Simulation Cooperative Module Based Information Management Command System in Mine Dynamic Disaster Prevention
Daning Wang, Andreas Rausch, Sheng Li, Hongwei Zhang and Junwen Li 627

Risk Management in the Design of Engineering as Sociotechnical Systems
Bryan R. Moser, Ralph T. Wood and Kazuo Hiekata 635
Part X Service Science and Engineering, Service Innovation

Readiness for Operation During Transition in Global Enterprise
Sergej Bondar and Josip Stjepandić 649

Enhancing Parking Service Design by Service Blueprint Approach
Ching-Hung Lee, Yu-Hui Wang and Amy J.C. Trappey 659

Old Stuff and New Combinations in Product-Service Bundling
Ronald C. Beckett 668

Integrating Music Therapy and Music Information Retrieval Using Music Pattern Analysis
Li-Wei Ko, Yu-Ting Chen and Ming-Chuan Chiu 678

Investigating the Relationship Between Therapeutic Music and Emotion: A Pilot Study on Healthcare Service
Ya-Wen Hsu, Ming-Chuan Chiu and Sheue-Ling Hwang 688

A Mass Personalization Methodology Based on Co-Creation
Wen-Pin Hsiao and Ming-Chuan Chiu 698

Stochastic Forecasting of Lumpy-Distributed Aircraft Spare Parts Demand
Wim J.C. Verhagen and Richard Curran 706

Addressing Product-Service Manufacturing in Globalised Markets: An Industrial Case Study
Margherita Peruzzini and Eugenia Marilungo 716

Automatic Detection of Harmonization Breaking in SOA-based Enterprise Networks
Carlos Raposo, Carlos Agostinho, Jose Ferreira and Ricardo Jardim-Goncalves 726

Critical Factors in Successful Performance Based Contracting Environment
Trevor Byrne and John P.T. Mo 736

Maintaining High Reliability Service in the Transformation to a Service Dominant Product Service System
Marcus Zeuschner and John P.T. Mo 746

The Modular Affordance Deployment Method for Module Clustering Process of the Integrated Service Generalized Product
Chunlong Wu, Yangjian Ji, Wenbo Lu, Guoning Qi and Xinjian Gu 756

Functional and Ecosystem Requirements to Design Sustainable Product-Service
Margherita Peruzzini, Eugenia Marilungo and Michele Germani 768
Part XI Sustainable System

Intelligent and Concurrent Analytic Platform for Renewable Energy Policy Assessment Using Open Data Resources
Danny Y.C. Wang, Amy J.C. Trappey, Charles V. Trappey and S.J. Li 781

Internet of Things for eHealth in a Physiologic and Sensorial Perspective Supported by the Cloud
Fernando Luis-Ferreira, Sudeep Ghimire and Ricardo Jardim-Goncalves 790

Selecting Renewable Energy Technology via a Fuzzy MCDM Approach
Luu Quoc Dat, Shuo-Yan Chou, Nguyen Truc Le, Evina Wiguna, Tiffany Hui-Kuang Yu and Phan Nguyen Ky Phuc 796

Short Time Forecast of Wind Speed Based on EMD and SVM
Yancai Xiao, Chunya Li and Peng Wang 806

Synchronizing Structural Health Monitoring with Scheduled Maintenance of Aircraft Composite Structures
Xi Chen, He Ren, Cees Bil and Hongwei Jiang 813
Part XII Open Innovation

Strategic Development of LTE Mobile Communication Technology Based on Patent Map Analysis
Amy J.C. Trappey, Lynn W.L. Chen, Juice Y.C. Chang and Mike F.M. Yeh 825

Research of Context Requirement Analysis Method for Customer Collaborative Design
Jianjun Qin, Yan-an Yao and Jianzhong Cha 834

Greedy Dynamic Programming for Scheduling the Advanced Reservation Parking Demands
Shuo-Yan Chou, Phan Nguyen Ky Phuc, Vincent F. Yu and Shih-Wei Lin 846

Applying Simulated Annealing to the Nurse Rostering Problem in an Emergency Department
Shih-Wei Lin, Yueh-E. Lee, Li-Chen Chen, Her-Kun Chang and Chih-Feng Lin 852

Exploration of a Concept Screening Method in a Crowdsourcing Environment
Danni Chang and Chun-Hsien Chen 861

Classification of the Open Innovation Practices: The Creativity Level
Murilo Agio Nerone, Osiris Canciglieri Junior and Yongxin Liao 871

Security Model and Analysis of Digital Products Online Logistics
Junjie Lv, Chen Zhao, Min Yu, Shuangshuang Sun and Mingke He 880

Predicting Product Adoption in Large Social Networks for Demand Estimation
Feng Zhou, Yangjian Ji and Roger J. Jiao 890
Subject Index 901

Author Index 905
Part I Keynotes
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-3
Unifying Front-end and Back-end Federated Services for Integrated Product Development

Michael SOBOLEWSKI

Air Force Research Laboratory, WPAFB, Ohio 45433, USA
Polish Japanese Institute of IT, 02-008 Warsaw, Poland
Abstract. Improvements in the design and manufacturing processes, and the related technologies that enable them, have led to significant improvements in product functionality and quality. However, further improvements in these areas are needed due to the increasing complexity of integrated product development (IPD). Introduction of a new IPD project is more complex than most people realize, and it is getting more complex all the time. Some of the complexity is due to rapidly changing and advancing technologies in underlying hardware and software, and the interplay of individual complex methods in system configurations. A strong IPD methodology, with intrinsically higher-fidelity models to actualize agile service-oriented design/manufacturing processes, is needed which can be continuously upgraded and modified. This paper presents a true service-oriented architecture that treats everything, anywhere, anytime as a service, with an innovative service-oriented process expression (front-end services called exertions) and its dynamic, on-demand actualization (back-end service providers). Domain-specific languages (DSLs) for modeling or programming or both (mogramming) are introduced, and their unifying role for front/back-end services is presented. Moving front-end process expressions, which are easily created and updated by end users, to the back-end of IPD systems is the key strategy for reducing the complexity of large-scale IPD systems. It allows process expressions in DSLs to become directly available as back-end service providers, which normally are developed by experts and software developers who constantly cope with the compatibility, software, and system integration issues that become more complex all the time. Keywords. SORCER, SOA, SOOA, exertions, var-models, service-oriented mogramming, IPD, concurrent engineering
Introduction

The increase in complexity of integrated product development (IPD) systems is directly related to sweeping changes in the structure and dynamics of human creativity, increasing competitiveness, and the interdependence of the global economic and social system. Complexity has increased and is increasing; the development of robust and optimal products and processes in today's environment of step-by-step reductions in cycle time, cost take-out, and improved performance therefore outstrips the capabilities of today's design systems, which directly impacts life cycle costs. Since complex products are designed, manufactured, and serviced at
geographically disparate locations, the need to improve IPD, with its constantly moving, changing, and adapting product data and business logic, has to be continually reevaluated [1]. Therefore, a federated service-oriented architecture, which exploits the concept of front/back-end services and permits context-aware views of composite processes, is required to seamlessly integrate relevant technologies to enable rapid instantiation and simulation-based evaluations of products and processes, with best-in-class applications, tools, and utilities as services. As a result, the methodology of product development needs to change. A strong, dependable IPD methodology with higher-fidelity models to perform the conceptual design and compute the information required for modeling and simulation analysis has to be considered, one which can be continuously upgraded and modified. Such a methodology should lead to a significant reduction in cost and development time without sacrificing any of the desired product specifications. Moreover, it should be simple to comprehend, easy to implement and easily adaptable to the diverse nature of product development activities. Transdisciplinary concurrent engineering (TCE) is the approach which provides all the above capabilities, and it can prove to be the agile service-oriented solution unifying front/back-end services. Moreover, it embodies the belief that quality is built into the product, and that quality is the result of continuous improvement of a federated service-oriented process. The TCE system envisages providing a whole range of software tools and services that will support an economical and optimal product design. In addition to a multitude of CAD/CAE/CAM tools, there will be a host of other front-end tools for programming, modeling, project management, process planning, etc. Networked product developers may use different platforms appropriate for their tasks.
In the general case, one developer can use a collaborative federation of services, and there is a need to use best-in-class engineering applications, tools, and utilities running under different operating systems in the network. On the other hand, the coordination of complex tasks involving many humans and long series of interactions requires a homogeneous operating system—a kind of service-oriented metaoperating system [2]. The metaoperating system enables distributed collaborative analysis and hierarchical design space exploration. Creative runtime front-end integration of resources used directly by a product developer is a key enabler for performing higher-fidelity designs. In the Service-ORiented Computing EnviRonment (SORCER), such a metaoperating system is called the SORCER Operating System (SOS). The SOS consists of a collection of distributed service providers as network modules for interpreting and executing front-end services, called exertions, by creating, provisioning, and managing federations of back-end service providers at runtime. Roughly speaking, the SOS, through its system services, provides connectivity, location transparency, and network-wide access in the SORCER heterogeneous service environment [3]. The service-oriented process expression (front-end) and its actualization (back-end) in the SORCER computing platform enable collaborative design across organizational boundaries and full use of all compute resources in the network, ranging from desktops to high-performance computing machines. This is the key to executing the process within the same amount of time and with the same resources as a traditional conceptual design process. The SORCER service-oriented architecture treats everything, anywhere, anytime as a service.
This paper introduces the SORCER platform, which provides a service-oriented modeling and/or programming (mogramming) environment together with its operating system that runs front-end services (process expressions) and dynamically manages corresponding back-end federations of local and remote service providers [3]. A layered view of SORCER services is depicted in Fig. 1. Three types of front/back-end unification, which allow front-end process expressions created and easily updated by end users to be moved to the back-end of the IPD system, are presented in the following three sections.
Figure 1. The layered view of basic SORCER front-end services (contexts, exertions, and models) and back-end service providers with the SORCER Operating System (SOS).
1. Unification of Service Data and Control: Service Contexts

In SORCER, data as a service (DaaS) and control as a service (CaaS) are based on the concept that data and a control strategy can be provided on demand to a service requestor or service provider regardless of the geographic or organizational separation of provider and requestor. Additionally, the emergence of the SORCER Operating System (SOS) has rendered the actual platform on which the data resides irrelevant. This approach has enabled service-oriented programming and modeling with the concept of a service context as a form of interoperable dynamic associative memory as a service. Traditionally, most enterprises have used data stored in a self-contained repository, for which software was specifically developed to access and present the data in a human-readable form. One result of this paradigm is the bundling of both the data and the software needed to interpret it into a single package. As the number of bundled software/data packages proliferated and they required interaction among one another, a next layer of interfaces was required. These interfaces, collectively known as enterprise application integration (EAI), often tended to encourage vendor lock-in, since it is generally easy to integrate applications that are built upon the same foundation
technology. The result of the combined software/data consumer packages and the required EAI middleware has been an increased amount of software for organizations to manage and maintain, simply for the use of particular data.

An exertion is a service-oriented process expression in the exertion-oriented language (EOL) that specifies a service federation created at runtime by the corresponding operating system [4]. A task exertion (or simply a task) is an elementary service provided by a single service provider. A batch task (or simply a batch) is a concatenation of elementary tasks with a shared service context. A job exertion (or simply a job) is a service composition that represents a hierarchically organized collaborative service federation (workflow). A block exertion (or simply a block) is a concatenation of exertions having a common block scope for its control flow. The exertion's data, called a service context, describes the data that tasks, batches, jobs, and blocks work on and create.

A data context, or simply a context, is a data structure that describes a service provider's namespace along with related data. Conceptually, a data context is similar in structure to a file system, where paths refer to objects instead of files. A provider's namespace (object paths) is controlled by the provider vocabulary (attributes) that describes data structures in the provider's namespace within a specified service domain of interest. A requestor submitting an exertion to a provider has to comply with that namespace, as it specifies how the context data is interpreted and used by the provider regardless of where the data comes from. A control context is a specialization of a service context for defining a control strategy for executing exertions by the SOS. A service parameter (for short, a par) is a special kind of variable, used in service contexts to refer to one of the named pieces of data passed to a service, used as either a passive value or an active value.
The active value is the value calculated by a par's procedural attachment called an invoker. A service variable (var) is a collection of triplets ⟨evaluator, getter, setter⟩, called var fidelities, where:
1. An evaluator is a service with the argument vars that define the var's dependency chain.
2. A getter is a pipeline of filters processing and returning the result of evaluation.
3. A setter assigns a value that is a quantity filtered out from the output of the current evaluator.
Collections of pars and vars within a service context constitute par-models and var-models that can be used in exertions as data or as standalone modeling service providers. Var-models are instances of the VarModel class, which subclasses the ParModel class (see Fig. 2). Therefore all functionality of service contexts and par-models is inherited by var-models. Invokers of par-models are used as procedural attachments for both par-models and var-models. In particular, var-models can be reconfigured at runtime as needed by their related pars, for example to update fidelities of vars at runtime.

In EOL, a service signature is a handle to a service provider that determines a service invocation on the provider [5]. The signature usually includes the service type, an operation of the service type, and the expected quality of service (QoS). While an exertion's signatures identify (match) the required collaborating providers in service federations, the control context defines for the SOS a strategy for how and when the signature operations are applied to the data context. The collaboration specifies a collection of cooperating providers—the exertion federation—identified by all nested signatures of
the exertion. Exertions explicitly encapsulate data, operations, and a control strategy for the collaboration. The signatures are dynamically bound to corresponding service providers—members of the exerted collaboration. A service context (either data or control) can be specified in exertions explicitly by the service requestor, or the requestor can reference (using append signatures) any combination of context providers, called contexters, that append requested runtime data, as specified by provided patterns, to the exertion's data contexts.
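The path-structured service context described above can be illustrated with a minimal sketch. This is a conceptual analogy in Python, not SORCER's Java API, and every name in it is invented for illustration:

```python
# Conceptual analogy of a SORCER data context: an associative store
# keyed by file-system-like paths (illustrative only, not the SORCER API).

class DataContext:
    def __init__(self, name):
        self.name = name
        self._data = {}                     # path -> value associations

    def put(self, path, value):
        self._data[path] = value
        return self

    def get(self, path):
        return self._data.get(path)

    def paths(self, prefix=""):
        # A provider's namespace: all paths under a given prefix.
        return sorted(p for p in self._data if p.startswith(prefix))

ctx = DataContext("arithmetic")
ctx.put("arg/x1", 20.0).put("arg/x2", 80.0)
ctx.put("result/y", None)                   # a free (unassigned) path
print(ctx.paths("arg"))                     # ['arg/x1', 'arg/x2']
```

In this reading, a requestor complying with a provider's namespace simply reads and writes the paths that the provider's vocabulary defines, regardless of where the data originates.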
Figure 2. Top-level Java interfaces of the SORCER programming and modeling environment.
All SORCER service contexts: data contexts, control contexts, and modeling contexts (par-models and var-models) implement the Context interface as the common interoperability structure for system services, application services, and third-party context-aware services (see Fig. 2). This commonality provides for context awareness in service-oriented mogramming and wide-open standardized data transfer between service requestors, providers, the SOS, and third-party services in the expanded SORCER environment. The same Context interface provides for data unification of the front-end (process expression) and back-end (process actualization) of all services (exertions and service providers). Context-aware communication and computing allows continuous adaptation of collaborative service federations to the constantly changing distributed service contexts specifying runtime data, control strategies, and service configurations. Hierarchically organized context data in exertions is the information characterizing the situation of a participating entity in the federation and providing information about its present status to federating members in the constantly changing environment. An entity is a person or service relevant to the collaboration between the users and service providers that depends on the current state of exertion contexts, including those shared and persisted in the network. Context awareness enables customization or creation of the federated
applications that match the preferences of the individual user and participating services, based on the current hierarchically organized context, for complex adaptive analyses or design space exploration problems. Exertions with signatures of the append type (DATA_APD or CONTROL_APD) can update their current contexts from collaborating data/control-oriented services or accept relevant default values at runtime. In particular, control context awareness in SORCER is related to control flow and asynchronous execution expressed by the control context of exertions. Parallel (Flow.PAR) or sequential (Flow.SEQ) control flow of job exertions, synchronous (Access.PUSH) or asynchronous (Access.PULL) access to service providers, or provisioning of new services (Provision.YES) can be updated by requestors or collaborating providers at runtime, depending on the availability and state of the currently executing service federation. On the one hand, modeling context awareness in par-oriented modeling allows for the preferred use of procedural attachments to update data/control contexts and to reconfigure var-models. On the other hand, modeling context awareness allows for preferred choices of var fidelities in var-models, adjusted at runtime to the corresponding computational resources and strategies used by var evaluators. Context awareness in SORCER can be used quite differently under different conditions and at different layers, for example for selecting preferred service providers and models in federations, proxy registration updates, the provider's currently used wire protocols, leasing resources and transaction management, network garbage collection, and security preferences.
With uniform interoperability of context-aware data and control strategies across the SORCER environment, the SOS manages complex structural and behavioral dependencies and makes its service federations self-aware and adaptive to a changing computing environment by interpreting all contexts across every service federation as active distributed associative memory. The structural (configuration) dependencies managed by the SOS refer to nested compositions of exertions. The SOS manages the behavioral (execution) dependencies as follows [3]:
1. Control contexts in exertions
2. Calling an executable code
3. Calling a method on an object
4. Calling a service
- invokers of a par-model (invocation processor)
- evaluators, getters, and setters of var fidelities (evaluation processor)
- service providers (subclasses of the ServiceProvider class)
- service beans (components of service providers)
A service container is configured for deployment/provisioning [6] by dependency injection with a corresponding deployment context specified in a configuration file. This context configures the basic properties of a provider, including its service beans, object proxy, wire protocol, thread pools, exertion space connectivity, security properties, proxy verifier, etc. A number of deployment parameters can be updated at runtime, or the whole context can be updated as needed for a provider to be reprovisioned dynamically with a new deployment configuration. A service container (ServiceProvider in Fig. 2) allows for deploying service beans that implement service types as configurable service providers. In particular, service contexts, exertions, and par/var-models are service beans, so they can be directly deployed as providers in the engineering/manufacturing application service cloud. Therefore front-end services specified in DSLs can be used to deploy back-end service providers. In
Fig. 3 the same exertion is used as a front-end service E-fe (E-fe is executed by the SOS shell), and a back-end exertion E-be (a copy of E-fe) is executed by exerting the task exertion T-fe. In that case, the provider SP6 managing the bean E-be creates the same federation as the SOS shell does for executing E-fe.
Figure 3. A front-end exertion E-fe is executed directly by the SOS shell, and another instance of it, E-be, is deployed as the service provider SP6. The provider SP6 can be exerted with a front-end task T-fe that executes the same way as direct execution of the front-end exertion E-fe.
2. Unification of Local and Remote Services: Service Signatures and Exertions

Herein, the context-aware computing philosophy defines an exertion as a mapping with the property that a single service input context is related to exactly one output context. A context is a dictionary composed of path-value pairs, i.e., associations, such that each path referring to its value appears at most once in the context. Everything that has an independent existence is expressed in EOL as an association, and relationships between associations are modeled as data contexts. Additional properties with a context path can be specified, giving more specific meaning to the value referred to by its path. The context attributes form a taxonomic tree, similar to the relationship between directories in file systems. Paths in the taxonomic tree are names of implicit exertion arguments (free variables). Each exertion has a single data context as its explicit argument. Paths of the data context form implicit domain-specific inputs and outputs used by service providers. Context input associations are used by the providers to compute output associations that are returned in the output context. The context mapping is defined by an exertion signature that includes at least the name of an operation (selector) and the service type defining the service provider. Additionally, the signature may also specify the exertion's return path, the type of returned value, and QoS. Two basic signature types are distinguished; they are created with the sig operator as follows:
1. sig(<selector>, Class [, <properties>])
2. sig(<selector>, <Interface> [, <properties>])
where Class is a Java class (for an object signature) and the service type of a net signature is a Java interface. Object signatures define local providers and net signatures define remote providers, unifying local and remote services in the same exertion. A selector of a signature (the name of an operation) may take an expanded form to indicate its data context scope by appending a context prefix after the proper selector, preceded by the # character. The part of the selector after the # character is a prefix of context paths specifying the subset of input and output paths for the prefixed signature. The provider operator returns a service provider defined by a service signature:

provider(Signature) : Object

An exertion specifies a collection of service providers, including dynamically federated providers in the network. The primary signature, marked by the SRV type, defines its primary service provider. An exertion can be used as a closure with its context containing free variables (unassigned context paths). An upvalue is a path that has been bound (closed over) with an exertion. The exertion is said to "close over" its upvalues by exerting service providers. The exertion's context binds the free paths to the corresponding paths in a scope at the time the exertion is executed, additionally extending their lifetime to at least as long as the lifetime of the exertion itself. When the exertion is entered at a later time, possibly from a different scope, the exertion is evaluated with its free paths referring to the ones captured by the closure. There are two types of exertions: service exertions and control flow exertions. The generic srv operator defines service exertions as follows:

srv({<signature>,} {<context>,}) : T

Exertions as services have hierarchically organized data contexts (properties that describe the service data), control contexts (properties that describe the service control strategy), and associated service providers known via service signatures.
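The closure behavior of exertions over free context paths parallels ordinary closures in programming languages. A hypothetical Python analogy (not SORCER code; the names are invented):

```python
# Analogy: an exertion "closes over" free context paths the way a
# closure captures free variables from an enclosing scope.

def make_exertion(scope):
    # "rate" is a free path; the returned exertion closes over it.
    def exert(amount):
        return amount * scope["rate"]       # bound at execution time
    return exert

scope = {"rate": 1.25}
e = make_exertion(scope)    # the closure extends the scope's lifetime
print(e(100.0))             # 125.0
scope["rate"] = 2.0         # a later substitution for the free path
print(e(100.0))             # 200.0, re-evaluated in the updated scope
```

As with exertion upvalues, the binding happens when the closure is entered, so a later substitution in the captured scope changes the next evaluation.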
For convenience, tasks, batches, jobs, and blocks are defined with the task, batch, job, and block operators as follows:

task(<name>, <signature>, <context>) : Task
batch(<name>, {<signature>}, <context>) : Task
job(<name> [, <signature>], <context>, {<exertion>,}) : Job
block(<name>, {<signature>, <context>, <exertion>}) : Block

A job is an exertion with a single input context and a nested composition of component exertions, each with its own input context. A job represents a mapping that describes how input associations of the job's context and component contexts relate, or interact, with output associations of those contexts. Tasks do not have component exertions but may have multiple signatures, unlike jobs, which have at least one component exertion and for which a signature is optional. A task is an elementary exertion with one signature; a batch task, or simply a batch, has multiple signatures with a single context shared by all signatures. A block is a concatenation of component exertions with a shared context that provides a block scope for all exertions in the block. There are eight interaction operators defining control flow exertions: alt (alternatives), opt (option), loop (iteration), break, par (parallel), seq (sequential), pull (asynchronous execution), and push (synchronous execution). The interaction operators opt, alt, loop, and break have control flow semantics similar to those defined in UML sequence diagrams for combined fragments.
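A hierarchical job composition with sequential and parallel control flow, as described above, can be sketched as follows. This is an illustrative Python model of the semantics, not the EOL operators themselves, and all names are invented:

```python
# Sketch: a job as a nested composition of component exertions sharing
# a context, with SEQ/PAR control flow (illustrative, not SORCER).
from concurrent.futures import ThreadPoolExecutor

def task(name, op):
    return ("task", name, op)

def job(name, flow, *components):
    return ("job", name, flow, components)

def exert(exertion, ctx):
    if exertion[0] == "task":
        _, name, op = exertion
        ctx[name] = op(ctx)               # task writes its result path
    else:
        _, name, flow, components = exertion
        if flow == "SEQ":
            for c in components:
                exert(c, ctx)
        else:  # "PAR": components must not depend on one another
            with ThreadPoolExecutor() as pool:
                list(pool.map(lambda c: exert(c, ctx), components))
    return ctx

j = job("j1", "SEQ",
        job("j2", "PAR",
            task("sum", lambda c: c["x"] + c["y"]),
            task("prod", lambda c: c["x"] * c["y"])),
        task("total", lambda c: c["sum"] + c["prod"]))
print(exert(j, {"x": 2, "y": 3})["total"])    # 11
```

The nested job j2 runs its two tasks concurrently; the outer SEQ flow then combines their outputs, mirroring how an exertion's control context steers its federation.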
Exertions explicitly encapsulate data, operations, and a control strategy for the collaboration. The SOS dynamically binds the signatures to corresponding service providers—members of the exerted federation. The exerted members in the federation collaborate transparently according to the exertion's control strategy managed by the SOS. The SOS invocation model is based on the Triple Command Pattern that defines federated method invocation (FMI) [7]. A task is an exertion with a single input context as its parameter. It may be defined with a single signature (elementary task) or multiple signatures (batch task). A batch task represents a concatenation of elementary tasks sequentially processing the same shared context. Processing the context is defined by signatures of the PRE type, executed first, then the single SRV signature, and at the end POST signatures, if any. The provider defined by the task's SRV signature manages the coordination of exerting the other batch providers. When multiple signatures exist with no type specified, by default all are of the PRE type except the last one, which is of the SRV type. The task mapping can represent a function, a composition of functions, or relations actualized by collaborating service providers determined by the task signatures. There are two ways to execute exertions: by exerting the service providers or by evaluating the exertion. An exerted service federation returns the exertion with its output data context and an execution trace available from the collaborating providers:

exert(Exertion {, entry(path, Object)}) : Exertion

where the entries define a substitution for the exertion closure.
Alternatively, an exertion, when evaluated, returns its output context or the result corresponding to the result path specified either in the exertion's SRV signature or in its data context:

value(Exertion {, entry(path, Object)}) : Object

The following getters return an exertion's signature and context:

sig(Exertion) : Signature
context(Exertion) : Context

A context of an exertion or of its component exertion is returned by the context operator:

context(Exertion [, path])

where path specifies the component exertion. The value at a context path, or a subcontext, is returned by the get operator:

get(Context, path {, path}) : Object

or assigned with the put operator:

put(Context {, entry(path, Object)}) : Context

Exertion-oriented programming (EOP [5]) is a service-oriented programming paradigm using service providers and exertions. Exertions can be created with a textual language (netlets), an API (exertlets), and user agents that create exertlets behind visual interactions. Netlets are interpreted scripts executed by the network shell nsh of the SORCER Operating System (SOS). Invoking the exert operation on an exertlet (a Java object) returns the collaborative result of the requested service federation. Netlets are executed with the SORCER network shell (nsh) the same way Unix scripts are executed with any Unix shell [8]. In EOL, service providers are uniformly accessed through two types of references: class and interface signatures. Class and interface signatures are also called object and net signatures, respectively. The former is used for specifying local services, the
latter for specifying network services. Therefore, any combination of object and net signatures can unify both local and remote services within the same exertion, which refers to the corresponding service federation managed by the SOS.
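The two execution modes above, exerting (returning the whole output context) versus evaluating (returning only the value at a result path), can be mimicked in a few lines of Python. The names below are illustrative, not SORCER's API:

```python
# Sketch of the two execution modes for an exertion-like callable.

def exert(exertion, ctx, *entries):
    for path, val in entries:    # substitution for the closure's free paths
        ctx[path] = val
    exertion(ctx)
    return ctx                   # exerting returns the full output context

def value(exertion, ctx, result_path, *entries):
    return exert(exertion, ctx, *entries)[result_path]   # only the result

def adder(ctx):                  # a stand-in for a provider's operation
    ctx["result/y"] = ctx["arg/x1"] + ctx["arg/x2"]

ctx = {"arg/x1": 20.0, "arg/x2": None}                   # arg/x2 is free
print(value(adder, ctx, "result/y", ("arg/x2", 80.0)))   # 100.0
```

Here the entry pairs play the role of entry(path, Object) substitutions closing over the exertion's free paths before execution.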
3. Unification of Procedural and Declarative Services: Exertions and Models

Computing and business processes are usually distinguished as semantically different. On the one hand, a computing process is an instance of a computer program that is being executed. A computer program, or just a program, is a sequence of instructions written to perform a specified computation with a computer. On the other hand, a business process is a collection of related, structured activities or tasks that produce a specific service or product for a particular requestor or requestors. A project can be broken into tasks, and each task can be broken down into assignments that have a defined start and end time for completion. A collection of assignments on a project puts the task under execution. Project, task, and assignment dependencies, which specify how they rely on each other to execute the project, require a control strategy. An ill-defined strategy can lead to the stagnation of a project, when many tasks cannot be started unless others have finished correctly. In service management, a service is an activity that needs to be accomplished within a defined period of time, or by a deadline, to work towards domain-specific goals. In the service-oriented approach, everything anytime anywhere is considered a service. That means that either a computer program or a business process can be uniformly organized hierarchically from services. In that approach, all steps of the process expression and its actualization are uniform services. The conventional view is that a service requestor asks for a provider's service, so services are always actions of providers (which exist at the back-end). Now, if everything is a service, then the service request is a service as well. But services are usually created and composed (aggregated) at the back-end. That approach always requires programming new service providers by experts and software developers (low-level programming of executable codes).
In SORCER, the back-end programming of composing services is usually shifted to front-end programming by end users—not professional programmers. Usually, services written at the back-end and at the front-end are quite different in style and semantics, so the term exertion refers to a front-end service program—the requestor's service. SORCER introduces the exertion-oriented language and par/var-oriented modeling languages (mogramming at the front-end, similarly to shell programming, for example, in Unix). In exertion-oriented programming, process expressions are called exertions. An exertion exerts the abilities of a service federation to perform a service (job and block exertions are business projects; batch exertions are business tasks; elementary task exertions are business assignments). In object-oriented programming everything is an object, so, for example, an instance of a class is an object and the class is an object as well. By analogy, in service-oriented programming an instance of an exertion—a service federation—is a (back-end) service, and the exertion itself is a (front-end) service. Therefore an exertion is a classifier of its service federations, just as in object-oriented programming a class is a classifier of its instances. Exertion-oriented programming is drawn primarily from the procedural semantics of a routine, but par/var-oriented programming from the semantics of a function composition of declarative service variables. In every computing process,
variables represent data elements, and the number of variables increases with the increased complexity of the problems being solved. The value of a computing variable is not necessarily part of an equation or formula as in mathematics. In computing, a variable may be employed in a repetitive process: assigned a value in one place, then used elsewhere, then reassigned a new value and used again in the same way. Handling large sets of interconnected variables for transdisciplinary computing requires adequate programming methodologies. A service parameter (for short, a par) is a special kind of variable, used in service contexts to refer to one of the named pieces of data passed to a service, used as either a passive value or an active value. The active value is the value calculated by a par's procedural attachment only when requested. Therefore, each par has an argument (value) associated with a name, such that its name is a path in the associated service context and the value of the path in the context is the par itself. However, the value of a par is to-be the result of evaluation, Evaluation#getValue(), or invocation, Invocation#invoke(Context); otherwise the par's value is as-is. The parameter Context in invoke(Context) refers to the context to be appended to the current context associated with the par, if any. The current context associated with a par defines the scope of its invoker's formal parameters. Therefore, invokers play the role of procedural attachments in service contexts and context-based models. Note that par values are defined as above in all Context types; however, values of other objects of Evaluation or Invocation types (not pars) are returned as-is in ServiceContexts, but in Modeling contexts both pars and all other objects implementing Evaluation or Invocation types are returned with to-be semantics. As-is and to-be context semantics are the major differentiators between the ServiceContext type and Modeling types (par-models and var-models [3]).
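The as-is versus to-be semantics of a par can be sketched with a procedural attachment that is evaluated only on request. Again a conceptual Python analogy with invented names, not the SORCER API:

```python
# Sketch of a par: a named piece of context data whose active value is
# computed by a procedural attachment (invoker) only when requested.

class Par:
    def __init__(self, name, value=None, invoker=None):
        self.name = name
        self._value = value       # passive value (as-is)
        self.invoker = invoker    # procedural attachment (to-be)

    def get_value(self, scope=None):
        if self.invoker is not None:
            return self.invoker(scope or {})   # to-be: evaluate on demand
        return self._value                     # as-is: return directly

x = Par("x", value=10.0)                             # passive par
y = Par("y", invoker=lambda s: s.get("x", 0.0) * 2)  # active par
scope = {"x": x.get_value()}                         # the par's current scope
print(x.get_value())        # 10.0 (as-is)
print(y.get_value(scope))   # 20.0 (to-be, via the invoker)
```

The scope dictionary here stands in for the current context associated with the par, which supplies the invoker's formal parameters.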
A service variable (var) is a collection of triplets ⟨evaluator, getter, setter⟩, where:
1. An evaluator is a service with the argument vars that define the var's dependency chain.
2. A getter is a pipeline of filters processing and returning the result of evaluation.
3. A setter assigns a value that is a quantity filtered out from the output of the current evaluator.
The var value is invalid when the current evaluator, getter, or setter is changed, the current evaluator's arguments are changed, or the value is undefined. Var-oriented programming (VOP) is a programming paradigm that uses vars to design var-oriented multifidelity compositions. A triplet ⟨evaluator, getter, setter⟩ is called a var fidelity. VOP is based on dataflow principles, where changing the value of any argument var should automatically force recalculation of the var's value. VOP promotes values defined by selectable var fidelities and their dependency chains of argument vars to become the main concept behind any processing. Evaluators, getters, and setters can be executed locally or remotely. An evaluator may use a differentiator to calculate the rates at which the var quantities change with respect to the argument vars. Multiple ⟨evaluator, getter, setter⟩ associations can be used with the same var, providing the var's fidelities. The semantics of the value, whether the var represents a mathematical function, subroutine, coroutine, or data, depends on the evaluator, getter, and setter currently used by the var. The var dependency chaining
provides the integration framework for all possible kinds of computations represented by various types of evaluators, including exertions. Var-oriented modeling (VOM) is a modeling paradigm using vars in a specific way to define heterogeneous var-oriented models, in particular large-scale multidisciplinary models including response, parametric, and optimization models. The programming style of VOM is declarative; models describe the desired results of the output vars without explicitly listing the instructions or steps that need to be carried out to achieve those results. VOM focuses on how vars connect (compose) in the scope of the model, unlike imperative programming, which focuses on how evaluators calculate. VOM represents models as a series of interdependent var connections, with the evaluators/filters between the connections being of secondary importance. A var-oriented model, or simply var-model, is an aggregation of related vars. A var-model defines the lexical scope for unique var names in the model. Three types of models have been studied to date: response, parametric [11], and optimization [12]. These models are declared in VML using the function composition syntax, possibly with EOL and the Java API to configure the vars. An input var is typically a variable representing a value being manipulated or changed, and an output var is the observed result of the input vars being manipulated. If there is a relation specifying an output in terms of given inputs, then the output is known as an "output var" and the var's inputs are its "argument vars". Argument vars can be either output or input vars. A function composition of a var is a way to combine simple argument vars to build more complicated ones. As with the composition of functions in mathematics, the result of each var is passed as the argument of the next, and the result of the last one is the result of the whole. The functions of the model correspond to fidelities of vars.
A single var can define multiple functions—multiple fidelities. The central exertion principle is that a computation can be expressed and actualized by an interconnected federation of simple, often uniform, and efficient service providers that compete with one another to be exerted for their services in the dynamically created federation. Each service provider implements multiple actions of a cohesive (well-integrated) service type, usually defined by an interface type. A service provider implementing multiple service types provides multiple services. Its service type, complemented by its QoS parameters, can identify the functionality of a provider. In the exertion-oriented language (EOL), a service exertion can be used as a closure over free variables in the exertion's data and control contexts. In exertion-oriented programming everything is a service. Exertions can be used directly as service providers as well (see Fig. 3). Par/var-oriented programming is drawn primarily from the semantics of a variable, exertion-oriented programming from the semantics of a routine. Either one can be mixed with the other depending on the direction of the problem being solved: top-down or bottom-up. The top-down approach usually starts with var-oriented modeling, focused in the beginning on the relationships of pars/vars in the model with no need to associate them with services. Later the var-model may incorporate relevant services (evaluators/getters/setters), including exertions as getters. In var-oriented modeling three types of models can be defined (response, parametric, and optimization), and in exertion-oriented programming four different types of exertions (tasks, batches, blocks, and jobs).
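The dataflow principle and selectable fidelities described in this section can be combined in one short sketch: changing an argument var invalidates its dependents, and a var can switch between evaluators at runtime. A hypothetical Python illustration, with all names invented:

```python
# Sketch of a var with multiple fidelities and dataflow invalidation
# (conceptual analogy only, not the SORCER API).

class Var:
    def __init__(self, name):
        self.name = name
        self.fidelities = {}      # fidelity name -> (evaluator, args)
        self.current = None
        self._value, self._valid = None, False
        self._dependents = []

    def add_fidelity(self, key, evaluator, args=()):
        self.fidelities[key] = (evaluator, tuple(args))
        for a in args:
            a._dependents.append(self)
        if self.current is None:
            self.current = key

    def select(self, key):        # switch fidelity at runtime
        self.current = key
        self.invalidate()

    def invalidate(self):         # dataflow: propagate to dependents
        self._valid = False
        for d in self._dependents:
            d.invalidate()

    def set(self, value):         # make the var a constant input
        self.fidelities["const"] = (lambda: value, ())
        self.select("const")

    def value(self):
        if not self._valid:
            evaluator, args = self.fidelities[self.current]
            self._value = evaluator(*[a.value() for a in args])
            self._valid = True
        return self._value

v = Var("v"); v.set(10.0)
drag = Var("drag")
drag.add_fidelity("linear", lambda x: 0.5 * x, args=[v])       # cheap
drag.add_fidelity("quadratic", lambda x: 0.25 * x * x, args=[v])
print(drag.value())          # 5.0 (linear fidelity)
drag.select("quadratic")
print(drag.value())          # 25.0
v.set(20.0)                  # invalidates drag automatically
print(drag.value())          # 100.0
```

Each (evaluator, args) pair plays the role of one var fidelity; a getter/setter pipeline could be added to each fidelity in the same style.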
M. Sobolewski / Unifying Front-End and Back-End Federated Services
15
In Fig. 4 three service clouds are depicted that collaborate in the execution of the front-end exertion E-fe. By exerting E-fe with services of the SOS cloud, the SOS shell unifies the front-end federation specified by E-fe with the federations created by back-end exertions (as evaluators) in the vars of models in the model cloud.
Figure 4. Managing transdisciplinary complexity with convergence of service-oriented modeling and programming (top: SOS service providers; bottom-left: service providers and exertion evaluators in the application cloud; bottom-right: models as service providers in exertions with local evaluators and remote evaluators in the application cloud).
4. Conclusions
Data and control interoperability is exemplified in SORCER via service contexts (DaaS and CaaS) as associative local/distributed memory defined explicitly by requestors in exertions (front-end services) or provided by contexters (back-end services). Data and control contexts return values directly, but active service contexts in the form of par- and var-models return the results of invocations or evaluations, respectively. The former provides values by procedural attachment, the latter by function compositions of var fidelities. All front-end services (contexts, models, and exertions) can be used as process expressions but also as process actualizations (service providers). Actualization of front-end services is done by dependency injection of service beans (contexts, models, exertions, and business objects exposing SORCER service types) into a generic service provider container (ServiceProvider). Moving exertions, which are easily created and updated by end users, to the back-end is the key strategy in reducing the complexity of IPD systems. It allows exertions, contexts, and models to become directly available as back-end service providers, which are normally developed by experts and software developers who constantly cope with increasingly complex compatibility, software, and system integration issues.
With object and net signatures, local and remote services can be mixed and unified within the same exertion. Just by replacing, in an exertion signature, a provider's class with its implemented interface, the service becomes remote, and vice versa. The SORCER platform integrates three programming styles: context-driven programming, exertion-oriented (procedural) programming, and par/var-oriented (declarative) modeling. The SORCER platform has been successfully deployed and tested for engineering mogramming in multiple applications at AFRL/WPAFB [3, 9, 10, 11, 12].
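The effect of swapping a provider's class for its implemented interface in a signature can be sketched roughly as follows. This is an illustrative Python analogy, not the SORCER API: `exert`, `REGISTRY`, and the service types are invented for this sketch, and the "remote" lookup is simulated by a local registry standing in for network provider discovery.

```python
# Sketch: dispatch on the signature's service type. A concrete class plays
# the role of an object signature (local call); an abstract interface plays
# the role of a net signature (lookup of a federated provider).

import inspect
from abc import ABC, abstractmethod

class AdderService(ABC):          # service interface: implies remote lookup
    @abstractmethod
    def add(self, a, b): ...

class Adder(AdderService):        # concrete provider class: implies local call
    def add(self, a, b):
        return a + b

REGISTRY = {AdderService: Adder}  # stand-in for network provider discovery

def exert(service_type, op, *args):
    """Dispatch on the signature's service type, by analogy with SORCER."""
    if inspect.isabstract(service_type):
        provider = REGISTRY[service_type]()   # "remote": look up a provider
    else:
        provider = service_type()             # "local": instantiate directly
    return getattr(provider, op)(*args)

print(exert(Adder, "add", 2, 3))          # local call via the class
print(exert(AdderService, "add", 2, 3))   # "remote" call via the interface
```

The point of the analogy is that the calling code is unchanged; only the type named in the signature decides whether execution is local or federated.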
Acknowledgment
This work was partially supported by the Air Force Research Lab, Aerospace Systems Directorate, Multidisciplinary Science and Technology Center, under contract number F33615-03-D-3307, Algorithms for Federated High Fidelity Engineering Design Optimization, and by the National Natural Science Foundation of China (Project No. 51175033).
References
[1] R.M. Kolonay, Physics-Based Distributed Collaborative Design for Aerospace Vehicle Development and Technology Assessment. In: C. Bil et al. (eds.) Proceedings of the 20th ISPE International Conference on Concurrent Engineering, IOS Press, 2013, pp. 198-215, http://ebooks.iospress.nl/publication/34808, accessed 15 March 2014.
[2] M. Sobolewski, Object-Oriented Metacomputing with Exertions. In: A. Gunasekaran, M. Sandhu (eds.) Handbook on Business Information Systems, World Scientific, Singapore, 2010.
[3] M. Sobolewski, Service Oriented Computing Platform: An Architectural Case Study. In: R. Ramanathan, K. Raja (eds.) Handbook of Research on Architectural Trends in Service-Driven Computing, Vol. 1, Chapter 10, IGI Global, Hershey, PA, 2014. doi:10.4018/978-1-4666-6178-3.
[4] M. Sobolewski, Exerted Enterprise Computing: From Protocol-Oriented Networking to Exertion-Oriented Networking. In: R. Meersman et al. (eds.) OTM 2010 Workshops, LNCS 6428, Springer-Verlag, Berlin Heidelberg, 2010, pp. 182-201.
[5] M. Sobolewski, Exertion Oriented Programming, International Journal on Computer Science and Information Systems 3(1) (2008), pp. 86-109.
[6] M. Sobolewski, Provisioning Object-oriented Service Clouds for Exertion-oriented Programming. The 1st International Conference on Cloud Computing and Services Science, CLOSER 2011, Noordwijkerhout, The Netherlands, 7-9 May 2011, SciTePress Digital Library.
[7] M. Sobolewski, Metacomputing with Federated Method Invocation. In: M.A. Hussain (ed.) Advances in Computer Science and IT, In-Tech, Rijeka, 2009, pp. 337-363.
[8] M. Sobolewski, R.M. Kolonay, Unified Mogramming with Var-oriented Modeling and Exertion-oriented Programming Languages, Int. J. Communications, Network and System Sciences 5 (2012), pp. 579-592. Published online September 2012 (http://www.SciRP.org/journal/ijcns).
[9] R.M. Kolonay, M. Sobolewski, Service ORiented Computing EnviRonment (SORCER) for Large Scale, Distributed, Dynamic Fidelity Aeroelastic Analysis & Optimization, International Forum on Aeroelasticity and Structural Dynamics, IFASD 2011, 26-30 June 2011, Paris.
[10] S.A. Burton, E.J. Alyanak, R.M. Kolonay, Efficient Supersonic Air Vehicle Analysis and Optimization Implementation using SORCER, 12th AIAA Aviation Technology, Integration, and Operations (ATIO) Conference, 17-19 September 2012, Indianapolis, Indiana (AIAA 2012-5520).
[11] M. Sobolewski, S. Burton, R. Kolonay, Parametric Mogramming with Var-oriented Modeling and Exertion-Oriented Programming Languages. In: C. Bil et al. (eds.) Proceedings of the 20th ISPE International Conference on Concurrent Engineering, IOS Press, 2013, pp. 381-390, http://ebooks.iospress.nl/publication/34826, accessed 9 March 2014.
[12] M. Sobolewski, R. Kolonay, Service-oriented Programming for Design Space Exploration. In: J. Stjepandić et al. (eds.) Concurrent Engineering Approaches for Sustainable Product Development in a Multi-Disciplinary Environment, Springer-Verlag, London, 2013, pp. 995-1007.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-17
17
In a Network-centric World John C. HSU President, Systems Management and Engineering Services Professor, California State University Long Beach (Prior) Systems Engineering Senior Manager, The Boeing Company
Abstract. Rapid advances in Information Technology have resulted in revolutionary changes in the way we run our businesses and live our daily lives. Network-Centric Operations (NCO) recognizes that interdependence (sharing information among many) is vital to an organization's future. Information must be quickly distributed, its value understood and the desired effect created. NCO occurs when systems are linked or networked by a common infrastructure, share information across geographic borders, and dynamically reallocate resources based on operational needs. NCO is an environment where seamless collaboration between networks, systems or elements within systems is possible. This network will provide decision makers with information from thousands of cloud nodes to produce a complete picture of Cloud Manufacturing, viewed as a three-dimensional chessboard. Enabling this seamless networking capability is an information and communication Strategic Architecture Reference Model (ARM). The ARM works with both legacy and future systems and platforms to ensure interoperability with nodes that follow the same set of standards. Understanding System-of-Systems Engineering (SOSE) is critical to a robust architecture development for NCO systems. There are five System-of-Systems (SoS) characteristics [1], but the dominating one is emergent behavior. This non-linear behavior will impact architecture development. We have little understanding of the principles of SOSE, especially of the dominating behavior of emergence. Proposed research subjects include Boltzmann distribution probability theory and agent-based emergent behavior models. Due to the immature development and diversified opinions, there does not exist a single unified consensus on the processes involved in System-of-Systems Engineering. Keywords. Network-Centric Operations, Cloud Manufacturing, Architecture Reference Model, Systems of Systems Engineering
Introduction
During "Operation Desert Storm", also called the "Persian Gulf War" (August 2, 1990 – February 28, 1991), the concept of Network-Centric Operations (NCO) was conceived. It was realized that the United States Department of Defense (DoD) acquired weapon systems in isolation, but it does not use weapon systems in isolation. All the systems are required to work together at the same time on the battlefield to win a war. They have to be networked to exchange information, delivering the right information in the right form to the right place at the right time for the right decision, to enable their warfighting capabilities. NCO applications are not restricted to the military. In fact, there are more commercial NCO applications than military ones, for example the Global Communications, Navigation, and Surveillance System (GCNSS), which can be used for tsunami warning and weather forecasting for the National Oceanic and
Atmospheric Administration (NOAA), the e-Enabled Airline for the integration of airplanes, people, and operations, and the enhancement of the global Next Generation (NextGen) Air Traffic Management (ATM), in a System-of-Systems (SoS) environment (everything and everyone connected). We are now in the Information Age – the second industrial revolution, according to John Chambers, the CEO of Cisco Systems, Inc. We are drowning in information [2]: immersed in data, surrounded by standalone information systems, and starving for knowledge. NCO is the solution, an environment where collaboration between platforms, systems, and devices, such as satellites, aircraft, or PDAs, is possible. For an element to become a network-centric node on a network, it needs to use a common information and communication architecture, as illustrated in Figure 1. Once an element becomes a node, it has the ability to function and collaborate with other nodes both inside and outside its resident domain. As more nodes are introduced into the environment, the network becomes more robust, much like the growth and expansion of the Internet. And like the Internet, network centric nodes depend on each other to provide multiple streams of connectivity for the movement of information from point to point.
Figure 1. Common Architecture.
One of the several benefits of a network-centric environment is increased situational awareness, whether we are operating under battle conditions, protecting our homeland, or managing an enterprise. The interoperability between networks and nodes allows decision makers to make better, more informed decisions more quickly and accurately.
1. The Commercial Applications
The non-military organizations using network centric architectures are themselves very diverse, ranging from retailing and search to manufacturing and development. The following provides an overview of how network centric approaches and architectures are used in several different non-military organizations: 'Bricks and Mortar' Retailer – The best example is Wal-Mart. Wal-Mart's architecture is described as a sensor grid (the point-of-sale devices) coupled to a transaction grid that allows the entire supply chain to anticipate and respond to evolving
marketplace needs and trends [3]. Wal-Mart's ascent to the top of the retail domain began in the 1960's and 70's with the building of its own distribution infrastructure. In the 1980's, Wal-Mart took a significant step as an early adopter of bar code technology, which gave it the information to implement consumer trend forecasting software. In 1985 the company began development of another critical piece of technology, the "Retail Link" system, which provided detailed consumer data and linked suppliers into the system. This system, which became one of the centerpieces of Wal-Mart's network centric system, took years and over $4 billion to develop [4]. It has been estimated that this $4 billion investment by Wal-Mart resulted in a tenfold investment, i.e. $40 billion, in information technology by its suppliers and was a major productivity driver for the economy in the late 90's [5].
Figure 2. Operational Node Connectivity Diagram for Wal-Mart’s Retail to Supplier Flows.
As Wal-Mart grew, its traditional top-down supply and demand control methods grew less effective. In response the company developed a sensory capability, primarily its point-of-sale, bar-code-reading registers, and a pervasive transaction grid, feeding the sensory network inputs in near real time to the company's decision makers and its large web of suppliers. The result, as explained by General Electric's Jack Welch, was that "When Wal-Mart sells a (light) bulb on the register, it goes to my factory instantly. I (General Electric) make the bulb for the one they just sold. The enterprise system is now totally compressed with information." [3]. This is the Wal-Mart Network Centric System: a greatly streamlined supply chain with supplier relationships that have been largely automated. The operational node connectivity for the resulting system is illustrated in Figure 2. The physical flow shown in the figure starts when a supplier ships stock to a Wal-Mart distribution center. At this point Wal-Mart takes ownership of the stock and usually cross-docks the shipment (puts the goods on an outbound shipment without warehousing) and sends it to the stores where it is sold to a consumer [6]. As of 12 July 2014, Wal-Mart was dealing with 100 million consumers weekly and over 36 million customer transactions daily [7].
The Wal-Mart infrastructure includes 4,253 stores, 158 distribution centers and over 2 million employees [8]. Every distribution center supports 90 to 100 stores within a 200-mile radius. On the supplier side of the system, there were over 60,000 suppliers in 2014. The total amount of money spent at Wal-Mart every hour of every day is $36 million. Wal-Mart's revenue was $421.89 billion; measured against national GDPs, it would rank as the world's 25th largest country [9]. 'eCommerce' Business – The best examples are two commercial NCO architectures where the consumer/user interactions are also networked: Amazon and eBay. These businesses typically are able to use the web to do away with the physical store infrastructure and centralize their operations, which exhibits a one-to-many pattern. The supply chain relationships with vendors may be very similar to those discussed in the Wal-Mart example. Basic enablers for this aspect of the business are again sensors (barcodes, sales info), a networked supply chain, a large information store with rapid processing functionality, and responsive technology development ability. For the online businesses, the technology development approach is typically extended to allow significant participation by developers or organizations outside of the core organization. For example, Amazon has an 'Amazon Web Services Program' that involves on the order of 30,000 to 50,000 outside developers [10] [11]. The relationship between the system and the consumers can be potentially more complex. There is no longer a store site where the consumer can physically inspect the goods. On the system's side, credit card and internet payment mechanisms such as the PayPoint system allow for relatively trustworthy payments. On the consumer's side there is the retailer's reputation and the reputation of well-known or defined brands or makes of products.
The lack of the ability to physically examine and compare goods and interact with live salespeople can still be a barrier to these systems.
2. Architecture Reference Model
Network communications are the foundation for making systems linked or networked to share information across geographic borders and dynamically reallocate resources based on operational needs. The basic task of network communications is to transmit data throughout the network, between systems, devices or computers. The data are transferred through a series of layers. Each layer can be developed and designed separately as long as the interfaces between layers are established. These layers form the Architecture Reference Model (ARM). There are many ways to describe and design the network communication layers; therefore, there are many ARMs. Over the past decade new approaches to organization and enterprise challenges have emerged using the capabilities enabled by networked systems. These networked approaches have revolutionized the means of conducting business and operations in domains across a wide spectrum of activities, and have been used by a variety of organizations to implement network centric operations in their central activities. The purpose of a reference model is to provide a common conceptual framework that can be used consistently across and between different implementations and is of particular use in modeling specific solutions. A reference model is an abstract framework for understanding significant relationships among the entities of some environment. It consists of a minimal set of unifying concepts, axioms and relationships within a particular problem domain, and is independent of specific
standards, technologies, implementations, or other concrete details. It is an abstraction that aids the understanding of real-world relationships. An ARM is intended to provide a high level of commonality, with definitions that should apply to all architecture models in that category or application. It is a description of all of the possible software/hardware components, component services (functions), and the relationships between them. It further describes how these components are put together and how they will interact. It enables the development of specific architectures using consistent standards or specifications supporting that environment. In the network communications domain, an ARM basically consists of three layers, as shown in Figure 3: the top "Application" Layer comprising application and presentation aspects; the middle "Internetwork" Layer comprising the transport and network aspects; and the lower "Hardware" Layer comprising the data link and physical connection aspects. Each layer can be decomposed into more layers; the number and types of layers depend on the specific applications or system requirements of each ARM. Well-known ARMs include the Open Systems Interconnection (OSI) Reference Model, the Global Information Grid (GIG) Reference Model, the DoD Technical Reference Model, the Strategic Architecture Reference Model, and the Net-Centric Company Architecture Reference Model, among others. They can be classified into four categories based on application:
Figure 3. Basic Layers for Architecture Reference Model.
1. Internet Application: includes the OSI Model and the Internet Model.
2. Military and Government Application: includes the Global Information Grid Model, the DoD Technical Reference Model and the Federal Enterprise Consolidated Reference Model.
3. General (Military and Commercial) Application: includes the Strategic Architecture Reference Model and The Open Group Architecture Framework (TOGAF) Technical Reference Model (TRM).
4. Company and Organizational Application: includes the Net-Centric Company Model.
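The independence of layers behind established interfaces, as described above, can be sketched in code. This is a minimal illustrative Python example; the class names and the byte-level "protocol" are invented for this sketch and do not come from any particular ARM standard:

```python
# Sketch of the three basic ARM layers: each layer depends only on the
# interface of the layer below it, so each can be developed and replaced
# independently as long as the interfaces between layers are kept fixed.

class HardwareLayer:              # data link / physical connection aspects
    def transmit(self, frame: bytes) -> bytes:
        return frame              # pretend the bits cross a physical link

class InternetworkLayer:          # transport / network aspects
    def __init__(self, lower: HardwareLayer):
        self.lower = lower
    def send(self, payload: bytes) -> bytes:
        frame = b"HDR" + payload                # add a routing/transport header
        return self.lower.transmit(frame)[3:]   # strip the header on receipt

class ApplicationLayer:           # application / presentation aspects
    def __init__(self, lower: InternetworkLayer):
        self.lower = lower
    def request(self, text: str) -> str:
        return self.lower.send(text.encode()).decode()

stack = ApplicationLayer(InternetworkLayer(HardwareLayer()))
print(stack.request("hello"))  # the data round-trips through all three layers
```

Replacing `HardwareLayer` with a different implementation of `transmit` would leave the upper layers untouched, which is the design freedom the layered ARM is meant to provide.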
The ARMs proposed for different applications basically consist of three layers: the Application Layer, the Internetwork Layer and the Hardware Layer. As shown in Figure 4, each layer is expanded with increasing levels of detail and specificity at each successive level, from an abstract decomposition of the functional units of a network node to specifications for the component pieces used to implement the functionality. The OASIS Service Oriented Architecture (SOA) Reference Model was produced by an IT industry standards body [12]. This SOA Reference Model is an abstract framework for understanding significant entities and relationships within a service-oriented environment, and for developing consistent standards/specifications supporting that environment. Because SOA makes use of the concept of web services, it is viewed as a key foundation for achieving the GIG. One of the greatest challenges to the GIG has to do with the acquisition planning, funding and scheduling of the associated business models for SOAs.
Figure 4. ARM Follows a Top-Down Design.
Net-Centric Operations (NCO) constitute the second industrial revolution. The deployment of NCO depends on network communications, whose basic task is to transmit data throughout the network. The data are transferred through a series of layers, and the ARM defines the arrangement and composition of those layers. For the details of this section, refer to Hsu [13].
3. System-of-Systems Engineering
The network centric infrastructure consists of the network, networked sensors, and networked information stores and analysis functions. Each node is an independent system. The participants, suppliers and consumers are all independent systems; therefore, by nature, NCO is a system-of-systems (SoS). To design and/or develop a systems architecture for NCO, one needs to understand the principles of System-of-Systems Engineering (SOSE). Unfortunately, the development and understanding of SOSE fundamentals is at an infant stage. An SoS exhibits emergent behavior that adds more complexity to SOSE. This non-linear emergent behavior will impact architecture development, risk management, verification and validation strategy, reliability and maintainability assessments, and trade study methodology. Unplanned, unexpected behavior is expected to emerge between component systems. Emergent behaviors are characteristics that arise from the cumulative actions and interactions of the
constituents of an SoS. An SoS displays a global complexity that cannot be adequately managed by hierarchical structures and central control. The behavior and/or performance of the SoS cannot be represented in any form that is simpler than the SoS itself; there is no simpler way to relate the functions of the parts to the functions of the whole. Traditional hierarchical functional decomposition is no longer valid at the SoS level due to the non-linear characteristics of emergent behavior; however, since emergent behavior is non-existent within each component system, hierarchical functional decomposition is still applicable at the component system level. The first challenge of architecting an SoS is at the top SoS level, incorporating the emergent behavior. The next challenge is how to flow the SoS-level architecture down to the component system level if the structures are hierarchical, especially for legacy systems. The model-based, architecture-centric approach may be one of the answers. The customer requirements in the form of CONOPS (Concept of Operations) model(s) are captured in the SoS architecture model(s). The component system architecture models can continue to capture the CONOPS at the component system level and the data flow from the upper SoS-level architecture. The subsystem architecture models can continue to capture the CONOPS at the subsystem levels (if there are any) and the data flow from the component system-level architecture. In this top-down development sequence, the layered architecture models are developed as shown in Figure 5. In a layered model, the overall SoS is broken down into different collections of services, with each collection exposing the services that are available to the layers above it in the "protocol stack". Layered architectures allow different developers to work in parallel and ensure that changes in one layer of the protocol do not interfere with operations above and below that layer.
Thus, layered architectures implement loose coupling between the services that make up the overall SoS. System design, including hardware and software, will be based on architecture models at different levels. For the details of this section, refer to the course materials [14].
Figure 5. Layered Architectures of a System-of-Systems.
4. Conclusions
The applications of network centric systems are unlimited, for both military and non-military use. There are more commercial applications, such as Wal-Mart, Amazon and eBay mentioned above, including applications to Cloud manufacturing, Service Clouds, e-Enabled airline operations, etc. The network centric systems in the commercial domain have displayed a number of interesting capabilities. These included the means to integrate extremely large groups of users into effective systems-of-systems with time constants measured on the order of hours or days. The basic network centric enablers used in these systems fall into the following categories, among others:
1. The inclusion of more and more participants into the system-of-systems, from suppliers to consumers to developers.
2. The means to address manipulation of the system by participants.
3. The incorporation of near real-time feedback and monitoring mechanisms at the system-of-systems level (eBay and Wal-Mart) for extremely large scale systems.
4. A shared situational awareness in most cases.
Specific skills required for NCO:

Systems Engineering – focuses upon the design of a complex system that involves many individual systems. The skills that have to be developed:
1. System-of-Systems Engineering.
2. Architecture Framework.
3. Requirements and Functional Modeling.
4. Other pertinent systems engineering skills.

Large-scale Systems Integration:
1. Organizational competency beyond design-oriented aspects, including production and supplier management, etc.
2. Ability to manage the many tasks that are needed to produce a solution that meets the customer's needs.

Networking Technologies:
1. Network theory.
2. Communication technology.
3. Hardware.
The understanding and research of SOSE is at a beginning phase. Proposed research subjects include the Boltzmann distribution probability approach, agent-based emergent behavior models, statistical distributions, optimization of the interoperations of network systems, and other methods. Due to the immature development and
diversified opinions, there does not exist a single unified consensus for processes involved in System-of-Systems Engineering.
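The kind of agent-based emergent behavior model proposed above can be illustrated with a toy simulation. This is an invented sketch, not a method from the paper: each agent follows a single local rule (move its opinion toward the average of a randomly met neighbor), yet the population converges to consensus, a global property that no individual agent's rule mentions.

```python
# Toy agent-based sketch of emergence: local pairwise averaging produces
# global consensus, a property absent from any single agent's behavior.

import random

random.seed(42)
N, STEPS = 50, 2000
opinions = [random.uniform(0.0, 1.0) for _ in range(N)]
start_spread = max(opinions) - min(opinions)

for _ in range(STEPS):
    i, j = random.sample(range(N), 2)      # two randomly interacting agents
    mid = (opinions[i] + opinions[j]) / 2  # the only rule any agent follows
    opinions[i] = opinions[j] = mid

end_spread = max(opinions) - min(opinions)
print(f"spread: {start_spread:.3f} -> {end_spread:.6f}")  # shrinks toward zero
```

The interesting point for SOSE is that the convergence cannot be read off from the rule of any one agent; it only appears at the level of the whole population, which is exactly what makes emergent behavior hard to capture with hierarchical functional decomposition.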
References
[1] M. Maier, Architecting Principles for Systems-of-Systems, Proceedings of the 6th Annual INCOSE Symposium, 1996, pp. 567-574.
[2] J.C. Hsu, M. Butterfield, Modeling Emergent Behavior for Systems-of-Systems, 17th Annual INCOSE International Symposium, 2007.
[3] A.K. Cebrowski, J.J. Garstka, Network-Centric Warfare: Its Origin and Future, Naval Institute Proceedings, January 1998.
[4] S. Hornblower, Always Low Prices, PBS Frontline, November 2004.
[5] M. Schrage, Wal-Mart Trumps Moore's Law (In the Weeds), MIT Technology Review, March 2002.
[6] DLA Public Affairs, Vendor Initiated Parts Resupply (VIPR) partners DLA, AMC – Vendors gain increased visibility and responsibility, USTRANSCOM News Service, July 2004.
[7] Wal-Mart Statistics, http://www.statisticbrain.com/wal-mart-company-statistics/, 12 July 2014.
[8] Wal-Mart Logistics, http://corporate.walmart.com/our-story/our-business/logistics, 2014.
[9] Fortune 500, http://www.businessinsider.com/25-corporations-bigger-tan-countries-2011-6?op=1, 2011.
[10] J. Akin, Amazon Everywhere, PCMag.com, September 16, 2003.
[11] J. Foley, Amazon CTO: 'We've Just Scratched The Surface', July 26, 2004.
[12] M.C. MacKenzie, K. Laskey, F. McCabe, P. Brown, R. Metz, OASIS: Reference Model for Service Oriented Architecture, Committee Draft 1.0, 7 February 2006.
[13] J.C. Hsu, S. Raghunathan, R. Curran, The Applicability of Architecture Reference Models, INCOSE International Symposium, Utrecht, Netherlands, 2008.
[14] J.C. Hsu, Network Centric Systems Engineering, course offered at The University of California, Irvine, 2012.
26
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-26
Smart Cloud Manufacturing (Cloud Manufacturing 2.0) – A New Paradigm and Approach of Smart Manufacturing
Bo Hu LI a,b,1, Lin ZHANG a, Xudong CHAI b
Chinese Academy of Engineering
a School of Automatic Science and Electrical Engineering, Beijing University of Aeronautics and Astronautics, Beijing, China
b Second Research Institute of China Aerospace Science & Industry Corp.
Abstract. This presentation is based on the research and application work in Cloud Manufacturing (CMfg) carried out by the authors' team, which is composed of 28 units from Beijing University of Aeronautics and Astronautics, the Second Research Institute of CASIC, China CNR Corporation Limited, the Institute of Manufacturing Engineering of Chongqing University, the DG-HUST Manufacturing Engineering Institute, Beijing ND Tech Corporation Limited, Wuhan University of Technology, and others. Our team proposed the "Cloud Manufacturing" concept in 2009 and began the research and practice of cloud manufacturing version 1.0. Through the practice of recent years, and with the development of related technologies, our team started the research and exploration of "smart cloud manufacturing" (cloud manufacturing version 2.0), which further develops cloud manufacturing version 1.0 in its manufacturing paradigm, technology approach, supporting technologies, applications and other aspects. First, the meaning of Big Manufacturing is given, and the challenges and countermeasures for manufacturing industries in China, as well as the content and development of manufacturing informatization, are introduced. Then the paradigm of smart manufacturing and the characteristics of smart manufacturing systems, from our team's viewpoint, are presented. The definition, concept model, system architecture, technological system, typical technical characteristics, service objects, service types, service content and service characteristics of smart cloud manufacturing are put forward. Moreover, discussions are presented to show that smart cloud manufacturing is a new paradigm of and approach to smart manufacturing, which materializes and extends Cloud Computing in the manufacturing domain. Then the current status of the technologies, applications and industries for CMfg is briefly presented.
Eight key technologies of the technological system for smart cloud manufacturing are briefly discussed: (1) overall technology of the smart cloud manufacturing system, (2) professional technology of smart products, (3) supporting platform technology of the smart cloud manufacturing system, (4) smart cloud design technology, (5) smart cloud product and equipment technology, (6) smart cloud management technology, (7) smart cloud simulation and experimental technology, and (8) smart cloud service technology. The research results on these key technologies achieved by the authors' team are indexed. Some typical CMfg cases which have been successfully implemented in group enterprises and in mid-small enterprise clusters in smart cities are described. Finally, some problems worthy of attention in the further research and implementation of smart cloud manufacturing are presented.
1 Mail: [email protected], URL: http://www.buaa.edu.cn/bhgk/bhldjszdw/lyys/70427.htm
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-27
Breakthrough Innovation in Higher Education Stephen Zhi-Yang LU1 David Packard Chair in Manufacturing Engineering, University of Southern California, USA Founder, the iPodia Alliance (www.ipodialliance.org)
Abstract. Until recently, the higher-education institution was the only enterprise that had not seen fundamental change for many centuries. Over the past decade, however, eLearning technologies have drastically increased the supply of education offerings, while economic recessions have considerably reduced the demand for traditional degree education. The recent MOOC (massive open online course) movement makes many high-quality courses available online to everyone free of charge, and the current economic recession renders well-paid employment unattainable for many college graduates. Now that classroom lectures are free and university degrees are underwater, the higher-education enterprise has finally passed a strategic inflection point (SIP) where nothing short of fundamental change will do: this is a perfect time for breakthrough innovation. It is clear that a 21st century university will have a vastly different shape and form than it does today. This keynote presentation introduces a breakthrough innovation in global education, called iPodia, where "i" stands for inverted, interactive, and international learning. iPodia uses modern technologies to eliminate the distance of peer-to-peer interactions and enrich the learning experiences of all students at the multiple universities of the iPodia Alliance. As of spring 2014, the Alliance has 10 formal members from 4 continents, enabling over 350,000 students to learn together around the clock and throughout the year. While many institutions are now using MOOC technologies to replace physical classrooms, iPodia is developing new pedagogy to reinvent classrooms on campuses. While many universities are globalizing by building classrooms-across-borders, iPodia is demonstrating a new globalization strategy to create classrooms-without-borders.
It exploits global diversity in local classrooms as a learning resource, rather than a hindrance, in order to promote "education diplomacy": students from countries that normally would not talk with each other are now studying and working together in iPodia classrooms. This demonstrates the iPodia vision of "learning together for a better world". The presentation introduces the iPodia pedagogy and the iPodia Alliance to show how a team of elite global universities is working together to innovate borderless interactive learning as a future paradigm of global education.
1 Mail: [email protected], URL: http://wisdom.usc.edu/stephenlu/
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-28
Concurrent Engineering with Internet of Things: an Extreme Learning Approach
Benjamin KOO a,1, Jianzhong CHA b,2, Shuo-Yan CHOU c,3
a Department of Industrial Engineering, Tsinghua University, Beijing, China
b Beijing Jiaotong University, Beijing, China
c National Taiwan University of Science and Technology, Taiwan, China
Abstract. The rapid advancement of the Internet of Things (IoT) is pushing the envelope of how Concurrent Engineering can be practiced. IoT provides pervasive, real-time and location-sensitive data capabilities, and forces engineered products to evolve with new communication and computational features before, during and after they are deployed to users. To cope with this IoT-driven pace of product development, this article presents a Concurrent Engineering approach, called the eXtreme Learning Process (XLP), in which all engineered products and engineers are conceptualised as learning agents in agencies connected by the "Internet of Everything" (IoE). Instead of treating engineered products as concrete objects produced by distinct engineering teams, the XLP-based engineering approach identifies engineering solutions as composable intellectual assets generated by decentralised learning processes. Therefore, any individual or organisation can participate in this incremental learning process and collectively design and produce new products using resources connected to the Internet ecology. To demonstrate the feasibility of the XLP-based concurrent engineering approach, we have conducted a number of XLP-based product development experiments, in workshop form, at several universities. Selected product development case studies are presented and analysed in this article. We also discuss how XLP can be used as an integrated curriculum design method to explore the future of concurrent engineering education.
1 Mail: [email protected], URL: http://www.ie.tsinghua.edu.cn/teacher/eview.php?Tid=28
2 Mail: [email protected], URL: http://mece1.bjtu.edu.cn/otherweb/EnglishVer/Faculty/Lists/1-1zhajianzhong.html
3 Mail: [email protected], URL: http://homepage.ntust.edu.tw/sychou/
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-29
Network-Centric Manufacturing: Making it Happen Ram D. SRIRAM1 Software and Systems Division, Information Technology Laboratory, National Institute of Standards and Technology, Gaithersburg, USA
Abstract. The early part of this millennium has witnessed the emergence of an Internet-based engineering marketplace, where engineers, designers, and manufacturers from small and large companies collaborate through the Internet in various product development and marketing activities. This will be further enhanced by the next-generation manufacturing environment, which will consist of a network of cooperating engineering applications: state-of-the-art multimedia tools and techniques will enable closer collaboration between geographically distributed applications, virtual reality tools will allow visualization and simulation in a synthetic environment, and information exchange standards will facilitate seamless interoperation of heterogeneous applications. In this presentation, I will discuss several technologies that are being developed to make this vision a reality.
1 Mail: [email protected], URL: http://www.nist.gov/itl/ssd/rsriram.cfm
Part II Product Lifecycle Management
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-33
Product Development Model for Application in R&D Projects of the Brazilian Electricity Sector
João Adalberto PEREIRA a,1, Osíris CANCIGLIERI Júnior b,2 and Ana Maria ANTUNES GUIMARÃES a,3
a COPEL - Companhia Paranaense de Energia
b PUCPR - Pontifical Catholic University of Paraná
Abstract. Through a broad review of established Product Development Process (PDP) methodologies, this paper proposes a development model for the design and management of R&D projects within the Research and Development Program of the Brazilian Electricity Sector, regulated by the Brazilian Electricity Regulatory Agency (ANEEL). Such projects, guided by the search for innovative products and technologies in response to the technological challenges of the power sector, fit the definition given in the PMBOK Guide of the Project Management Institute: "a temporary endeavor undertaken to create a product, service or result". This means that, for an R&D project to fulfil the principle of temporariness, its management should pursue pre-established goals, following appropriate processes, technologies and problem-solving teams, until the planned product is achieved. The objective of this work is therefore to demonstrate that, as in industry, PDP methodologies may also be suitable for these R&D projects, ensuring better adaptation of the results to customer needs and strengthening the university-industry relationship in the R&D process of the electricity sector. This research compiles several PDP models already established in industry with a view to creating a development model suited to the R&D projects of the electricity sector. The model emphasizes the constructive interaction between the various stages of the development methodology (Concurrent Engineering concepts) and multidisciplinary teams fully integrated with the innovation process. The proposal shows promise in reducing the typical limitations of segregated and sequential activities observed in many R&D projects of the electric sector.
Keywords. Product development process model, R&D projects, electricity sector
Introduction
ANEEL (Brazilian Electricity Regulatory Agency) is the agency that regulates the Brazilian Electric Sector R&D Program [1], whose purpose is the development and technological qualification of the companies in the sector.
1 Rua José Izidoro Biazetto, 158, 81200-240, Curitiba-PR, Brazil. E-mail: [email protected].
2 Rua Imaculada Conceição, 1155, 80215-901, Curitiba-PR, Brazil. E-mail: [email protected].
3 Rua Emiliano Perneta, 756, 80420-080, Curitiba-PR, Brazil. E-mail: [email protected].
J.A. Pereira et al. / Product Development Model for Application in R&D Projects
Product development was not originally foreseen by ANEEL in its R&D Program; it was introduced only in 2008, with the inclusion of the categories HS (Head Production Series), PL (Pioneer Production Lot) and MI (Market Production Insertion) in its Innovation Chain [2] (Figure 1).
Figure 1. ANEEL Innovation Chain [2].
ANEEL reported that 4,487 R&D projects had been carried out from the beginning of the R&D Program until the implementation of the new resolution in 2008. After that date, however, of the 1,915 new projects registered up to 2009, only 117 involved the new stages of the Innovation Chain [3], and these projects only succeeded as market products when the utilities sought, in the industrial sector, the Product Development Process (PDP) knowledge required to complete the Innovation Chain. For that sector, indeed, multiple models that systematize the PDP can be identified [4,5]. It is also observed that the ANEEL R&D Program has received little academic analysis, which explains the lack of PDP knowledge among most R&D project managers [6]; thus, despite advances in implementing more robust projects, there is still no development model suited to R&D projects with effective potential to reach the market [7]. In this sense, and through a comprehensive PDP approach, this paper proposes a specific development model for the Brazilian Electric Sector R&D Program, whose objective is to reduce the recurring limitations observed in R&D projects and, ultimately, to accelerate the return on investment for the benefit of society, through the quality of the products generated and, consequently, of the services provided by the energy companies.
1. Research Methodology
The strategy of this research was a case study with a qualitative approach. The unit of analysis was the management process for R&D projects of Companhia Paranaense de Energia - COPEL (the Paraná electric power company) under the ANEEL R&D Program. As a technical procedure, a literature review of the ANEEL R&D Program and its criteria was first carried out, followed by an exploratory study of PDP models. From the synergy between them, a development model suited to the context of the ANEEL Innovation Chain was defined. Next, to assess its effectiveness, the proposed model was applied to the preparation of an R&D project that could be compared with its actual counterpart, recently executed by COPEL. It is worth noting that the use of a single unit of analysis does not make this research particular or less comprehensive: the proposed model is generic and can be applied by any utility subject to the ANEEL criteria.
2. Brazilian Electric Sector R&D Program
By the late 1990s, increasing competition in the face of the growing demand of the energy market showed that actions to effect technological development in the Brazilian electricity sector should be taken [8]. In this context, ANEEL began requiring electricity companies to invest in annual programs of R&D and Energy Efficiency (EE) [9], thus creating the Research and Development Program of the Brazilian Electricity Sector (ANEEL R&D Program). Over the years, with growing representativeness, the projects have evolved not only in quantity but also in complexity. Although there is still much debate about the efficiency of the ANEEL R&D Program [10,11], it has created unprecedented collaborative relationships among energy companies, academia and the industrial sector [9]. On the other hand, companies still show a lack of maturity in R&D development activities, and much work remains for the program to be an effective success [12]. The conditions for submission, implementation, evaluation and monitoring of R&D projects are set by ANEEL in the Manual for the Research and Development Program for the Brazilian Electric Sector [13], in which activities related to the execution of R&D projects "are those of a creative or entrepreneurial nature with scientific-technical foundations, aimed at generating knowledge or at its innovative application in the investigation of new applications". The global references defining Innovation and R&D are the Oslo Manual [14] and the Frascati Manual [15]. For the Brazilian Electric Sector, ANEEL's own R&D Manual [2] defines these activities [16] and, differently from the Oslo Manual, emphasizes that the socio-economic factor arising from the innovative process should be considered part of the project's results.
On the other hand, while the Frascati Manual groups R&D activities into three categories (Basic Research, Applied Research and Experimental Development), the ANEEL R&D Manual classifies them into six categories according to the Innovation Chain (Figure 1), making clear the intention of stimulating not only the generation of technological innovations but also, building on them, the development of practical solutions for the everyday operation of energy companies.
3. R&D Projects
According to the ANEEL R&D Manual, projects are grouped into six categories: Directed Basic Research (BR), Applied Research (AR) and Experimental Development (ED), which require a high degree of technological innovation, while improvements with a view to industrial production are the focus of Head Production Series (HS), Pioneer Production Lot (PL) and Market Product Insertion (MI) [17,18]. The merit of an R&D project is defined by ANEEL through four criteria: Originality, Applicability, Relevance and Cost Reasonableness. Of these, Originality is an exclusionary factor for BR, AR and ED projects, assessed according to the requirements of Challenges, Advance and Products. Applicability considers the criteria of Application Context, Scope and Results Confirmation. Relevance is analysed from the viewpoint of professional and technological training as well as the socio-environmental and economic impacts of the project. For Cost Reasonableness, the economic feasibility of the investment planned for the project must be demonstrated.
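For illustration only, the four merit criteria and their sub-requirements described above can be captured in a small structure. The criterion names come from the text; the boolean pass/fail logic is a hypothetical sketch (the actual ANEEL evaluation is richer than a simple screen):

```python
# Hypothetical sketch of the four ANEEL merit criteria described in the text.
# Criterion and sub-requirement names follow the paragraph above; the
# screening logic is illustrative only, not ANEEL's actual procedure.

MERIT_CRITERIA = {
    "Originality": ["Challenges", "Advance", "Products"],
    "Applicability": ["Application Context", "Scope", "Results Confirmation"],
    "Relevance": ["Professional and Technological Training",
                  "Socio-Environmental Impact", "Economic Impact"],
    "Cost Reasonableness": ["Economic Feasibility"],
}

EXCLUSIONARY = {"Originality"}           # exclusionary factor for BR, AR and ED
RESEARCH_CATEGORIES = {"BR", "AR", "ED"}  # the three research-oriented categories


def evaluate(category: str, scores: dict) -> bool:
    """Return False only when an exclusionary criterion fails for a
    research-oriented project; other criteria contribute to merit but
    do not reject by themselves in this simplified screen."""
    for criterion in MERIT_CRITERIA:
        passed = scores.get(criterion, False)
        if not passed and criterion in EXCLUSIONARY and category in RESEARCH_CATEGORIES:
            return False
    return True
```

Under this sketch, an ED project failing Originality is rejected outright, while an HS project is not, mirroring the exclusionary role the text assigns to Originality.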
The formalization of R&D project proposals is done through documents that, in general, include a project description, a disbursement spreadsheet and a pattern file to be sent to ANEEL [19]. The first document presents the proposal for approval. The disbursement spreadsheet details the R&D project expenditures, supporting the management team during project execution. Finally, a file in XML (eXtensible Markup Language) format formalizes the proposal with the regulatory agency [19].
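As a rough illustration of that third document, the snippet below serializes a minimal proposal to XML with Python's standard library. ANEEL's actual schema is not given in the text, so every element name here is a hypothetical placeholder:

```python
# Illustrative sketch only: element names (proposal, title, category,
# disbursements, total) are hypothetical placeholders, not ANEEL's schema.
import xml.etree.ElementTree as ET


def build_proposal(title: str, category: str, budget: float) -> bytes:
    """Serialize a minimal R&D proposal to XML, mirroring the three
    documents described in the text (description, disbursements, XML file)."""
    root = ET.Element("proposal")
    ET.SubElement(root, "title").text = title
    ET.SubElement(root, "category").text = category  # e.g. "ED" (Experimental Development)
    disb = ET.SubElement(root, "disbursements")
    ET.SubElement(disb, "total", currency="BRL").text = f"{budget:.2f}"
    return ET.tostring(root, encoding="utf-8", xml_declaration=True)


xml_bytes = build_proposal("Electric field sensing helmet accessory", "ED", 250000.0)
```

The resulting bytes can be parsed back with `ET.fromstring` for validation before submission.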
4. Product Development Process
There are several definitions of the PDP in the literature. For Pahl & Beitz [20], it "is a multifaceted and interdisciplinary activity that has as a result ... the final product documentation". For Smith [21], it "is the process that converts customer needs and requirements into information in order that a product or technical system can be produced". Rozenfeld et al. [22] state, more comprehensively, that the PDP is a "business process" in which "to develop products consists of a set of activities through which it is sought, from the needs of the market and the technological possibilities and constraints, and considering the company's competitive and product strategies, to arrive at the design specifications of a product and its production process ... it involves the activities of following the product after the launch ...". The idea of using product development models to structure R&D projects rests on the principle that R&D projects should generate products; using PDP concepts to build the new model corroborates the statement that "R&D and project are so often mingled in contemporary technological language that sometimes it becomes difficult to differentiate them" [23]. As noted, the absence of product development models in R&D projects of the electric sector has generated a need for specific knowledge [24]. In this sense, based on comprehensive literature research, initially drawing on references such as [4,5,25,26], an appropriate development model for the ANEEL R&D Program was sought.
5. Model for R&D
Aiming to cover the entire range of projects in the Innovation Chain, the preparation of the Integrated Product Development Model Oriented to R&D Projects of the Brazilian Electric Sector (MOR&D) considered, besides the guidelines of the R&D Program [13], recurrent product development concepts such as Concurrent Engineering [27,28,29], Stage-Gate [30], Integrated Product Development (IPD) [31], the V-Model [32] and Product-Based Business [33,34], among others, together with their supporting PDP tools [18,22,28,29,35]. In all, 38 different PDP models were compiled, resulting in a structure with 14 stages, 6 phases, 3 macro-phases and 3 management activities [36], as illustrated in Figure 2.
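The 3-macro-phase / 6-phase / 14-stage hierarchy can be sketched as a small data structure. Phase and stage names below are taken from sections 5.1-5.3 where stated; grouping the Post-Development stages into two phases (to reach the stated count of 6) is an assumption of this sketch, and Refinement of the Design is treated as a cross-cutting activity rather than a stage:

```python
# Sketch of the MOR&D hierarchy (3 macro-phases, 6 phases, 14 stages).
# Names follow sections 5.1-5.3 where stated; the split of Post-Development
# into "Release" and "Discontinuation" phases is an assumption.

MORD = {
    "Pre-Development": {
        "Initialization": ["Demand Definition"],
        "Planning": ["Scope Definition", "Project Planning", "Formatted Proposal"],
    },
    "Development": {
        "Design": ["Study of Principles", "Conceptual Project",
                   "Preliminary Design", "Detailed Design"],
        "Implementation": ["Manufacturing Process Design",
                           "Manufacturing and Product Finishing",
                           "Marketing Planning"],
    },
    "Post-Development": {
        "Release": ["Product Release", "Post-Release Review"],
        "Discontinuation": ["Discontinue Product"],
    },
}

n_macro_phases = len(MORD)
n_phases = sum(len(phases) for phases in MORD.values())
n_stages = sum(len(stages) for phases in MORD.values() for stages in phases.values())
```

Counting the leaves of this structure reproduces the totals stated for Figure 2: 3 macro-phases, 6 phases and 14 stages.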
Figure 2. Integrated Product Development Model Oriented for R&D Projects of the Brazilian Electric Sector - MOR&D.
5.1. Pre-Development
This macro-phase applies to all types of Innovation Chain projects, since it is here that the company/product strategic planning is aligned with the R&D project planning. It begins with the Initialization phase, in which a technological need of strategic interest to the company is defined as a Demand Definition for an R&D project. After the demand is approved through the Strategic Directives Tests, the Planning phase of the R&D proposal starts: first with the Scope Definition stage, in the context of the R&D Program criteria and of acceptance testing by customers, followed by Project Planning, which, if in accordance with the R&D Program criteria, allows the Formatted Proposal to be submitted for enrollment and authorization by ANEEL. Figure 3 illustrates these steps and their correlation with the stages of the project life cycle; Figure 4 shows the dynamics of the activities involved.
Figure 3. Pre-development Macro-phase.
Figure 4. Activities involved in the Pre-development Macro-phase.
5.2. Development
The Development macro-phase seeks to solve the research problem, synthesizing solutions, configuring and standardizing the product, and recording all information in the technical documentation. It begins with the Design phase (Figure 5), in which the Study of Principles within the scope of the desired product feeds the Conceptual Project, the Preliminary Design and the Detailed Design, carried out simultaneously with the Refinement of the Design. The activities inherent to this phase are shown in Figure 6. Directed Basic Research, Applied Research and Experimental Development projects are served by these steps, allowing robust prototypes to be achieved that are ready for more demanding tests and directed to the production process.
Figure 5. First part of the Development macro-phase (Design phase).
Figure 6. Activities assigned to the Design phase.
The model, still within the Development macro-phase, continues with the Implementation phase (Figure 7), in which the new technology is adapted to the production processes. This phase comprises activities related to Manufacturing Process Design and Manufacturing and Product Finishing, which define the final characteristics of the product, as well as Marketing Planning for the final product, feeding back to the Refinement of the Design stage whenever some technical feature complicates the process or increases the final cost of the product (Figure 8). Projects such as Head Production Series, Pioneer Production Lot and Market Product Insertion (Figure 1) are eligible for this stage of the development process.
Figure 7. Second part of the Development macro-phase (Implementation phase).
Figure 8. Activities assigned to the Implementation phase.
5.3. Post-Development
This macro-phase is not itself covered by the Innovation Chain project categories; however, it must be part of the planning of R&D projects that aim at launching new products. As illustrated in Figure 9, it includes activities related to production, monitoring, maintenance, release and withdrawal of the product from the market. Post-Development begins with the Product Release step, followed by the Post-Release Review step, in which product and manufacturing process updates are performed in support of customer service. These activities must occur in parallel with the Discontinue Product step, which, in practice, starts during the production process. The company should always be prepared to execute the end-of-life plan, since the useful life of a product depends on the satisfaction of its customers and ends when it no longer presents economic or strategic advantages.
Figure 9. Post-Development macro-phase.
Figure 10. Activities assigned to the Post-Development macro-phase.
6. Case Study
In this case study, the MOR&D was applied to the restructuring of a recently concluded R&D project, in order to estimate its efficiency through a correlation of results. Characterized as ED in the Innovation Chain, this reference project developed electric field sensing equipment to serve as an accessory for a safety helmet, alerting the electrician to excessive proximity to the energized electrical network [37,38]. Table 1 shows the distribution of the project execution steps: in the left column, the sequence of stages according to the MOR&D (Figure 6); in the right column, the steps of the R&D project as it was actually planned [37]. Table 1. Project steps comparison: MOR&D vs. real project [37].
Conceptual planning steps (MOR&D, Figure 6):
1. Development planning process
2. Processing of input data
3. Conceptual equipment proposition
4. Electronic circuits design
5. Embedded software design
6. Study of materials for the enclosure
7. Communication systems design
8. Systems engineering design
9. Systems integration
10. Functional laboratory testing
11. Electromagnetic compatibility testing
12. Standards-based tests
13. Project aiming at industrial design
14. Refinement of the design
15. Assembly of reference prototype
16. Delivery of technical reports
17. Project closing
Real project steps (project documentation [39]):
1. Acquisition/evaluation of sensors
2. Establishing the safety distance
3. Electronic device development
4. Laboratory and field evaluation
5. Standardization of the new device
6. Transfer of technology
7. Conclusion of the project
Table 2 lists the expertise required for the research team: in the left column, according to the schedule indicated in the MOR&D (Figure 4); in the right column, the expertise actually allocated when the proposal was defined. Table 2. Expertise comparison: MOR&D vs. project conducted [37].
Conceptual expertise (MOR&D): physics expert; electronics engineer; electrotechnical engineer; mechanics engineer; electromagnetic compatibility expert; software designer; industrial designer; safety engineering expert.
Project conducted expertise [39]: physics expert; electronics engineer; electrotechnical engineer; electromagnetic compatibility expert; safety engineering expert.
Table 3 shows the expected and achieved results for the two planning lines discussed. In the first column, based on the MOR&D, the final object would be a robust prototype with features of its own design, suitable for direct application in field tests, which coincides with the target predicted for the conducted project [37]. The second column shows the actual prototype obtained [38], a direct result of the project conducted. Table 3. Final objective comparison: MOR&D vs. project conducted [37,38].
(Table 3 columns: conceptual prototype; project conducted prototype [39]. Prototype photographs not reproduced here.)
7. Considerations
The research allowed important conclusions about the application of the MOR&D, contributing to its improvement and showing the importance of proper planning in the elaboration of R&D projects that aim at marketable products. From the comparison between the two projects, it can be concluded that the steps projected with the MOR&D are more comprehensive than those of the executed project. For example, steps aimed at adapting the prototype to design criteria (ergonomic and industrial) were foreseen, requiring industrial design specialists. This provision, absent from the real project, was crucial for the completion of a robust prototype, as it should be (Table 3). It is also observed that, although the team that carried out the project included materials engineering experts, it did not propose a step for the design of the enclosure that would house the equipment. In practice, this forced an emergency realignment during execution, delaying project completion and adding costs beyond what was predicted. On the other hand, the practical application of the MOR&D showed that it would be useful to add a "Field Tests" step to the flowchart of Figure 6, before finishing the final prototype, with a feedback branch to the "Input and Retrofit Information" stage in case corrections or adjustments are needed.
8. Conclusion
The initial literature review on the ANEEL R&D Program revealed the lack of a model that systematically configures the specific activities for conducting R&D projects in the Brazilian electric sector, which characterizes the original contribution of this work. Through the analysis of established product development process models, a comprehensive model was composed to serve as a framework for elaborating R&D projects within the ANEEL R&D Program. The proposal showed promise, since it allows the reduction of the segregated and sequential limitations typical of the R&D projects carried out so far, leading to an efficient development process that enables the realization of superior-quality products at competitive times and prices. However, a single practical application of the proposed model, as demonstrated here, is not enough to make it fully applicable. Accordingly, other ANEEL R&D projects were and are being structured [18,35,36,39] with the MOR&D as background, and their results and conclusions are contributing to the refinement of the model.
9. Acknowledgments The authors are thankful for the financial and technical support provided by the Companhia Paranaense de Energia (COPEL), Pontifical Catholic University of Paraná (PUCPR), Institute of Technology for Development (LACTEC) and Brazilian Electricity Regulatory Agency (ANEEL).
References
[1] Lei 9.991 de 24 de julho de 2000. Diário Oficial da União, Brasília, DF, Brazil, 2000.
[2] Manual do programa de pesquisa e desenvolvimento tecnológico do setor de energia elétrica. Agência Nacional de Energia Elétrica (ANEEL), Brasília, DF, Brazil, 2008.
[3] Setor elétrico no caminho da inovação. Revista Pesquisa e Desenvolvimento da ANEEL, n. 3, Gráfica Renascer, Brasília, DF, Brazil, 2009.
[4] C.R. Amigo & H. Rozenfeld, Modelos de referência para o processo de desenvolvimento de produtos: descrição e análise comparativa. Proceedings of XVIII SIMPEP - Simpósio de Engenharia de Produção, Bauru, SP, Brazil, 2011.
[5] E. Romeiro Filho, C.V. Ferreira, P.A.C. Miguel, R.P. Gouvinhas, R.M. Naveiro, Projeto do Produto, 1st edn., Elsevier, Rio de Janeiro, RJ, Brazil, 2010.
[6] F.M. Pompermayer, F. De Negri, J.M.P. Paula, L.R. Cavalcante, Rede de pesquisa formada pelo programa de P&D regulado pela ANEEL: abrangência e características. In: Inovação tecnológica no setor elétrico brasileiro: uma avaliação do programa de P&D regulado pela ANEEL, 1st edn., IPEA, Brasília, DF, Brazil, 2011.
[7] Avanços tecnológicos no setor elétrico. Revista Pesquisa e Desenvolvimento da ANEEL, n. 4, Gráfica Editora Olivieri Ltda., Brasília, DF, Brazil, 2011.
[8] F.L.A. Souza, Pesquisa e desenvolvimento no setor elétrico: a caminho da inovação, 1st edn., Eletropaulo Metropolitana de Eletricidade, São Paulo, SP, Brazil, 2008.
[9] J. Chapieski, Proposta de método para seleção de projetos de P&D em empresas distribuidoras de energia elétrica. Dissertação de Mestrado, LACTEC/PRODETEC/IEP, Curitiba, PR, Brazil, 2007.
[10] F.L.A. Souza & R. Nicolsky, Uma alternativa para a consolidação e institucionalização do P&D. Proceedings of XVIII Seminário Nacional de Produção e Transmissão de Energia Elétrica, Curitiba, PR, Brazil, 2005.
[11] C.N.M. Wandelli, L.P.P. Giffoni, T.C.M. Mendes, Avaliação dos resultados obtidos na recente experiência de Furnas em P&D. Proceedings of XVIII Seminário Nacional de Produção e Transmissão de Energia Elétrica, Curitiba, PR, Brazil, 2005.
[12] J.A. Fernandino & J.L. Oliveira, Arquiteturas organizacionais para a área de P&D em empresas do setor elétrico brasileiro. RAC, v. 14, n. 6, art. 5, pp. 1073-1093, Curitiba, PR, Brazil, 2010.
[13] Manual do programa de pesquisa e desenvolvimento tecnológico do setor de energia elétrica. ANEEL, available at: http://www.aneel.gov.br, Brasília, DF, Brazil, 2012.
[14] Oslo Manual: Guidelines for Collecting and Interpreting Innovation Data. OECD (Organisation for Economic Co-operation and Development), Rio de Janeiro, RJ, Brazil, 2005.
[15] Frascati Manual: Proposed Standard Practice for Surveys on Research and Experimental Development. OECD (Organisation for Economic Co-operation and Development), Paris, 2002.
[16] A.F. Cabello & F.M. Pompermayer, Impactos qualitativos do programa de P&D regulado pela ANEEL. In: Inovação tecnológica no setor elétrico brasileiro: uma avaliação do programa de P&D regulado pela ANEEL, 1st edn., IPEA, Brasília, DF, Brazil, 2011.
J.A. Pereira et al. / Product Development Model for Application in R&D Projects
45
[17] N. Neves, Critérios de avaliação e seleção de projetos para o programa de P&D da ANEEL. Dissertação de mestrado. Programa de Pós-graduação em Tecnologia da Universidade Tecnológica Federal do Paraná (UTFPR), Curitiba, Paraná, Brazil, 2011. [18] J.A. Pereira, O. Canciglieri Júnior, Multidisciplinary systems concepts applied to R&D projects promoted by Brazilian Electricity Regulatory Agency (ANEEL). In J. Stjepandić et al. (eds.), Concurrent Engineering Approaches for Sustainable Product Development in a Multi-Disciplinary Environment, Springer-Verlag, London, 2013, pp. 39-50. [19] Manual de elaboração de propostas de projetos de P&D. ANEEL. Available at: http://www.aneel.gov.br. Brasília, DF, Brazil, 2008. [20] G. Pahl, W. Beitz, Engineering design: A systematic approach. 2nd ed. Springer Press. Darmstadt, Germany, 1988. [21] D.W. Smith, Introducing EDG students to the design process. Proceedings of the Annual Midyear Meeting of the Engineering Design Graphics Division of the American Society for Engineering Education, Berkeley, CA, USA, 2002. [22] H. Rozenfeld, F.A. Forcellini, D.C. Amaral, et al., Gestão de desenvolvimento de produtos: Uma referência para a melhoria do processo. 1st ed. Saraiva Press, São Paulo, SP, Brazil, 2006. [23] M. Asimow M, Introdução ao projeto de engenharia. Mestre Jou. São Paulo, SP, Brazil, 1968. [24] J.A. Pereira & O. Canciglieri Júnior, Conceitos de sistemas multidisciplinares aplicados ao desenvolvimento de projetos de P&D fomentados pela ANEEL. Proceedings of XVIII SIMPEP, Bauru, SP, Brazil, 2011. [25] V.G.R. El Marghani, Modelo de processo de design. Blucher Acadêmico, São Paulo, SP, Brazil, 2011. [26] R.R.B. Silva, Proposta de estruturação do processo de desenvolvimento de produtos para empresas prestadoras de serviço de telecomunicações. Dissertação de mestrado. Programa de Pósgraduação em Engenharia de Produção da Universidade Federal do Paraná (UFPR), Curitiba, PR, Brazil, 2013. [27] B. 
Prasad, Concurrent engineering wheels. RASSP Digest, vol. 3, 1st edn. Troy, MI, 1996. [28] F.N. Casarotto, J.S. Favero, J.E.E. Castro, Gerência de projetos/engenharia simultânea. 1st edn. Atlas, São Paulo, SP, Brazil, 1999. [29] J.R. Hartley, Engenharia simultânea. 1st ed. Bookman Press. São Paulo-SP, Brazil, 1997. [30] R.G. Cooper, Winning at new products: Accelerating the process from idea to launch. 2nd edn. Wesley, USA, 1993. [31] M.M. Andreasen, L. Hein, Integrated product development. Springer Verlag, Berlin, Germany, 1987. [32] V Model. The Test Management Guide. United Kingdom, 2011. [33] N.F.M. Roozenburg & J. Eekels, Product design: Fundamentals and Methods. John Wiley & Sons, USA, 1995. [34] C.M. Crawford & C.A. Benedetto. New product management. MacGraw Hill, Boston, USA, 2000. [35] J.A. Pereira, O. Canciglieri Júnior, J.P. Lima, S.B. Silva, QFD application on developing R&D project proposal for the Brazilian electricity sector: a case study - System assets monitoring and control for power concessionaires. In C. Bil et al. (eds.) 20th ISPE International Conference on Concurrent Engineering, IOS Press, Amsterdam, 2013, pp. 293 - 302. [36] J.A. Pereira, O. Canciglieri Júnior, Product development model oriented for the R&D projects of the brazilian electricity sector. Applied Mechanics and Materials, vol. 518 (2014) pp 366-373, Trans Tech Publications, Switzerland, 2014. [37] A.E. Lazzaretti & P.M. Souza, P&D PD 2866-031/2006: Sensor de proximidade de rede de distribuição energizada como acessório de capacete de segurança. Final Project Report, LACTEC/COPEL, Curitiba, Paraná, Brazil, 2009. [38] A.E. Lazzaretti, M.A. Ravaglio, G.P. Resende, S. Ribeiro, R.J. Bachega, E.L. Kowalski, V. Swinka Filho, P.M. Souza, A.O. Borges, J.P. Lima, M.G.D. Voos, Simulação e medição de campos elétricos em linhas de distribuição para desenvolvimento de acessório de capacete de segurança. 
Proceedings of Congreso Internacional sobre Trabajos con Tension y Seguridad em Transmision y Distribucion de Energia Electrica (IV CITTES-CIER), Buenos Aires, Argentina, 2009. [39] J.A. Pereira, O. Canciglieri Júnior, A.E. Lazzaretti, P.M. Souza, Application of integrated product development model oriented to R&D projects of the Brazilian electricity sector. Proceedings of 5th ICMSE International Conference on Manufacturing Science and Engineering, Shanghai, China, 2014.
46
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-46
A Proposal on a Remote Recycling System for Small-sized E-waste

Nozomu MISHIMA a, Kenta TORIHARA a, Kiyoshi HIROSE a, Mitsutaka MATSUMOTO b

a Graduate School of Engineering and Resource Science, Akita University
b National Institute of Advanced Industrial Science and Technology
Abstract. Material recycling of small-sized e-waste such as mobile phones is an emerging issue in Japan. In practical recycling processes, however, the recycling cost, which depends heavily on the labor cost of manual disassembly, is one of the largest problems. E-waste often flows overseas to avoid the high labor costs of developed countries. To reduce the recycling cost and to secure the critical metals contained in e-waste, remote recycling can be a good option. The authors proposed the concept of remote recycling in 2008. Recently, prompted by legislation that expands the scope of recycling to small-sized EEE, we recognized that the concept is well suited to small-sized e-waste. Since it is difficult to devote much labor to small products, physical separation methods (magnetic, pneumatic, electrostatic, etc.) are being considered. However, those methods require large facilities and case-by-case adjustment of processes. In contrast, the authors believe human vision is the most flexible and reliable means of separating valuable parts from non-valuable parts. This paper proposes a basic remote recycling system for small-sized e-waste based on tele-operation technologies and carries out feasibility studies of the system. It then discusses the concurrent design of products, disassembly processes, tele-operation methods, social infrastructure, and recycling processes needed to implement the system. Finally, it concludes that remote recycling is a promising concept that can achieve low recycling cost, high-quality and flexible separation of materials, and domestic preservation of critical metals.

Keywords. e-waste, recycling, tele-operation, physical separation
Introduction

Electrical and electronic equipment (EEE) is ubiquitous today and has a large impact on the environment. Recently in Japan, new recycling legislation covering used small and medium-sized EEE has been discussed [1]. Since small-sized e-waste is said to contain a considerable amount of precious metals and rare earths, the collection and recycling of such e-waste is becoming a national concern in Japan. Ahead of the announcement of the legislation, social experiments to collect these products were carried out in some regions of Japan. The authors have discussed methods to increase the collection rate in existing research [2]. It was also suggested that the cost-profit issue [3, 4] is key to implementing a recycling system for small-sized e-waste in practice. Under the new legislation, a recycling fee cannot be collected from consumers, unlike for large-sized e-waste such as air conditioners and refrigerators. Thus, the new social system for recycling should be economically feasible without any
subsidy or recycling fee from consumers. The purpose of this paper is to propose and discuss a new concept for recycling operations that can drastically reduce the cost of recycling small-sized e-waste.
1. Problems in Recycling of Small-sized E-waste

As mentioned in the previous section, small-sized e-waste contains a considerable amount of rare earths and critical metals. The amounts of such materials sometimes reach a few percent of total consumption in Japan, as shown in Table 1. The table shows that for palladium, tantalum, gold, and silver, for example, material recovery from small-sized e-waste is rather important in a circular economy. Among small-sized e-waste, used mobile phones occupy an important position in terms of recoverable materials. However, collecting used products that contain personal information, such as mobile phones, is sometimes difficult. Table 2 shows the trend in the collected amount of used mobile phones in Japan [5, 6]. Since the annual amount of discarded mobile phones does not vary much and is around 50 million units per year, the collection rate has decreased from about 20% to 10% over the past ten years. In addition, the cost-profit ratio is always a problem in material recycling. Cost-profit analyses of medium-sized e-waste [7] and small-sized e-waste [8] were carried out in earlier papers; some of the results are shown in Table 3. For large and medium-sized e-waste, the urgent need to prolong the life of landfill sites was one of the strongest motivations for recycling, but no such pressure exists for small-sized e-waste. Since the new recycling legislation collects no recycling fee from consumers, the social system to recycle small-sized e-waste must operate independently with an affordable cost-profit balance. Mobile phones are small and easy to handle compared to large-sized e-waste, so the labor cost is relatively small; even so, it exceeds the profit that can be recovered from the used product. The labor cost of recycling small-sized e-waste (mobile phones) must therefore be drastically decreased.
There is still debate about the recycling process for used mobile phones. Some recyclers implement manual disassembly, while others focus on automatic separation after pulverization. Although manual disassembly is effective for high-quality or low-cost material recycling, manual disassembly of the printed circuit board (Figure 1 [9]), which contains most of the valuable materials, is one of the most time-consuming processes in recycling used mobile phones and all other small-sized e-waste. Therefore, a countermeasure to reduce the time and cost of manual disassembly is strongly needed to establish an effective social system for small-sized e-waste recycling.
Table 1. Potential coverage of annual consumption of certain materials by small-sized e-waste recycling

Element                        Pd    Ta    W     Nd    Dy    La    Au    Ag    Cu    Zn    Pb
Coverage of annual usage (%)   2.42  4.37  0.08  0.16  0.11  0.08  2.91  2.30  0.23  0.02  0.01
Table 2. Trend of the collected amount of used mobile phones

Year                               2003   2006   2009   2012
Collected amount (million units)   11.7   6.2    6.9    6.2
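Combining the collected amounts in Table 2 with the paper's approximate figure of 50 million handsets discarded per year reproduces the collection-rate decline cited in the text (the 50-million figure is the text's stated approximation, not an exact statistic):

```python
# Collection rate of used mobile phones in Japan, from Table 2 and the
# text's estimate of roughly 50 million discarded units per year.
DISCARDED_PER_YEAR = 50e6  # units, approximate figure from the text
collected = {2003: 11.7e6, 2006: 6.2e6, 2009: 6.9e6, 2012: 6.2e6}

rates = {year: amount / DISCARDED_PER_YEAR for year, amount in collected.items()}
for year, rate in rates.items():
    print(f"{year}: {rate:.0%}")  # falls from roughly 20% to roughly 10%
```

The computed rates (about 23% in 2003 down to about 12% in 2012) match the "from about 20% to 10%" trend described in the text.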
Table 3. Cost and profit estimation of e-waste

Product category   Average material price (JPY/unit)   Total cost for recycling (JPY/unit)   Average labor cost (JPY/unit)
PC                 494                                 2,110                                 950
Mobile phones      112                                 Not estimated                         145
Figure 1. Manual disassembly of used mobile phones [9]
2. Concept Proposal of Remote Recycling

As mentioned in the previous section, reducing the disassembly cost is one of the keys to improving the cost-profit balance of mobile phone recycling. Simply moving manual disassembly to locations where labor is relatively inexpensive would reduce the recycling cost. However, exporting used products that contain considerable critical metals and rare earths is not welcome from the standpoint of Japan's resource-securing policy, and the outflow of "waste" is restricted by the Basel Convention. Thus, in our former paper [7], we proposed a remote recycling system utilizing tele-operation technologies, which we named tele-inverse manufacturing. The feature of the model is that the recycling operations are carried out via tele-operation. Figure 2 illustrates the model. Suppose that e-waste is located and processed at location B. In tele-inverse manufacturing, the operators do not stay at that location; they stay at a different location and carry out the recycling processes by tele-operation. The system image depends largely on the degree of automation. In one scenario the operations are highly automated; location A is then like a control center, from which the operators remotely control the automated plant. We explained that tele-inverse manufacturing can be a concrete scenario for future recycling processes and assessed the feasibility of the model. Table 4 shows some results of the economic feasibility analysis. Cost
estimation of recycling was carried out for the cases of Japan and China. The last column, "tele-operation," corresponds to the case in which used products remain in Japan and are processed by operators in China. Although this estimation is simplified, it suggests that the total recycling cost can be greatly reduced when labor is an important cost factor.
Figure 2. Concept of tele-inverse manufacturing

Table 4. Comparison of recycling cost in Japan and China for a PC (JPY/unit)

Cost item        Japan   China   Tele-operation
Transportation   250     250     250
Labor            950     285     285
Electricity      130     14      130
Waste disposal   123     41      123
Other expense    107     20      107
Repair           300     60      300
Depreciation     250     83      250
Total            2,110   753     1,445
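As a sanity check on Table 4, the per-scenario totals can be recomputed from the itemized costs. In the tele-operation scenario only the labor line changes relative to the Japan column, since the equipment stays in Japan while the operators work from China:

```python
# Itemized PC recycling costs (JPY/unit), transcribed from Table 4.
costs = {
    "Japan":          {"transport": 250, "labor": 950, "electricity": 130,
                       "waste": 123, "other": 107, "repair": 300, "depreciation": 250},
    "China":          {"transport": 250, "labor": 285, "electricity": 14,
                       "waste": 41, "other": 20, "repair": 60, "depreciation": 83},
    "Tele-operation": {"transport": 250, "labor": 285, "electricity": 130,
                       "waste": 123, "other": 107, "repair": 300, "depreciation": 250},
}
# Sum each column; the results match the "Total" row of Table 4.
totals = {scenario: sum(items.values()) for scenario, items in costs.items()}
print(totals)  # → {'Japan': 2110, 'China': 753, 'Tele-operation': 1445}
```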
3. Application of Remote Recycling to Small-sized E-waste

3.1. Cost-profit estimation of remote recycling

By analogy with remote recycling of medium-sized e-waste, it is estimated that the labor cost can be reduced to 30% by carrying out manual disassembly in countries where labor is relatively inexpensive (the Japan/China labor ratio in Table 4). Applying this ratio to the labor cost for mobile phones in Table 3 (145 JPY), the labor cost of manual disassembly can be reduced to about 44 JPY. For mobile phone recycling, another cost estimation has been published by a governmental agency [10]. It covers the transportation cost and the labor cost at the individual stores where used mobile phones are accepted and treated properly (erasing personal information, transferring data to the new phone, and some paperwork), totaling about 72 JPY per unit. Thus, the reduced labor cost achieved by remote recycling (44 JPY) plus the other recycling costs (72 JPY) is almost the same as the average material price (112 JPY). This rough estimation suggests that mobile phone recycling can become profitable by applying remote recycling.
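The back-of-the-envelope estimate above can be reproduced with a few lines of arithmetic; the 30% ratio is taken from the Japan/China labor costs in Table 4, and the other figures are from Table 3 and the agency estimate [10]:

```python
# Rough cost-profit check for remote recycling of mobile phones (JPY/unit),
# using only the figures quoted in the text.
labor_japan_pc = 950   # manual disassembly labor in Japan (Table 4)
labor_china_pc = 285   # same operation from China (Table 4)
labor_ratio = labor_china_pc / labor_japan_pc   # ~0.30

labor_phone = 145      # mobile phone disassembly labor in Japan (Table 3)
remote_labor = labor_phone * labor_ratio        # ~44 JPY

other_costs = 72       # store handling + transportation, agency estimate [10]
material_price = 112   # average recoverable material value (Table 3)

total_cost = remote_labor + other_costs
print(f"remote labor ~{remote_labor:.0f} JPY, total ~{total_cost:.0f} JPY "
      f"vs. material value {material_price} JPY")
```

The total (about 116 JPY) is within a few yen of the 112 JPY material value, which is the "almost the same" balance the text describes.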
3.2. System proposal: recycling site

Remote recycling is based on tele-operation technologies. The term "tele-operation" usually brings to mind an operation using haptic devices (Figure 3). However, we propose another type of tele-operation. Although it would be difficult to replace all manual operations, replacing only the separation of roughly crushed particles is feasible. This paper focuses on remote separation realized by simple manipulations using visual information (Figure 4). If it can be implemented at affordable cost, separating the particles originating from printed circuit boards is an efficient way to recover valuable metals from used mobile phones. To separate particles by physical processes, magnetic separation, electrostatic separation, etc. are often applied [12]. In designing an industrially feasible recycling process, such separation methods should be combined with remote separation using visual information. Figure 5 shows a schematic block diagram of the conventional recycling process. Of course, the figure shows only a typical, simplified example; many other combinations and separation methods exist, such as specific-gravity separation. In contrast, Figure 6 shows the process proposed in this paper.
Figure 3. Image of master-slave type tele-operation [11]
Figure 4. Schematic image of remote separation using visual information
Figure 5. Basic recycling process by physical separation
Figure 6. Proposed recycling process applying tele-operation
3.3. System proposal: tele-operation site

Along with the recycling sites where used products are processed, the operation sites are a key element of the remote recycling system. In the operation style mentioned above, which is analogous to a consumer consultation center, operators are gathered in a facility, a so-called "call center" (Figure 7), to take consumers' phone calls. In this style, no very large or complicated facility is needed: each operator watches a monitor and controls a recycling line. In practice, when sorting crushed e-waste into valuable and non-valuable particles, manual sorting (so-called hand-picking) is sometimes observed. This fact suggests that sorting valuable materials using only visual information is possible, and thus that sorting via tele-operation is feasible.
Figure 7. Typical call center in the Philippines [13]
3.4. System proposal: distributed operation

As mentioned above, a basic remote recycling system can be assembled from current technologies. Moreover, recent progress in information technology and the spread of PCs and smartphones enable a further interesting system. In today's network society, a huge amount of labor power exists behind the internet. For example, the "Search for Extraterrestrial Intelligence" (SETI) project has a subproject called "SETI@home" [14]: any internet user can participate by running a free program that downloads and analyzes radio telescope data in search of extraterrestrial intelligence. The project is free for participants, which also means the participants' labor is free for the organizer. So, if an attractive scheme, a clear social significance, and the technological set-up can be provided, it should be possible to ask internet users to participate in remote recycling operations. The following problems must be solved to implement such a system:
- A proper and understandable explanation of the social significance of recycling small-sized e-waste.
- Easy-to-use software that can be downloaded from the project website.
- An attractive scheme to draw people into the remote recycling operation.
- An algorithm to translate on-screen operations into manipulation commands.
- A quality assurance system for cases where separation by network users is insufficient.
- A method to avoid demand conflicts.
- Etc.
Some hardware set-ups are also necessary:
- A simple and robust manipulator.
- A high-resolution web camera.
- Sorting bins.
- A conveyor to transfer roughly crushed particles to the manipulator.
- Etc.
As a scheme to motivate network users to participate in the recycling operation, we hereby propose an "online material separation game." Online games are now very common, and many varieties of games can be played over the internet.
For example, "Tetris online [15]" is a well-known game in which the player sorts falling blocks into the right places in the right orientations. It should be possible to develop game-like software that sorts particles into recycling bins while synchronizing the camera vision, the screen, and the physical manipulation.
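As a purely illustrative sketch of the "translate on-screen operations into manipulation commands" step, the mapping could look like the following. The bin layout, the command format, and the function names are hypothetical assumptions for illustration, not part of any existing system:

```python
# Hypothetical sketch: turning a player's on-screen sorting choice into a
# pick-and-place command for a simple manipulator. The bin positions,
# command dictionary and particle representation are illustrative only.
BINS = {"metal": (0.10, 0.40), "pcb": (0.25, 0.40), "plastic": (0.40, 0.40)}  # bin x, y (m)

def sort_command(particle_xy, chosen_bin):
    """Map a particle position and the user's bin choice to a robot command."""
    if chosen_bin not in BINS:
        raise ValueError(f"unknown bin: {chosen_bin}")
    px, py = particle_xy
    bx, by = BINS[chosen_bin]
    # One atomic command: pick at the particle, place over the chosen bin.
    return {"op": "pick_place", "pick": (px, py), "place": (bx, by)}

cmd = sort_command((0.33, 0.12), "pcb")
print(cmd)  # pick at the particle position, place over the 'pcb' bin
```

In a real system, this layer would also need the quality-assurance and conflict-avoidance mechanisms listed above, since several network users may target the same particle.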
4. Conclusions

Prompted by the newly started recycling legislation for small-sized e-waste, this paper explained that an economically feasible recycling system is necessary. Since the new legislation does not require consumers to pay a recycling fee, the recycling system must be self-profitable. To reduce the recycling cost, which is dominated by labor cost, the authors earlier proposed a concept for remote recycling operations, named tele-inverse manufacturing, in 2009. This paper proposed applying the same concept to the recycling of small-sized e-waste, where the cost issue is even more critical than for large and medium-sized e-waste.
Considering the technological difficulties of tele-operation and practical recycling processes, the paper proposed that the separation of PCB-origin particles can be operated remotely based on visual information. By combining this with magnetic separation, electrostatic separation, etc., the total material separation process can be automated. For the remote operations, the paper proposed two types: call-center-type concentrated operation and online-game-type distributed operation. Both have the potential to greatly reduce the total recycling cost. Since this paper proposed only the basic concept, many research efforts in software and hardware development are necessary to implement the system. Nevertheless, the authors conclude that remote recycling is a promising way to operate a social system for material recycling of small-sized e-waste efficiently and profitably.
References
[1] Ministry of the Environment and Ministry of Economy, Trade and Industry, Report of the study group on recovery of rare metals and proper processing of used small household appliances, 2010. (In Japanese)
[2] N. Mishima, K. Mishima, A Study on a Systematic Approach to Manage Used Small Home Appliances, Proc. of Electronics Goes Green 2012, paper 2148, 2011.
[3] K. Halada, Material Japan 46 (2007) 543-548.
[4] http://www.japanmetaldaily.com/market/
[5] www.meti.go.jp/policy/recycle/main/admin_info/committee./j/04/j04_3-3.pdf (In Japanese)
[6] S. Murakami et al., Average Lifespan of Mobile Phones and in-Use Hibernating Stocks in Japan, Journal of LCA, Vol. 5, No. 1 (2010) 138-144.
[7] M. Matsumoto et al., Proposal and feasibility assessment of tele-inverse manufacturing, International Journal of Automation Technology, Vol. 3, No. 1 (2009) 11-18.
[8] K. Takahashi et al., Resource Recovery from Mobile Phone and the Economic and Environmental Impact, J. Japan Inst. Metals, Vol. 73, No. 9 (2009) 747-751. (In Japanese)
[9] http://www.comgeo.net/archives/1183, accessed 06/04/14. (In Chinese)
[10] http://www.meti.go.jp/press/20100622003/20100622003-2.pdf, accessed 31/03/14. (In Japanese)
[11] http://www.tains.tohoku.ac.jp/news/st-news-19/0613.html, accessed 31/03/14.
[12] G. Chao et al., Liberation characteristic and physical separation of printed circuit board (PCB), Waste Management 31 (2011) 2161-2166.
[13] http://www.democracychronicles.com/outsourcing-explosion-having-deep-impact-in-philippines/, accessed 06/04/14.
[14] http://setiathome.ssl.berkeley.edu/, accessed 06/04/14.
[15] http://www.tetrisfriends.com/, accessed 06/04/14.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-54
A Value Creation Based Business Model for Customized Product Service System Design

Yu-Ting CHEN and Ming-Chuan CHIU1
National Tsing Hua University, Taiwan
Abstract. With the rise of the sustainable production concept and intense global industrial competition, companies face growing pressure to retain profit. Beyond working to reduce environmental impact, a company should identify a competitive business model to stand out from others. The product-service system (PSS) has been proposed as a solution. PSS is a concept that seeks to balance economic, environmental, and social aspects by adding service to products to elevate product value and improve product utilization. Although a PSS seems to be a win-win idea for both customers and suppliers, companies still struggle to develop customized PSSs. The purpose of this study is to provide a process that can guide companies to evaluate and develop their own profitable and sustainable PSS business model based on core competence. Two umbrella-related cases are applied to show that this methodology can assist a company in developing a customized PSS business model. The result is presented in a value interaction model.

Keywords. Product-service system, Business model, Value creation
Introduction

With the rise of sustainable production concepts and global industrial competition, companies face growing pressure to retain profit. The product-service system (PSS), first proposed by Goedkoop et al. (1999), is regarded as a solution that helps companies overcome these problems. A PSS is a system that integrates products, services, and the related network. It is a potentially valuable concept for environmental sustainability and value innovation, aiming to balance economic, environmental, and social aspects by adding service to products to elevate product value and improve utilization efficiency. Moreover, a well-designed PSS can become a highly profitable business model. Despite the many benefits of a PSS, companies still cannot develop their own specialized PSS because they do not understand what a PSS is or what is needed to conduct one. Most related literature focuses only on PSS concept development and model performance evaluation; there is a lack of a PSS development process to assist companies in constructing a customized PSS. The purpose of this study is to

1 Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu, Taiwan, 30013, R.O.C. e-mail: [email protected]
provide a process that can guide companies, step by step, to evaluate and develop their own profitable and sustainable PSS business model. Furthermore, several suggestions on value creation are provided to inspire companies to create their unique value supply chains. Finally, a visualized PSS value business model is provided to represent the PSS model.
1. Literature Review

1.1. Product Service System (PSS)

The first formal and widely adopted definition of a PSS was given by Goedkoop et al. (1999): "A product service-system is a system of products, services, networks of 'players' and supporting infrastructure that continuously strives to be competitive, satisfy customer needs and have a lower environmental impact than traditional business models." It also defined three key elements of a PSS: products, services, and networks. Beyond this definition, various classifications have been proposed to distinguish different PSSs. Generally, PSSs are divided into three main categories: product-oriented, use-oriented, and result-oriented (Tischner et al. 2002). To explain PSS models more easily, several representations have been developed. Geum and Park (2011) constructed a "product-service blueprint" consisting of three main areas (product, service, and supporting area), with symbols that show the scenario of the system. Lim et al. (2012) provided a five-row, nine-column board that visualizes the PSS model; the board can be customized to other decision types, and users can evaluate a new PSS model by comparing As-Is and To-Be models. Morelli (2006) applied an interaction map to represent the partnerships in a PSS; it emphasizes the groups' involvement in the system and can be used to identify the interactions between different groups. Although a PSS can bring fruitful benefits, not every PSS is profitable. Mont (2002) indicated that a PSS may be profitable only under certain circumstances: first, if the costs of the use and disposal phases are internalized; second, if the product has a high residual value at the disposal stage; and third, if the new scenario generates additional profits.

1.2. Business Model

Amit and Zott (2012) defined a business model as "A system of interconnected and interdependent activities that determines the way the company does business with its customers, partners and vendors." Magretta (2002) pointed out the importance of a business model, suggesting that business models are "just stories" that explain how firms work, not a strategy. A business model is composed of several elements. Sainio et al. (2011) stated that business model elements comprise value creation drivers, design elements, and value exchange at three main interfaces (upstream suppliers, downstream sales partners, and customers). Johnson et al. (2008) thought that a business
model should include a customer value proposition, a profit formula, key resources, and key processes, and provided rules for users to evaluate the business model. Mason and Spring (2011) divided the business model into three main elements, namely technology, market offering, and network architecture, each with several sub-elements. The technology element includes product, core, process, and infrastructure; the market offering involves artifacts, value, activities, and access; and the network architecture covers transactions, relationships, capabilities, and markets/standards. For business model design, Morris et al. (2005) developed an integrative framework consisting of three levels of decision making with six components at each level. It presents a strategic framework for conceptualizing a value-based business and can assist the user to design, describe, categorize, critique, and analyze a business model for any type of company.

1.3. Value Creation

Regarding value creation, Walter et al. (2001) verified that value can be perceived by the supplier through the fulfillment of direct and indirect functions. Vargo et al. (2008) introduced the concept of a logical conversion from goods-dominant logic to service-dominant logic; this logic is tied to value-in-use and concludes that value is always co-created by providers and customers. Prahalad and Ramaswamy (2004) indicated that co-creation is more than co-marketing and can co-shape customers' experiences. Makkonen and Komulainen (2014) conducted empirical research to investigate how perceived value impacts the new service development (NSD) process. From the above literature, it can be seen that these topics have been discussed separately in their respective fields; although they belong to different research areas, parts of these ideas overlap.
Although Beuren et al. (2013) noted that the business model should be considered part of a PSS, few integrated studies have combined these topics to create a comprehensive methodology. The purpose of this study is therefore to integrate the PSS and business model perspectives and develop a comprehensive PSS business model.
2. Methodology

In this section, a simple evaluation, a guideline framework, and a visualized PSS value business model are provided. Core competencies, supply chain position, and product/service properties are used to evaluate and classify a company into one of five types of PSS business model: product related service, advice and consultancy, product lease or renting, pay per service unit, and functional result. After identifying the suitable type for the company, suggestions drawn from existing examples are provided to help the company construct its own business model and increase its value. Finally, a visualized PSS value business model is provided to represent the value the company can obtain from this business model. A more detailed introduction is presented in the following sections.
Y.-T. Chen and M.-C. Chiu / A Value Creation Based Business Model
57
2.1. Evaluate the Status Quo of the Company

In this section, core competencies, supply chain position, and product/service properties are applied to evaluate the company and classify it into a suitable PSS type. Following Tukker (2004), and for ease of distinction, this study classifies the different PSS models into five types, namely product related service, advice and consultancy, product lease or renting, pay per service unit, and functional result. These PSS types are introduced in detail below:

• Product related service: Service is sold together with products, such as after-sales service, warranty and maintenance, supply of consumables, and take-back agreements.

• Advice and consultancy: Information on use, or related information, is provided to the user. For example, a stroller retailer could provide information not only on how to use the stroller but also on how to look after a baby.

• Product lease or renting: This is a common business model in daily life. In this type of PSS, the provider retains ownership of the product; what the user buys is only the right to use it. Besides lease or renting, product sharing and product pooling can also be considered part of this type.

• Pay per service unit: The user pays only according to the level of use. The most common example is a copy service: in a copy shop, you pay according to how many pages you have copied.

• Functional result: A final result is sold and guaranteed to the customer. A typical case is GE changing its business model from selling an engine to selling flight hours.
Moreover, this study collected several existing PSSs and classified their success factors into core competencies, supply chain position, and product/service properties. The following sections explain how these success factors affect the PSS development process.

2.1.1. Analyze Core Competency
A core competency is a capability that helps a company enter different markets under different conditions and at different times. It is difficult for others to imitate, and it lets customers perceive its value. Following the smiling curve, first introduced by Stan Shih, the founder of Acer, this study distinguishes six types of core competency: R&D, technology, manufacturing, marketing and sales, after-sales service, and branding. Different core competencies lead a company to different strategies. For example, a company whose core competency is marketing and sales would have widespread channels or retail partners and lower barriers to reaching customers; it would therefore have a good chance of profitably conducting 'product lease or renting' and 'pay per service unit' business models.
2.1.2. Analyze the Supply Chain Position
Different supply chain positions have different missions in creating value, and a comprehensive PSS network requires all members of the supply chain to work together. For example, a manufacturing and assembly company has greater ability to control its products and more detailed information about them, so it could rent or lease its products directly and guarantee the quality of that service. It could also provide information to guide users in using the products, and repair and maintain the products itself. A company can choose one or several preferred types to develop its business model.

2.1.3. Analyze Product/Service Properties
Different products or services have different limitations when shifting to a PSS. A company should find the suitable product type and develop a specialized PSS.

2.2. How to Build a Customized PSS Business Model

The benefits generated from a PSS are introduced and provided to assist users in designing and conducting their suitable PSS. A visualized PSS value business model is also provided to make the PSS business model easier to understand.

2.2.1. The Benefits of PSS
The benefits from a PSS can be categorized as economic, environmental, and social. Some of these benefits can be quantified in monetary terms, while others cannot. Table 1 lists the benefits of each type of PSS. A company can refer to this table, evaluate its requirements and develop a customized PSS business model.

2.2.2. The Representation of PSS
To conduct a PSS, an understandable representation of the model is important. In this section, a visualized PSS value business model is introduced to show the relationships among stakeholders and the benefits generated from the PSS business model. The visualized PSS value business model is shown in Figure 1.
3. Case Study

Two umbrella cases are presented to demonstrate how to develop a customized PSS business model following this methodology. Since different environments lead to different PSS scenarios, the cases in this paper take Taiwan as the background for the PSS business model. The summer climate in Taiwan is hot and humid with frequent afternoon thunderstorms; although the rain is heavy, each thunderstorm is short. An umbrella is thus essential for Taiwanese people, yet it is inconvenient to carry one all day.
3.1. Industry Analysis

To find the appropriate PSS type for the two cases, we first analyze the whole industry in terms of core competency, supply chain position, and product/service properties. Regarding core competency, we found relationships between competencies and the five PSS types. A company that is good at R&D and technology is suggested to develop 'Product related service' and 'Advice and consultancy'. A company with advantages in manufacturing is recommended to establish 'Advice and consultancy' and 'Product lease or renting'. A company good at marketing and sales can conduct 'Product lease or renting' or 'Pay per service unit'. A company good at after-sales service should develop 'Product related service', 'Advice and consultancy', and 'Functional result'. Finally, a company that owns an enterprise brand can sell the 'Functional result'.

Table 1. The suggestions and benefits of PSS (economic, environmental and social benefits per PSS type).

• Product related service: Economic: maintenance fees; supply and sale of consumables. Environmental: extend the products' life; centralized recycling of wastes (take-back agreement). Social: high consumer acceptance.
• Advice and consultancy: Economic: consulting fees. Environmental: extend the products' life. Social: reduce the barrier of information exchange.
• Product lease or renting: Economic: rent; user fees; advertising fees (products used as an advertising platform). Environmental: improve product utilization. Social: become a public participation activity; equity due to user fees.
• Pay per service unit: Economic: user fees; advertising fees (products used as an advertising platform). Environmental: improve product utilization. Social: equity due to user fees.
• Functional result: Economic: improve product value and price; improve consumer loyalty. Environmental: use environmentally friendly material. Social: form a good enterprise image; a health and safety promise.
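Purely as an illustration, the competency-to-PSS suggestions listed at the start of this section can be sketched as a lookup table. The dictionary and function names below are assumptions for the sketch, not part of the paper's methodology.

```python
# Illustrative encoding of the competency-to-PSS suggestion rules from
# Section 3.1. Names are assumptions, not from the paper.

PSS_SUGGESTIONS = {
    "R&D": ["Product related service", "Advice and consultancy"],
    "technology": ["Product related service", "Advice and consultancy"],
    "manufacturing": ["Advice and consultancy", "Product lease or renting"],
    "marketing and sales": ["Product lease or renting", "Pay per service unit"],
    "sales after service": ["Product related service", "Advice and consultancy",
                            "Functional result"],
    "branding": ["Functional result"],
}

def suggest_pss_types(core_competencies):
    """Order-preserving union of suggested PSS types for a company."""
    suggestions = []
    for competency in core_competencies:
        for pss_type in PSS_SUGGESTIONS.get(competency, []):
            if pss_type not in suggestions:
                suggestions.append(pss_type)
    return suggestions
```

For a company strong in both R&D and manufacturing, the lookup returns the union of the two suggestion lists, mirroring the additive nature of the methodology.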
The analyses of supply chain position and product/service properties are shown in Tables 2 and 3. We first take an umbrella manufacturing company as an example. A Company is devoted to producing functional umbrellas, such as windproof, quick-drying, lightweight, and special-form umbrellas. It holds substantial R&D, technology and manufacturing know-how; for consumers, it is a reliable brand when buying an umbrella, though its umbrellas are more expensive than others'. After evaluating the status quo of A Company, the suitable types for it are 'Product related service' and 'Advice and consultancy'.
Table 2. Analysis of the supply chain position. Columns, in order: product related service / advice and consultancy / product lease or renting / pay per service unit / functional result.

• Raw material refining: - / - / - / - / guarantee the quality.
• Components manufacturing: repair the components / provide the information / - / - / guarantee the quality.
• Manufacturing and assembly: repair the products / provide the information / provide rental or leasing service directly / - / guarantee the quality.
• Logistics: deliver the products / - / deliver the products / - / guarantee the quality.
• Channel: provide the platform / provide the information / provide the platform for rental or leasing / provide the platform for customers to use / online download.
• Waste treatment and disposal: process the waste properly / - / - / - / green take-back agreement.
Table 3. Analysis of the product/service properties suited to each PSS type.

• Product related service: repair and maintenance is not easy; the product's life can be extended through repair; consumables are needed.
• Advice and consultancy: the product is difficult to use; some professional knowledge is related to the product.
• Product lease or renting: the price is high; the frequency of use is low; repair and maintenance is not easy; the product has timeliness; ownership belongs to the provider.
• Pay per service unit: consumables are needed (e.g., inks); the price is high; the frequency of use is low.
• Functional result: consumers have unmet needs; the customer can still get the result without using the product; the provider still holds the use rights.
3.2. A Company: An Umbrella Manufacturing Company

A Company could provide customers with a warranty on functional umbrellas, as well as customized services. Customers could also bring old or broken umbrellas back for recycling, and A Company could give them a discount on new products. This would improve the recycling rate of broken products, and the company could reuse the materials to reduce the pollution caused by resource extraction. Through the warranty, product life can be extended, and with centralized recycling of wastes, the environmental impact can be reduced. In addition, the discount could improve customers' intention to purchase new products. This creates a material circulation and a good customer relationship, leading to a sustainable network on both the economic and environmental sides. Some R&D and manufacturing know-how could also be sold to other companies. The visualized PSS business model is shown in Figure 2.
Figure 1. The visualized PSS value business model
Figure 2. The PSS business model of A Company
3.3. B Company: A Convenience Store

The second company is a convenience store in Taiwan. In addition to daily commodities, it also sells umbrellas, and its core competency is marketing and sales. Its umbrellas are cheap; customers usually buy one when it suddenly rains and discard it as soon as it breaks. The PSS suggested for this company is 'Product lease or renting': customers could rent an umbrella at the store when it suddenly rains and return it when the rain stops. This not only improves product utilization but also extends the products'
life. Moreover, it has a chance to become a public participation activity. B Company could also place advertisements on the rented umbrellas and earn advertising fees. The visualized value chain of B Company is shown in Figure 3. On the other hand, because the new PSS business model is totally different from the original one, B Company must change its business model. Without umbrellas for sale, it needs to prepare more products for renting, and the flow of products should be forecast and recorded to avoid situations in which no umbrellas are available for customers to rent. Other supporting measures should also be considered: How are products rented? How much is the rent? How many products should be prepared for renting? If this business successfully meets customers' needs, it could attract advertisers to invest, so that B Company gains additional profit. On the environmental side, if run efficiently, the business could not only increase the usage rate of the products but also reduce redundant products.
Figure 3. The value chain of B Company
3.4. Discussion

Several observations can be made from these two cases. First, different companies build different PSS business models: although both A and B sell umbrellas, with this methodology they can build totally different PSS business models. Second, to accomplish some PSS types such as product lease or renting, pay per service unit, or functional result, some basic services may have to be absorbed by the company itself. Third, conducting a PSS depends on local conditions; different environments lead to different PSS scenarios, so companies should identify their own strengths and build a suitable PSS business model instead of copying existing successful examples. In addition, the case of A Company shows that the PSS business model is additive: a company can select one or more PSS types and build its own suitable business models concurrently.
4. Conclusion

PSS provides a solution for companies to attain sustainability from economic, environmental, and social perspectives. With the proposed method, companies can develop customized PSS business models and build sustainable networks. Moreover, companies can enhance profit and improve competitiveness by exploring new market segments. A value flow diagram shown on the visualized PSS business model assists companies in understanding what benefits they could gain while promoting a PSS. Future research will focus on expanding the set of existing PSS cases and building a case-based reasoning system. All possible success factors will also be investigated in detail to reinforce the relationship between these success factors and the different PSS models.
References

[1] Amit, R. and Zott, C., Creating value through business model innovation, MIT Sloan Management Review 53(3) (2012), 41-49.
[2] Beuren, F.H., Gomes Ferreira, M.G. and Cauchick Miguel, P.A., Product-service systems: a literature review on integrated products and services, Journal of Cleaner Production 47 (2013), 222-231.
[3] Geum, Y. and Park, Y., Designing the sustainable product-service integration: a product-service blueprint approach, Journal of Cleaner Production 19(14) (2011), 1601-1614.
[4] Goedkoop, M., van Halen, C., te Riele, H. and Rommens, P., Product Service Systems, Ecological and Economic Basics, Ministry of Housing, Spatial Planning and the Environment, Communications Directorate, 1999.
[5] Johnson, M.W., Christensen, C.M. and Kagermann, H., Reinventing your business model, Harvard Business Review 86(12) (2008), 57-68.
[6] Lim, C.H., Kim, K.J., Hong, Y.S. and Park, K., PSS Board: a structured tool for product-service system process visualization, Journal of Cleaner Production 37 (2012), 42-53.
[7] Magretta, J., Why business models matter, Harvard Business Review (2002), 86-92.
[8] Makkonen, H.S. and Komulainen, H., Networked new service development process: a participant value perspective, Management Decision 52(1) (2014), 2-2.
[9] Mason, K. and Spring, M., The sites and practices of business models, Industrial Marketing Management 40(6) (2011), 1032-1041.
[10] Mont, O.K., Clarifying the concept of product-service system, Journal of Cleaner Production 10(3) (2002), 237-245.
[11] Morelli, N., Developing new product service systems (PSS): methodologies and operational tools, Journal of Cleaner Production 14(17) (2006), 1495-1501.
[12] Morris, M., Schindehutte, M. and Allen, J., The entrepreneur's business model: toward a unified perspective, Journal of Business Research 58(6) (2005), 726-735.
[13] Prahalad, C.K. and Ramaswamy, V., Co-creation experiences: the next practice in value creation, Journal of Interactive Marketing 18(3) (2004), 5-14.
[14] Sainio, L.M., Saarenketo, S., Nummela, N. and Eriksson, T., Value creation of an internationalizing entrepreneurial firm: the business model perspective, Journal of Small Business and Enterprise Development 18(3) (2011), 556-570.
[15] Tischner, U., Verkuijl, M. and Tukker, A., First Draft PSS Review, SusProNet Report 15, 2002.
[16] Tukker, A., Eight types of product-service system: eight ways to sustainability? Experiences from SusProNet, Business Strategy and the Environment 13(4) (2004), 246-260.
[17] Vargo, S.L., Maglio, P.P. and Akaka, M.A., On value and value co-creation: a service systems and service logic perspective, European Management Journal 26(3) (2008), 145-152.
[18] Walter, A., Ritter, T. and Gemünden, H.G., Value creation in buyer-seller relationships: theoretical considerations and empirical results from a supplier's perspective, Industrial Marketing Management 30(4) (2001), 365-377.
[19] Peruzzini, M. and Germani, M., Design for sustainability of product-service systems, International Journal of Agile Systems and Management 7(3) (2014), in press.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-64
Composite Aircraft Components Maintenance Cost Analysis

Xiaojia ZHAO (1), Massoud URDU, Wim J.C. VERHAGEN and Richard CURRAN
Delft University of Technology, Kluyverweg 1, 2629 HS Delft, The Netherlands
Abstract. Maintenance costs of composite aircraft components are generally estimated from actual maintenance practices. Such estimates are available to the operating group but not to design and manufacturing engineers. Moreover, the influence of component attributes is not considered, missing the link from the component, through maintenance task scheduling, to cost estimation. In this paper, a detailed maintenance cost estimation method is presented. Rules related to component maintenance are extracted to simulate the rationale behind the task scheduling process. Analyses based on a set of maintenance intervals and statistical maintenance times are carried out to determine maintenance cost. To identify the influence of composite material, maintenance labor cost for composite components is highlighted in particular. A case study is performed on the A330 composite rudder. The results show that composite maintenance has a major influence on the overhaul of aircraft components. This research illustrates the capability to perform maintenance cost estimation by linking component design to maintenance operations. Assisted by knowledge based engineering techniques and genetic-causal cost modeling, the influences of sub-assembly design on life cycle implications are identified.

Keywords. Maintenance program generation, maintenance cost estimation
Introduction

Starting from 1968, the maintenance process evolved from Maintenance Steering Group (MSG)-1 to MSG-2 in 1970, and then to MSG-3, which is in current use. MSG-3 is accepted by the airworthiness authorities, the commercial airplane manufacturers and most of the major business aircraft manufacturers [1, 2]. Currently, MSG-3 is employed as a standard to determine the essential scheduled maintenance for new airplanes. MSG-3 is based on a rigorous knowledge-based decision tree analysis relating the failures of the parts to the failures of the aircraft system [3]. Along with the evolution of maintenance programming, maintenance cost estimation models have been developed accordingly, ranging from Liebeck's maintenance cost model based on airframe weight, engine thrust and trip time, to Dhillon's maintenance cost estimation based on components' reliabilities [4, 5]. Although airlines adopt the flexible MSG-3 program, the logic behind the maintenance planning mostly relies on part failures and operating rules. Therefore, to perform maintenance cost estimation during the design phase, the maintenance scheduling can be automated for designers. Furthermore, current cost estimation mainly emphasizes the total maintenance cost for the airline, whereas the
Corresponding Author.
X. Zhao et al. / Composite Aircraft Components Maintenance Cost Analysis
65
maintenance cost of each component or system, especially of components made of composite materials, is not addressed. Moreover, the cost attributed to each maintenance task is not identified. This leads to limited understanding of the relation between a component's attributes and its maintenance cost, which characterizes the disconnect between design and operations. This research aims to build a maintenance cost model that links the aircraft design parameters and the operating parameters with the maintenance scheduling process, and eventually with the maintenance cost estimation. A methodology for detailed maintenance cost analysis of aircraft components is presented. Adding maintenance scheduling and cost analysis capabilities based on Knowledge Based Engineering (KBE) techniques enables rule/knowledge extraction and the automation and acceleration of the analysis process.
1. Methodology

The method is established on the basis of Knowledge Based Engineering (KBE) techniques and the genetic-causal cost modeling approach. KBE emphasizes the automation of repetitive activities typically encountered in the product development process, involving KBE techniques such as knowledge extraction, formalization and reuse [6]. Similar to component design, operating processes such as aircraft maintenance are also repetitive, rule-based activities, to which KBE techniques can be applied. The component breakdown and the maintenance program generation for maintenance cost estimation are elaborated in sections 1.1 and 1.2. The maintenance cost estimation, especially the scheduled maintenance labor cost estimation method, follows in section 1.3. Genetic-causal cost modeling is employed for the analysis. This modeling approach stresses the causality between the cost driving parameters and the induced cost; it therefore focuses on connecting the product itself to the relevant cost [7]. Since the natural causes of the maintenance cost are the tasks performed on each maintenance item according to its failure conditions, the product design and the maintenance cost become associated when the genetic-causal approach is applied. With the assistance of KBE and genetic-causal cost modeling, this research is able to link the product sub-assembly design to its life cycle effect in terms of maintenance cost.

1.1. Component Breakdown for Maintenance

The component breakdown is required for the scheduling of the maintenance process, since tasks are applied according to item failures. An aircraft is divided into numbered zones by the manufacturer according to the standard ATA iSpec 2200 [8]. An example of aircraft zones is shown in Figure 1. A component is covered by one or more zones and can be further divided into functional parts, connections and relevant systems, see Figure 2. At least one functional part is located in each zone.
A part is a generalization of the main structure, such as skin, spar and rib, and of miscellaneous parts including fasteners, fittings and attachments. A connection mainly refers to the interface between two or more parts. A relevant system represents the hydraulic or electrical system that keeps the component functioning properly. Large parts and complex systems are distributed over one or more zones. The maintenance tasks are then scheduled for each item allocated in the different zones.
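The zone/part/connection/system decomposition of Figure 2 can be sketched with simple data classes. The class and field names below, and the example zone number, are assumptions for illustration, not taken from the paper.

```python
from dataclasses import dataclass, field

# Illustrative data classes for the component breakdown of Figure 2:
# a component spans one or more zones; each zone holds functional parts,
# connections between parts, and relevant systems.

@dataclass
class Part:
    name: str        # e.g. "skin", "spar", "rib", or a miscellaneous part
    part_type: str   # "main structure" or "miscellaneous"

@dataclass
class Connection:
    name: str                                  # interface between parts
    parts: list = field(default_factory=list)

@dataclass
class RelevantSystem:
    name: str                                  # e.g. "hydraulic", "electrical"

@dataclass
class Zone:
    number: int                                # ATA iSpec 2200 zone number
    parts: list = field(default_factory=list)
    connections: list = field(default_factory=list)
    systems: list = field(default_factory=list)

@dataclass
class Component:
    name: str
    zones: list = field(default_factory=list)  # one or more zones

    def maintenance_items(self):
        """All items (parts, connections, systems) that receive tasks."""
        items = []
        for zone in self.zones:
            items.extend(zone.parts + zone.connections + zone.systems)
        return items
```

Maintenance tasks are then scheduled per item returned by `maintenance_items()`, one list per zone, as described above.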
Figure 1. Example of aircraft zones [9].
Figure 2. Component class diagram.
1.2. Maintenance Program Generation

1.2.1. Maintenance Steering Group-3

The summarized MSG-3 maintenance program is shown in Figure 3. Its objective is to produce scheduled maintenance tasks performed by the Maintenance Working Groups (MWG). The causality of the maintenance program is based on the part's or component's function, its failure modes, the failure effects, and the failure causes [10]. MSG-3 considers three maintenance program groups: the systems & powerplant maintenance program, the structures maintenance program and the zonal maintenance program. The systems & powerplant group provides the maintenance program for aircraft systems and engines. The structures group focuses on the maintenance program of the airframe. The zonal inspection group deals with the maintenance program for items in each pre-divided zone area. Depending on the safety, operational and economic aspects of failures, maintenance tasks with specific actions, intervals and durations are assigned. The tasks are listed in a sequence according to their difficulty and cost, from lower level to higher level.
Figure 3. MSG-3 maintenance program (summarized according to [1,2,11-13]).
1.2.2. Maintenance task scheduling

1) Predict the number of times each maintenance task is performed

According to the task thresholds (the deadline for the first maintenance) and intervals, the number of times n_i that maintenance task i is performed in a Fiscal Year (FY) can be predicted. Extracted rules for planning are incorporated. For a maintenance task with a threshold value and a single check interval, Eq. (1) is applied:

n_i = floor(FH_post / interval_i) - floor(FH_pre / interval_i),  if FH_pre < FH_post < threshold_i
n_i = 1 + floor((FH_post - threshold_i) / interval_i),  if FH_pre < threshold_i <= FH_post
n_i = floor((FH_post - threshold_i) / interval_i) - floor((FH_pre - threshold_i) / interval_i),  if threshold_i <= FH_pre < FH_post
(1)

where floor[.] rounds down to the nearest integer; i = 1, 2, 3, ..., k indexes the maintenance tasks; n_i is the number of times maintenance task i is performed in a FY; FH is the aircraft flight hours in a FY, with FH = FH_post - FH_pre; FH_post is the cumulative aircraft flight hours since the aircraft was new at the end of the FY (equal to the average fleet age in this research); FH_pre is the cumulative aircraft flight hours since the aircraft was new at the start of the FY; threshold_i is the threshold interval for maintenance task i; and interval_i is the maintenance interval for maintenance task i.

For a maintenance task with a threshold value and two check intervals, the interval expiring first shall apply. This is interpreted in Eq. (2): both intervals are considered, and the number of times task i is performed is the sum of the counts based on each interval after eliminating duplicated operations:

n_i = n_{i,1} + n_{i,2} - n_{i,duplicate}
(2)

where n_{i,1} and n_{i,2} are the numbers of times task i is performed in a FY based on interval_{i,1} and interval_{i,2} respectively (each calculated with Eq. (1)), and n_{i,duplicate} is the number of duplicated operations when both intervals apply: whenever l_{i,1} x interval_{i,1} = l_{i,2} x interval_{i,2}, n_{i,duplicate} is incremented by 1, with l_{i,1} in (floor[FH_pre / interval_{i,1}], floor[FH_post / interval_{i,1}]] and l_{i,2} in (floor[FH_pre / interval_{i,2}], floor[FH_post / interval_{i,2}]].
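The counting rules of Eqs. (1) and (2) can be sketched as follows. This is a minimal illustration assuming consistent time units; the function and argument names are not from the paper, and the duplicate-elimination helper is written for the zero-threshold case.

```python
import math

# Sketch of Eqs. (1) and (2). fh_pre / fh_post are the cumulative
# flight hours at the start and end of the fiscal year.

def task_count(fh_pre, fh_post, interval, threshold=0.0):
    """Eq. (1): times a task with one check interval is performed in a FY."""
    if fh_post < threshold:
        # both ends of the FY lie before the first due date
        return math.floor(fh_post / interval) - math.floor(fh_pre / interval)
    if fh_pre < threshold <= fh_post:
        # the threshold is crossed during this FY
        return 1 + math.floor((fh_post - threshold) / interval)
    # the threshold was already passed before the FY started
    return (math.floor((fh_post - threshold) / interval)
            - math.floor((fh_pre - threshold) / interval))

def task_count_two_intervals(fh_pre, fh_post, interval_1, interval_2):
    """Eq. (2), zero-threshold case: counts from both intervals are
    summed and coinciding due times are removed as duplicates."""
    n1 = task_count(fh_pre, fh_post, interval_1)
    n2 = task_count(fh_pre, fh_post, interval_2)
    due_1 = {l * interval_1 for l in range(math.floor(fh_pre / interval_1) + 1,
                                           math.floor(fh_post / interval_1) + 1)}
    due_2 = {l * interval_2 for l in range(math.floor(fh_pre / interval_2) + 1,
                                           math.floor(fh_post / interval_2) + 1)}
    return n1 + n2 - len(due_1 & due_2)
```

For example, a task with a 500 FH interval and no threshold, over a year in which the aircraft flies from 0 to 1000 cumulative flight hours, is counted twice.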
During the calculation, the units of the time variables must be consistent, in hours (hr) or years (YE).

2) Allocate maintenance tasks to maintenance packages

The most commonly used work packages are the A-check, C-check and D-check. Table 1 shows the letter check descriptions, intervals and durations, which formulate the allocation rules.

Table 1. Maintenance letter checks (adapted according to [2,14,15]).

• A check: General inspection of the interior/exterior of the airplane with selected areas opened; example tasks: LU/SV, OP/VC. Duration: 24 man-hours. Interval: biweekly to monthly / 500-800 FHs / 200-400 flight cycles. Location: at gate/hangar. Operation: in service.
• C check: The whole aircraft is inspected: structural inspection of the airframe, opening access panels; example tasks: LU/SV, OP/VC, FC/IN*. Duration: up to 6,000 man-hours / 3 days to 1 week. Interval: 15 to 21 months. Location: hangar. Operation: out of service.
• D check: Major structural items are inspected: paint, exterior components, interior and equipment are removed; example tasks: FC/IN*, RS, DS. Duration: up to 50,000 man-hours / 1 to 2 months. Interval: 6 to 12 years. Location: hangar. Operation: out of service.
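The allocation rules that Table 1 formulates can be encoded as a simple interval-based predicate. The band edges below are one reading of the table, and the function name is an assumption; tasks whose intervals fall between the published bands are left unallocated in this sketch.

```python
# Hypothetical encoding of the letter-check allocation rules of Table 1:
# a scheduled task is assigned to a package from its repeat interval.

def letter_check(interval_months):
    """Allocate a maintenance task to the A-, C- or D-check package."""
    if interval_months <= 1:           # biweekly to monthly checks
        return "A check"
    if 15 <= interval_months <= 21:    # 15- to 21-month checks
        return "C check"
    if interval_months >= 72:          # 6- to 12-year checks
        return "D check"
    return "unallocated"               # falls between the published bands
```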
In addition, rules for other types of work package classification are also extracted. According to the task function, line maintenance and base maintenance are distinguished [3]: IF the task is 'departure-oriented', its interval is on a transit, daily, weekly or monthly basis, and it takes 1 to 24 man-hours, THEN it is line maintenance. Examples of line maintenance tasks are LU/SV and OP/VC; it covers the transit check and some tasks from the A check. IF the task is 'fix-oriented', THEN it is base maintenance. Examples of base maintenance tasks are FC/IN*, RS and DS; it covers some tasks from the A check, C check and D check. According to the repairability of the maintained item, tasks are grouped into preventive and corrective maintenance: IF the task is applied to a non-repairable item, THEN it is allocated to the preventive maintenance package. Examples of preventive maintenance tasks are LU/SV, OP/VC and FC/IN*; it covers the transit check, A check, C check and some tasks from the D check. IF the task is applied to a repairable item, THEN it is allocated to the corrective maintenance package. Examples of corrective maintenance tasks are RS and DS; it covers some tasks from the D check.

1.3. Maintenance cost estimation

1.3.1. Maintenance cost driving parameters

The cost driving parameters are divided into two groups: operation-relevant parameters and design-relevant parameters. The former refers to parameters at the airline information level, involving fleet type, average active fleet size, fleet/aircraft (AC) Flight Hours (FH) in a FY, fleet/AC Flight Cycles (FC) in a FY, and average fleet age. In addition, the average labor rate for maintenance activities is incorporated in the labor cost estimation; the inventory material purchase price, interest rate, storage facility cost, etc., are adopted for material cost estimation; and financial factors such as the exchange rate between the local currency and the reporting currency are included. Design-relevant parameters are detailed for each maintenance item and its corresponding maintenance tasks, including geometry, part type and material. Those parameters influence the labor time usage, which is the intermediate cost driving parameter for labor cost.

1.3.2. Maintenance cost breakdown

Total Maintenance Cost (TMC) is broken down into Direct Maintenance Cost (DMC) and Indirect Maintenance Cost (IMC) [16, 17]. DMC refers to the cost directly associated with the maintenance operations.
It mainly includes scheduled and unscheduled maintenance cost. The scheduled/unscheduled maintenance cost is the aggregation of the cost of each scheduled/unscheduled maintenance task, which is further divided into labor cost and material cost. IMC comprises tooling & equipment cost, spare & inventory material cost and overhead cost; see Figure 4 and Eqs. (3) to (7).
Figure 4. Maintenance cost breakdown.
TMC = DMC + IMC
(3)
X. Zhao et al. / Composite Aircraft Components Maintenance Cost Analysis
DMC = C_scheduled + C_unscheduled    (4)
IMC = C_tooling&equipment + C_overhead + C_spare&inventory    (5)
C_scheduled = C_labor,scheduled + C_material,scheduled    (6)
C_unscheduled = C_labor,unscheduled + C_material,unscheduled    (7)
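The breakdown in Eqs. (3) to (7) can be sketched as a small computation. This is an illustration only; all numeric inputs below are placeholder values, not data from the paper.

```python
# Hedged sketch of the cost breakdown in Eqs. (3)-(7).
# All numeric inputs are illustrative placeholders.

def total_maintenance_cost(c_labor_sch, c_mat_sch, c_labor_unsch, c_mat_unsch,
                           c_tooling, c_overhead, c_spare):
    c_scheduled = c_labor_sch + c_mat_sch          # Eq. (6)
    c_unscheduled = c_labor_unsch + c_mat_unsch    # Eq. (7)
    dmc = c_scheduled + c_unscheduled              # Eq. (4)
    imc = c_tooling + c_overhead + c_spare         # Eq. (5)
    return dmc + imc                               # Eq. (3)

tmc = total_maintenance_cost(100.0, 40.0, 30.0, 10.0, 20.0, 15.0, 25.0)
```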
1.3.3. Maintenance cost estimation model

Within the composition of TMC, a model for labor cost estimation is developed. Based on the maintenance task planning from section 1.2, the scheduled maintenance labor cost is evaluated by two types of cost performance indices.

1) Actual labor cost for a component of an aircraft from a fleet in a FY, see Eq. (8).

C_AC,labor = Σ_{i=1}^{k} (r_labor,i × n_i × MT_i × n_i,labor)    (8)
where r_labor,i is the labor rate for maintenance task i, i.e. the maintenance cost per hour (€/hr); MT_i is the maintenance time required to repair an item by performing maintenance task i (hr); and n_i,labor is the number of laborers for maintenance task i.

2) Mean labor cost for a general maintenance task applied to a component of an aircraft from a certain fleet in a FY, see Eqs. (9) to (11) (adapted from [5]).

C_task,labor,mean = FH × r_labor × MTTR / MTBF    (9)

MTTR = ( Σ_{i=1}^{k} λ_i × n_i × MT_i × n_i,labor ) / Σ_{i=1}^{k} λ_i n_i    (10)

MTBF = Σ_{i=1}^{k} λ_i × n_i × interval_i / Σ_{i=1}^{k} λ_i n_i = Σ_{i=1}^{k} n_i / Σ_{i=1}^{k} λ_i n_i    (11)
where MTTR is the mean time to repair, i.e. the average repair time including the influence of the failure rate for a component; λ_i = 1/interval_i is the failure rate of a maintenance item that can be repaired by maintenance task i, and is related to the reliability of that maintenance item; and MTBF is the mean time between failures, i.e. the average time interval including the influence of the failure rate for a component. MTTR, λ_i and MTBF are applicable to the corrective maintenance package; when a task is allocated to the preventive maintenance package, these three parameters in Eqs. (9) to (11) are replaced by the mean preventive maintenance time (MPMT), the frequency of the task (f_i) and the mean time to failure (MTTF) correspondingly. In this research, MT_i and interval_i are based on a set of statistical data from the Maintenance Planning Document.
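As a minimal illustration (not the authors' implementation), the labor-cost indices of Eqs. (8) to (11) can be coded as follows. The task tuples, rates and flight hours below are invented placeholders, with λ_i taken as 1/interval_i as defined above.

```python
# Minimal sketch (not the authors' code) of the labor-cost indices in
# Eqs. (8)-(11). Task data below are invented placeholders; in the paper,
# MT_i and interval_i come from the Maintenance Planning Document.

def actual_labor_cost(tasks, rates):
    """Eq. (8): C_AC,labor = sum_i r_i * n_i * MT_i * n_i,labor.
    tasks: list of (n_items, maint_time_hr, n_workers)."""
    return sum(r * n * mt * nl for r, (n, mt, nl) in zip(rates, tasks))

def mttr(tasks):
    """Eq. (10), with lambda_i = 1/interval_i.
    tasks: list of (n_items, maint_time_hr, n_workers, interval_hr)."""
    num = sum(n * mt * nl / iv for n, mt, nl, iv in tasks)
    den = sum(n / iv for n, mt, nl, iv in tasks)
    return num / den

def mtbf(tasks):
    """Eq. (11): sum(lam*n*interval)/sum(lam*n) reduces to sum(n)/sum(lam*n)."""
    return sum(n for n, _, _, _ in tasks) / sum(n / iv for n, _, _, iv in tasks)

def mean_task_labor_cost(fh, labor_rate, tasks):
    """Eq. (9): C_task,labor,mean = FH * r_labor * MTTR / MTBF."""
    return fh * labor_rate * mttr(tasks) / mtbf(tasks)

# Two hypothetical tasks: (n_items, maint_time_hr, n_workers, interval_hr)
tasks = [(2, 1.5, 1, 600.0), (1, 8.0, 2, 7200.0)]
c_actual = actual_labor_cost([(n, mt, nl) for n, mt, nl, _ in tasks],
                             [42.5, 42.5])            # Eq. (8)
c_mean = mean_task_labor_cost(4088, 42.5, tasks)      # Eq. (9)
```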
2. Case study: A330 rudder maintenance labor cost

The A330 is a wide-body, twin-engine aircraft type known for its low operating cost on long-haul operations. Around 11% composite material has been used, amounting to more than 10 tonnes of lightweight composite airframe structure [18]. The A330-200 rudder, a typical composite component from the shorter A330 fuselage variant, is chosen for this case study.
2.1. Operation and design properties

By considering the KLM A330-200 fleet condition [19] and the IATA summary of the world A330-200 fleet [20], the operation parameters are listed in Table 2. The average labor rate is assumed to be 42.5 €/hr (FY2013 Euro) [21]. The A330-200 rudder is made from a composite sandwich structure; the rudder material distribution and the rudder structure are shown in Table 3 and Figure 5 respectively.

Table 2. Airline operation parameters.

Fleet AC type | Airline | AC No. | Avg Age (yr) | FH/FC ratio | Daily Utilization (FH) | FH/AC (hr) | FC/AC
A330-200 | KLM | 12 | 5.9 | 4.3 | 11.2 | 4088 | 951
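As a quick sanity check (an illustration for the reader, not part of the paper's method), the operation parameters in Table 2 are mutually consistent; the 365-day fiscal year used below is an assumption.

```python
# Cross-checking the Table 2 operation parameters.
# The 365-day year is an assumption made for this check.

fh_per_ac = 4088   # flight hours per aircraft per FY (Table 2)
fc_per_ac = 951    # flight cycles per aircraft per FY (Table 2)

fh_fc_ratio = round(fh_per_ac / fc_per_ac, 1)   # matches the tabulated 4.3
daily_utilization = round(fh_per_ac / 365, 1)   # matches the tabulated 11.2 FH
```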
Table 3. Rudder material distribution.

Part | Material | Remark
Shell, outer skin | CFRP | LH & RH
Shell, core | Nomex Honeycomb | LH & RH
Shell, inner skin | GFRP | LH & RH
Spar | CFRP | front
Rib | CFRP |
Hinge, actuator | Noncomposite | Lower & upper hinge/actuator arm, fitting

Figure 5. Rudder structure [22].
2.2. Rudder breakdown and maintenance program generation
Figure 6. Rudder breakdown structure and maintenance tasks.
According to the zonal division of the A330-200, the rudder is covered by one physical zone, while its related systems are distributed in various zones from the cockpit and fuselage belly to the tail [9]. The component breakdown is shown in Figure 6. Maintenance tasks are assigned to each item based on the rules for maintenance program planning. A maintenance task is identified with a unique task number, a task interval and a maintenance time. Task intervals shown as A, C, 2C, 4C and 8C match 600 FH, 18 months, 36 months, 6 years and 10 years of operating time respectively [23]. Maintenance times, shown after the slash symbol, are given in man-hours.

2.3. Estimation results

Figures 7 to 12 illustrate the results of the rudder scheduled maintenance labor cost estimation. The cost is presented as yearly cost per aircraft of the A330-200 fleet. The calculation is based on the high-level operation parameters as well as the detailed-level task intervals and maintenance times resulting from the design itself. The cost indices shown in Figures 7 and 9 to 12 are based on the actual labor cost of the rudder according to section 1.3.3-1), which evaluates the scheduled maintenance cost in each FY using the rudder breakdown and the maintenance task planning and allocation shown in Figure 6. The cost index of Figure 8 is calculated from the mean labor cost of a general maintenance task according to section 1.3.3-2), which estimates the cost by considering the impact of the failure rates (or reliabilities) corresponding to each task. Over FY2012 to FY2022, Figure 7 shows that the cost peaks in FY2013 and FY2019, at around 26% and 27% of the total expense respectively. This predicts the years in which overhauls take place. The trend can be seen in the cumulative curve of Figure 7 and in the bar charts of Figures 9 to 12 correspondingly.
According to Figure 8, the average cost for a general maintenance task fluctuates over the period between 94 €/task and 155 €/task (FY2013 Euro). This shows that the mean labor cost per task is influenced not by the overhauls but by the failure rate (or reliability) of the maintenance items. The maintenance cost of composite materials and structural parts is emphasized in this paper. Figure 9 illustrates that the composite structures take a relatively small share of the maintenance cost in general; the expenses are concentrated in the overhaul periods. The composite structures, including spar, rib and skin, are mostly checked and repaired during overhaul, accounting for around 37% of the cost in both FY2013 and FY2019 (Figure 10). Correspondingly, in the heavy maintenance periods, maintenance tasks such as DI and GVI, allocated to the structure program group and the zonal program, account for nearly 90% of the yearly cost (Figure 11 and Figure 12).
Figure 7. Scheduled maintenance labor cost.
Figure 8. Mean labor cost per maintenance task.
Figure 9. Scheduled maintenance labor cost by material types.
Figure 10. Scheduled maintenance labor cost by part types.
Figure 11. Scheduled maintenance labor cost by program groups.
Figure 12. Scheduled maintenance labor cost by task types.
3. Conclusions and future work

This research focused on maintenance cost estimation for composite components. It outlined a cost estimation methodology that uses a component breakdown structure and maintenance program planning procedures to perform the cost calculation. Based on KBE techniques and the genetic-causal cost modeling approach, this method is able to link the product sub-assembly design to its life cycle effect from the maintenance cost perspective. Scheduled maintenance labor cost was emphasized and presented in the A330-200 rudder case. Repetitive maintenance program rules were extracted for task planning and maintenance package allocation. Compared with current estimates, the presented method drives the estimation to a more detailed level. It developed a thorough maintenance cost analysis relating structural parts and maintenance tasks. Task numbers and maintenance times generated according to part failure conditions were employed for the calculation. Both the operation and design influences were distinguished and included. This is reflected in the case study, where a detailed cost distribution by material, by part and by maintenance task could be made available to both airlines and original equipment manufacturers. Although the methodology has been developed, it is still necessary to build an application implementing and automating the entire estimation process. In order to capture the
causality between product design and labor time, it is desirable to build a parameterized estimation relationship that predicts the maintenance time and interval of each maintenance task based on part properties such as geometry and material type. A detailed material cost estimation should also be constructed, and the cost influence of aging factors and unscheduled maintenance cost should be further included in the model.
References

[1] FAA, "Advisory Circular: Maintenance Review Board Procedures", AC 121-22A, 1997.
[2] Ackert, P.S., "Basics of Aircraft Maintenance Programs for Financiers", 2010, URL: http://www.aircraftmonitor.com/uploads/1/5/9/9/15993320/basics_of_aircraft_maintenance_programs_for_financiers___v1.pdf.
[3] Ghobbar, A., "Aircraft Maintenance Engineering", Encyclopedia of Aerospace Engineering, John Wiley & Sons, pp. 1-14, 2010.
[4] Liebeck, R.H., et al., "Advanced Subsonic Airplane Design and Economic Studies", NASA report, 1995, pp. 15-16.
[5] Dhillon, B.S., "Engineering Maintenance: A Modern Approach", CRC Press, 2002, ISBN 1-58716-142-7.
[6] La Rocca, G., "Knowledge Based Engineering Techniques to Support Aircraft Design and Optimization", PhD thesis, TU Delft, 2011.
[7] Curran, R., Raghunathan, S., et al., "Review of aerospace engineering cost modelling: The genetic causal approach", Progress in Aerospace Sciences, 40(8): 487-534, 2004, doi:10.1016/j.paerosci.2004.10.001.
[8] ATA, "iSpec 2200: Information Standards for Aviation Maintenance", 2012, URL: https://publications.airlines.org/CommerceProductDetail.aspx?Product=154.
[9] Airbus, "Maintenance Review Board Report Airbus A330", REV 11, France, issued 18 June 2008.
[10] George, R., "Profit Strategies for Air Transportation", p. 343, McGraw-Hill, ISBN 0-07-138505-3, 2002.
[11] Ahmadi, A., Soderholm, P., and Kumar, U., "Reviews and Case Studies on Aircraft Scheduled Maintenance Program Development", Journal of Quality in Maintenance Engineering, Vol. 16, No. 3, 2010, ISSN 1355-2511, doi:10.1108/13552511011072899.
[12] FAA, "Aviation Maintenance Technician Handbook – Airframe", Volume 1, U.S. Department of Transportation, FAA-H-8083-31, 2012.
[13] Santos, L., "EMBRAER Perspective on the Challenges for the Introduction of Scheduled SHM (S-SHM) Applications into Commercial Aviation Maintenance Programs", Key Engineering Materials, Vol. 558, 2013, pp. 323-330, doi:10.4028/www.scientific.net/KEM.558.323.
[14] FAA, "Aviation Maintenance Technician Handbook – General", U.S. Department of Transportation, FAA-H-8083-30, 2008.
[15] Cook, A., Tanner, G., Williams, V., and Meise, G., "Dynamic Cost Indexing", 6th Eurocontrol Innovative Research Workshops & Exhibition, France, Dec. 2007.
[16] Wu, J., Zuo, H., and Chen, Y., "An Estimation Method for Direct Maintenance Cost of Aircraft Components Based on Particle Swarm Optimization with Immunity Algorithm", Journal of Central South University of Technology, Vol. 12, No. 2, Oct. 2005, pp. 95-101.
[17] Curran, R., Frank, M., Alex, O., and Stefaan, G., "Value Analysis of Engine Maintenance Scheduling Relative to Fuel Burn and Minimal Operating Costs", 10th AIAA Aviation Technology, Integration, and Operations (ATIO) Conference, AIAA, Texas, 2010, doi:10.2514/6.2010-9451.
[18] Airbus website, A330 family, URL: http://www.airbus.com/aircraftfamilies/passengeraircraft/a330family [cited April 2014].
[19] Airfleets.net, KLM fleet details, URL: http://www.airfleets.net/flottecie/KLM.htm [cited April 2014].
[20] IATA, "Airline Maintenance Cost Executive Commentary: An Exclusive Benchmark Analysis (FY2012 data) by IATA's Maintenance Cost Task Force", IATA report, 2013.
[21] IATA, "Labor Rate and Productivity Calculation for Commercial Aircraft Maintenance", IATA report, 2013.
[22] Tissenier, A., "Rudder Inspections & Airbus In-Service Support", A4A Annual NDT Forum, Seattle, Sept. 2012, URL: http://www.airlines.org/Pages/2012-NDT-Forum-Presentations.aspx.
[23] Airbus, "A330 Maintenance Planning Document (MPD)", REV 14, France, issued Aug 15, 2006, reference SE6/955.2600/93.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-74
Assessing the Requirements and Viability of Distributed Electric Vehicle Supply

John P.T. Mo 1
RMIT University, Australia
Abstract. Climate change has been a serious topic of debate in almost every country in the world. The need to reduce carbon emissions is acknowledged universally, irrespective of which side of the climate debate one takes. The development of future electric vehicles will have significant benefits for the environment if these vehicles can be adopted by the general public as the primary mode of transport. However, the existing infrastructure for manufacturing and supporting fossil-fuel-based vehicles does not fit the operation model of the electric vehicle. The industry, supply chain, community and general public need to understand how the new electric vehicle operation system works and what benefits it will bring to them. This paper examines the manufacturing and operational issues of electric vehicles and explores the view of a new global supply chain that will foster the design, development, manufacture and support of electric vehicles. A business plan is proposed to describe the rationale of how to create, deliver and capture value in economic, social and innovation terms when supplying electric vehicles to the community.

Keywords. Climate change, electric vehicles, operation model, global supply chain
Introduction

Since the Ford Model T was introduced in 1908 as an affordable means of transportation, the number of automobiles has risen to over 1 billion in 2010 [1]. Automobiles today are mostly propelled by an internal combustion engine fueled by petrol or diesel [2]. As both of these fuels contribute carbon dioxide to the atmosphere through the burning process, automobiles are blamed for causing climate change and global warming. Climate change brings about significant and lasting effects on the distribution of weather patterns, while global warming causes the average temperature of Earth's atmosphere and oceans to increase. Both effects have negative impacts on the ecological, environmental, economic and social factors of the world. Furthermore, rapidly increasing oil prices, concerns about oil dependence, tightening environmental laws and restrictions on greenhouse gas emissions are pushing for alternative power systems for automobiles. An electric vehicle (EV) is driven by electric motors; it has the advantages that the motor torque generation is fast and accurate, braking energy can be recovered, and it does not emit carbon dioxide in operation [3]. Research in EVs has since focused on ecological designs [4]. However, although many agree that one of the main benefits of the electric vehicle is its environmental friendliness compared to the fuel variant of
1 Corresponding Author. Email: [email protected]
vehicle [5], several other studies showed that, due to driving style, terrain configuration and meteorological conditions, the carbon emission from an EV is practically at the same level when it is recharged from coal-fired power sources [6], unless it is recharged from cleaner forms of electricity such as hydro and nuclear power. EVs run far more quietly than their combustion-powered counterparts, which enhances passengers' health and comfort. Electric vehicles will have significant benefits for the environment if they are adopted by the general public as the primary mode of transport. However, electric vehicles are still not commonly used. The existing infrastructure for manufacturing, selling and supporting fossil-fuel-based vehicles does not fit the operation model of the electric vehicle. This paper aims to develop a strategy to revolutionize the supply chain of the electric vehicle, in the same way as the Ford Model T, to empower common people with an environmentally friendly means of transportation.
1. Literature Review The major components for an electric vehicle system are the motor, controller, power supply, charger and drive train [7]. Figure 1 shows a system model for an electric vehicle.
Figure 1. Major components of an electric vehicle.
The control of an Electric Vehicle (EV) is complex because its operation varies with operating parameters and road conditions. Thus, the controller is required to be robust and adaptive, with the ability to maintain both dynamic and steady-state performance. Cheng et al. [8] assert that the control of an EV is unique and energy efficient. EVs have advantages over traditional vehicles with combustion engines from the viewpoint of electrical and control engineering. According to Sakai and Hori [9], an electric motor's torque generation is quick and accurate and therefore allows quicker and more precise control. The output torque is easily known, and motors are small enough to be attached to each wheel. The control can be easily designed and implemented at comparatively low cost. Tesla Motors, Inc. is a company that designs, manufactures and sells electric cars and electric vehicle powertrain components [10]. The company has positioned its product as a superior luxury vehicle with better acceleration, a quieter interior and a slick appearance. The manufacturing, sales and support of electric vehicles will require the proper establishment of a supply chain so that customers can see themselves rigorously involved in the process [11]. Even though the electric car produced by Tesla is a mainstream form of the electric vehicle only possible at an industry level, the idea revolving around Tesla's
76
J.P.T. Mo / Assessing the Requirements and Viability of Distributed Electric Vehicle Supply
innovation in battery swapping stations, replacing batteries within minutes, further supports the feasibility of this product. With the prospect of people exchanging parts of their vehicle on a frequent basis, further opportunities arise, such as a supply chain for building an electric vehicle in one's own garage. An effective strategy for distributed manufacturing is modular design [12]. In the computer industry, a few large companies used to build and sell only complete, integrated products, where the hardware and software, from operating system to application programs, were wholly designed and built by the computer maker and sold as a complete computer. With modularity, computer companies no longer need to build a complete computer in order to sell one. For example, a desktop computer can be constructed from individual modules, assembled together and used as a single product. Companies could push a product to market more quickly by designing, building and selling modules rather than a whole computer. This outlines various possibilities for the car-making industry, where a car could be constructed like a computer. Hence, this proposal discusses the feasibility of a business that constructs an electric car in a garage or workshop from components sourced from local and global suppliers.
2. Market Research

The Victorian Department of Transport has committed to conducting trials of electric vehicles in Victoria under the Victorian Transport Plan for a period of five years [13]. These trials provide real-world information on the use of electric vehicles in Victorian conditions, including the impact on driver behaviour, refueling patterns, and vehicle performance and efficiency. The trials assert that while commercial and government fleet purchases are particularly important markets for new car sales, and particularly for the early electric vehicle market, the drivers of the purchasing decision for commercial and government fleet vehicles are likely to be very different from those of private purchasers. However, it is not possible to assess consumer willingness to purchase electric vehicles based on revealed purchasing behaviour, as there is only a small range of electric vehicles available worldwide. It may be possible to approximate the likely early adopters of electric vehicles on the basis of who is currently purchasing hybrid vehicles. Hybrid vehicles share a number of key features with electric vehicles, including significantly improved fuel efficiency, a price premium relative to comparable internal combustion vehicles, and perceived innovativeness of technology. Moore [14] suggests that there is separation between adopter groups, representing the next group's reluctance to adopt the new product in the forms that appealed to previous adopters. New technologies that gain favor with early adopters and a strong following amongst these consumers may still encounter difficulties in finding mass acceptance, or require time to transition from niche to mass-market appeal. Early adopters of hybrid and electric vehicles are expected to share certain key characteristics. According to Turrentine et al. [15], hybrid vehicle purchasers made their purchase decision for ideological reasons, not mainly to save money, and they also paid little attention to fuel costs.
The higher fuel economy of the hybrid compared to other vehicles remains purely a source of self-satisfaction with their vehicle choice. Kurani and Turrentine [16] found that hybrid vehicle purchasers talked about 'making a commitment': setting an example, being a pioneer, talking to other people about their
J.P.T. Mo / Assessing the Requirements and Viability of Distributed Electric Vehicle Supply
77
car. These consumers were attracted by the new technology, the low emissions, the sense of consuming fewer resources, and tax incentives such as fuel or carbon taxes. The remaining issue is the availability of this type of vehicle as a customer package. Visvikis et al. [17] reviewed the potential safety risks of EVs. They concluded that there is still a gap in the type-approval legislation relating to the safety and integrity of the rechargeable energy storage system. Concerns have been expressed about the safety of cyclists and pedestrians (particularly visually impaired people) when crossing the road. Even with an exemption for small-series production vehicles in which all parts and components are homologated, type approval is still not automatically guaranteed for the whole vehicle.
3. Technical Analysis

Technical analysis of manufacturability may be grouped into analyses pertaining to the inputs, throughputs and outputs of the electric vehicle [18].

3.1. Input Analysis

Input analysis is mainly concerned with the identification, quantification and evaluation of manufacturing requirements, including the machinery and materials used. The quality of inputs available in a certain timeframe, and their cost throughout the life of the project, should be properly detailed. If applicable, long-term contracts with potential suppliers should be recorded to cultivate supply sources. In general, the main components of an electric vehicle are as shown in Figure 2:

1. Electric Motor
2. Battery Pack
3. Motor Controller
4. Contactor
5. Fuse
6. Vacuum Pump
7. DC/DC Converter
8. Instrumentation
9. Power Steering Pump
10. Battery Charger

Figure 2. Main components of an EV [19].
Other components that are similarly required for electric cars and common to conventional cars include:

• Body and main parts: This includes the bonnet (hood), bumper, fascia, fender, grille, pillar, quarter panel, radiator core support, roof rack, spoiler, trim package, trunk, valance and welded assembly. Additionally, doors and windows are standard as well.
• Electrical and electronics: This includes audio or video devices, the charging system, electrical supply system, gauges and meters, ignition system, lighting and signaling system, sensors, starting system, switches and wiring harnesses.
• Interior: Interior components mainly include floor components and car seats.
• Powertrain and chassis: For example, the braking system and anti-lock braking system (ABS), and engine components and parts. A conventional car will also require an engine oil system, exhaust system and fuel supply system, which is not the case for an electric car.
• Miscellaneous auto parts: Some parts include the air conditioning (A/C) system, bearings, hoses, windshield wiper system, air bags, horn and other small parts.

EV purchase prices relative to conventional vehicles are influenced by EV batteries and by the final price point set by sellers, which will reflect their positioning in each market and the business case behind each vehicle development program. Given the underdeveloped state of EV manufacturing, it is difficult to characterize the maintenance schedules and costs of EVs. EV powertrains are, however, much simpler than those of combustion vehicles and have fewer moving parts [20]. Consumable items found in combustion engines (belts, seals, filters, sparkplugs, valves, lubricants, etc.) do not exist in EVs. Meanwhile, maintainable parts that are common to EVs include electronics, cooling fluids and radiators, fans and pumps, driveline lubricants, wheel/axle bearings, brake pads and tyres, and air-conditioning systems.
4. Outline of a Business Plan

A business model is required to describe the rationale of how to create, deliver and capture value in economic, social and innovation terms for this business.

4.1. Location

The location of the business should be ideal for the distribution of products to customers while being close to suppliers or to a port that handles overseas shipping. From statistical information in Victoria [21], it is found that the majority of potential early electric vehicle adopters are located in a band east of the Melbourne CBD, from Bayside in the south through to Nillumbik in the north, where a high proportion of households meet the early adopter criteria (Figure 3). The business should therefore be located near suppliers or the port to ensure ease of obtaining supplies of materials for business operation. Another factor is to locate the business near the customers, as the business location works as a showroom while sitting side by side with the assembly site. From the technical analysis, the factory for assembling EVs does not need to be big: a typical industrial site with an area of 1,220 m2 in the south-east suburbs of Melbourne costs only $97,700 pa to lease.
J.P.T. Mo / Assessing the Requirements and Viability of Distributed Electric Vehicle Supply
79
Figure 3. Households meeting electric vehicle early adopter criteria [22].
4.2. Nature of Business

This business aims to provide its customers with a cheaper and more economical way to purchase an electric vehicle, with components sourced locally and overseas, assembled and built to order. As this business requires extensive logistics and a supply chain, it will adopt two different, inter-related business models for delivering the seven basic components mentioned: direct sales and the 'bricks and clicks' model. The direct sales model is to market and sell products directly to the consumer without going through a dealership. The benefits are, firstly, a lower cost to buyers when they purchase their components or service directly from us. By having buyers deal with the factory directly, there is no worry of getting a different price at a different store, which might be the case when going through a dealer that could vary its price for the same component. This one-price-anywhere concept will boost consumer confidence in our components, products and service by giving the same treatment and pricing statewide. As this business is fairly new in the country, increasing the confidence of potential consumers should not be overlooked. The 'bricks and clicks' model integrates both offline (bricks) and online (clicks) presences. This allows the customer to order products either online or physically in one of the stores. It also allows them, prior to any purchase, to learn about the products thoroughly from home or to visit the store and physically try the products. Online marketing presents component features, specifications, pricing and pictures to allow potential consumers to learn about the products and services available, or at least to access the information from home and build up interest.
80
J.P.T. Mo / Assessing the Requirements and Viability of Distributed Electric Vehicle Supply
With these two models, the products and services are made accessible to consumers, along with contact information and the nearest store location. A simple yet detailed approach to the business should garner positive responses and interest from potential consumers in the innovation of this project.

4.3. Components sourcing and assembly

There are different ways to source components. Initially, components can be sourced either locally or globally, depending on price and ease of accessibility. An adequate profit margin has to exist for the business to have any possibility of success. Three models are proposed:

• Medium: four-seater, 4 doors.
• Small: two-seater, 2 doors.
• Large: five-seater, expandable to seven.

Parts are sourced and stocked according to these models. The business provides a service to assemble the electric vehicle once an order is made, as the electric vehicles are built to order. Assembly of components is handled by a local factory. There will also be on-site inspection and service available to customers, within a range of services that can be carried out at the customer's location.

4.4. EVs in Operation

In the future, charging stations and battery swapping infrastructure for customers can be established once enough electric vehicles are running on the roads. Battery charging and battery swapping are two completely different approaches and will require further work [23]. For a battery charging service, although many battery system manufacturers try to reduce the charging time, the minimum time for a full EV charge is still in the range of an hour. For battery swapping, it is clear that users are not confident about exchanging their batteries for others with unknown history; a different type of battery hire and exchange service system needs to be created.

4.5. Distribution

As a start-up, the business plans to provide its products locally only.
For distribution, the customer can pick up the products at the business location, or the final product can be sent overland at the customer's cost. The estimated supply lead time is approximately 20 days for the parts to arrive. However, certain parts will be ordered in bulk and therefore require zero lead time, as they are stored in the factory. The assembly process will take up to three days, which could be reduced for multiple orders if they are built together. This gives a total delivery time of approximately three weeks from sourcing to delivery to the customer.

4.6. Costing and Selling Price

This business assembles cars in a small workshop. The personnel will be qualified staff with relevant knowledge of the electric vehicle assembly process (Table 1). As such, the
J.P.T. Mo / Assessing the Requirements and Viability of Distributed Electric Vehicle Supply
81
business will require engineers and technicians to be in charge of the technical process, while front desk staff are also required to handle the business side in terms of customer queries and the ordering process.

Table 1. Costing plan

Miscellaneous Cost | Cost per month | Description
Rent | $8,100 | $97,700 per annum
Labour | $41,000 | Includes staffing of 5 qualified mechanics, 3 front desk/management staff, and reward to the entrepreneur of this business
Marketing | $1,000 | Web hosting and other marketing fees
Utilities | $500 | General expenses
Total | $50,600 |
Assuming that the total miscellaneous cost is shared among 5 cars (the estimated orders per month), the cost of the small assembly shop can be apportioned to the production quantity. By adding the component and car body costs, and applying a 30% profit margin, an EV can be sold at the factory price shown in Table 2.

Table 2. Factory selling price

Model   Cost (Main Components)   Cost (Car Body)   Total Miscellaneous Cost   Total Cost   Selling Price (30% on top of Total Cost)
A       2800                     2650              3535                       8985         11680.5
B       2800                     5300              3535                       11635        15125.5
C       2800                     7950              3535                       14285        18570.5
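The arithmetic behind Table 2 can be reproduced with a short sketch (an illustration only; the function and constant names are ours, while the cost figures and the 30% margin are those of the table):

```python
# Reproduce the Table 2 factory prices: total cost is main components
# plus car body plus the apportioned miscellaneous cost, and the selling
# price adds the 30% margin stated in the text.

MISC_PER_CAR = 3535   # apportioned miscellaneous cost per vehicle (Table 2)
COMPONENTS = 2800     # main components cost, identical for all models

def factory_price(body_cost: float, margin: float = 0.30):
    """Return (total cost, selling price) for one vehicle."""
    total = COMPONENTS + body_cost + MISC_PER_CAR
    return total, round(total * (1 + margin), 2)

for model, body in (("A", 2650), ("B", 5300), ("C", 7950)):
    total, price = factory_price(body)
    print(model, total, price)  # A 8985 11680.5 / B 11635 15125.5 / C 14285 18570.5
```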
4.7. Legal and administrative

This investigation focuses on the viability of the supply chain. The type-approval legislation issue is an engineering issue outside the scope of this study; hence, the business system considered here assumes the availability of a type approval for a family of EVs. Other legal and administrative issues include the choice of the form of business organisation, registration, and clearances and approvals from diverse authorities. The electric vehicles manufactured by the business will require a 'Vehicle Sales Licence', which allows buying, selling and auctioning vehicles other than motorcycles, caravans or campervans in the country [23]. Registration of each vehicle with Vicroads is also required to ensure it meets the necessary safety and environmental standards [24].
5. Risk Assessment

The risks in this business can be assessed starting from the supply side. Suppliers can exercise bargaining power over participants in an industry by raising prices or reducing the quality of purchased goods and services. Powerful suppliers may capture more of the value by charging higher prices, limiting the quality of services, or shifting costs onto industry participants. The business relies on components supplied by suppliers based locally and globally. The business model has the electric cars built-to-order, so the arrangement can be considered 'just-in-time': components arrive as they are needed and when there are orders. Any unexpected delay or unreliability from a supplier can cause a major upset in the process of delivering the final product to the
customer within the promised time. This can lead to a loss of customer confidence and, potentially, of market share. Rivalry among existing competitors may take many forms, such as price discounting, new product introductions, advertising campaigns, and service improvements. A high level of rivalry limits the profitability of an industry. The degree to which rivalry drives down an industry's profit potential depends on the intensity with which companies compete and the basis on which they compete. The above risks need to be mitigated. The company needs to prepare for unreliable suppliers, who could be the ones providing the main components for the electric vehicle. It is also dangerous for the company to be complacent with its current products, especially when facing a diverse market for electric vehicles, not to mention competition from conventional cars. The company should therefore aim to build a reputation, through the products and services it provides, as a class of its own, even while selling a cheaper product than the alternatives, thus putting the company head-to-head with its competitors.
6. Conclusion

Due to the design characteristics of electric vehicles, it is possible to harness the power of an effective global supply chain to manufacture the vehicles at a location close to the customers. Modularity, as practised in the computer industry, has shown the prospect of building an integrated product from individual modules. By applying modular design, an electric car can be built from modules grouped by mechanical, data and power interfaces. A feasibility study was carried out to determine whether a sizeable market for the proposed product exists, what the investment requirements would be, and how to go about establishing the business. The market analysis indicates the location of potential early adopters within Victoria, which provides a good indicator of where the business should be located. The technical analysis identified the required components of the electric vehicle. A business plan was then constructed to define the electric vehicle business. The components required to build an electric vehicle were identified, leading to the selection of suppliers for the required parts and materials. Three products were proposed, categorised as 2-door, 4-door and family-size variants, to provide a variety of products to the market. The pricing is found to be reasonable compared to traditional fossil-fuel-powered vehicles.
References
[1] J. Sousanis, World Vehicle Population Tops 1 Billion Units. WardsAuto, August 15, 2011, viewed 20 April 2014, http://wardsauto.com/ar/world_vehicle_population_110815.
[2] D.A. Kirsch, The Electric Vehicle and the Burden of History: Studies in Automotive Systems Rivalry in America (1890-1996), Rutgers University Press, New Brunswick, NJ, ISBN 0-8135-2809-7, 291 pages, 2011.
[3] Y. Hori, Future vehicle driven by electricity and control - research on four-wheel-motored "UOT electric march II", IEEE Transactions on Industrial Electronics, 51 (2004), 954-962.
[4] H. Shimizu, J. Harada, C. Bland, K. Kawakami, C. Lam, Advanced concepts in electric vehicle design, IEEE Transactions on Industrial Electronics, 44 (1997), 14-18.
[5] C. Lampton, How Electric Car Batteries Work, 2013, retrieved 05 June 2014, from http://auto.howstuffworks.com/fuel-efficiency/vehicles/electric-car-battery3.htm
[6] Cars UK, Electric cars produce MORE CO2 than petrol or diesel cars, 2013, viewed 20 April 2014, from http://www.carsuk.net/electric-cars-produce-more-co2-than-petrol-or-diesel-cars/
[7] J. Wry, Electric Vehicle Technology Explained, John Wiley, ISBN 0-470-85163-5, (2003), 183-195.
[8] Y. Cheng, J. Van Mierlo, P. Van den Bossche, P. Lataire, Energy sources control and management in hybrid electric vehicles, Proc. 12th Int. Power Electronics and Motion Control Conference, Portoroz, Slovenia, 30 Aug. - 1 Sep. 2006, 524-530.
[9] S. Sakai, Y. Hori, Advanced vehicle motion control of electric vehicle based on the fast motor torque response, 5th International Symposium on Advanced Vehicle Control, Michigan, USA, 22-24 Aug. 2000, 1-8.
[10] M. Levi, How Tesla Pulled Ahead of the Electric-Car Pack, 2013, retrieved 25 June 2013, from http://online.wsj.com/article/SB10001424127887324659404578504872278059536.html
[11] B.M. Beamon, Supply chain design and analysis: models and methods, International Journal of Production Economics, 55 (1998), 281-294.
[12] M. Sako, Modularity and outsourcing: the nature of co-evolution of product architecture and organisation architecture in the global automotive industry, Ninth GERPISA International Colloquium, Permanent Group for the Study of the Automobile Industry and its Employees (GERPISA), Paris, 11-13 June 2013.
[13] C. Inbakaran, J. Rorke, Potential Early Adopters of Electric Vehicles in Victoria, Victorian Transport Plan, Department of Transport, Victoria, 2009.
[14] G. Moore, Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers (revised edition), HarperCollins Publishers, New York, 1999.
[15] T. Turrentine, K. Kurani, R. Heffner, Fuel economy: what drives consumer choice?, Access, University of California, 31 (2007), 14-19.
[16] T.S. Turrentine, K.S. Kurani, Car buyers and fuel economy?, Energy Policy, 35 (2007), 1213-1223.
[17] C. Visvikis, P. Morgan, P. Boulter, B. Hardy, B. Robinson, M. Edwards, M. Dodd, M. Pitcher, Electric vehicles: Review of type-approval legislation and potential risks, Client Project Report CPR810, ENTR/05/17.01, Transport Research Laboratory, UK, 2010.
[18] T.J. Erickson, J.F. Magee, P.A. Roussel, K.N. Saas, Managing technology as a business strategy, Sloan Management Review, 31 (1990), 73-78.
[19] E.G. Durney, Re-Inventing Carmaking with Truly Electric Cars, A Truly Electric Car Company, Millbrae, CA, USA, Patent Application 20110022545, 2011.
[20] A. Simpson, Full-cycle assessment of alternative fuels for light-duty road vehicles in Australia, Proc. World Energy Congress, Sydney, 5-9 September 2004.
[21] Australian Bureau of Statistics, 9208.0 - Survey of Motor Vehicle Use, Australia, 2008, viewed 31 October 2014, from www.abs.gov.au
[22] Australian Bureau of Statistics, Census of Population and Housing, 2006, viewed 30 October 2013, from www.abs.gov.au
[23] H. Schaede, A. Von Ahsen, S. Rinderknecht, D. Schiereck, Electric energy storages - a method for specification, design and assessment, Int. J. Agile Systems and Management, 6 (2013), 142-163.
[24] Department of Commerce, Motor vehicle dealer's licence, Government of Western Australia, 2013, viewed 30 June 2014, from http://www.commerce.wa.gov.au/consumerprotection/content/motor_vehicles/Dealers/Categories_of_dealer_licences.html
[25] Vicroads, What has to be registered?, 2013, viewed 28 October 2013, from http://www.vicroads.vic.gov.au/Home/Registration/WhatHasToBeRegistered/
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-84
A Model for Storing and Presenting Design Procedures in a Distributed Service-oriented Environment

Oleg KOZINTSEV 1, Alexander POKHILKO, Leonid KAMALOV, Ivan GORBACHEV, Denis TSYGANKOV
Ulyanovsk State Technical University, 32 North Venets st., 432027 Ulyanovsk, Russian Fed.

Abstract. The interoperability of distributed and concurrent design is determined by how CAD systems represent project solutions. Nowadays the exchange of solutions is performed by means of the ISO 10303 standard and the STEP format, created within the framework of CALS technology; this does not allow solutions to be modified. This article describes the capability to extract project solutions from design activity and to form models of design object classes, as well as how to store, map and further use this information in the context of a functionally adapted representation. The functionally adapted representation concept allows solutions to be exchanged by another method. Here, the main idea is to record the procedure by which a solution is created (consisting of a chain of operations) in a procedure chain. Based on the procedure chain, the authors' instrumental environment generates a lightweight CAD: a compact software application for engineering. The set of operations required to create the design object determines the lightweight CAD's functionality. In this case, the logic and the modifiability of the project process are stored inside the selected project operation class.

Keywords. integrated product design, concurrent design, engineering process automation, CAD.
Introduction

Experience in the development of computer-aided design technologies brings to the fore the need for research into methods of presenting design solutions that provide not only exchange between different information systems or services, but also means of conserving, modifying and extending computer representations of design solutions [2]. The theoretical basis is a mathematical model that describes design solutions by formalising the process of their preparation in the form of multi-level protocols [3]. From the structure of the information representation of a design solution, it is clear that there is a basis for generalisation: a design solution in the form of a class is the top-level abstraction of the information representation [4]. Further development of the technology requires support for sharing the results of activities within «virtual enterprises» and the possibility of rapid changes to solutions (for example, when customer requirements change). Currently this can be fully achieved only by introducing software from the same vendor to all participants of the «virtual enterprise». Introducing various kinds of format converters and additional means of processing design results leads to a loss of logical
1 email: [email protected]
O. Kozintsev et al. / A Model for Storing and Presenting Design Procedures
connections within the model of the object being designed, and therefore a violation of its integrity. Full exchange between different systems is impossible due to a partial or complete lack of interoperability. Achieving interoperability therefore remains a serious problem, making it difficult to fully exchange the results of project activities between different CAD systems. The approaches currently offered to solve this problem cannot be implemented, for several reasons: solid modelling CAD systems will never have exactly the same set of functions, since each would lose its competitive advantages, and it is impossible to make everyone use one and the same CAD application [5]. In this paper the authors examine the structure of the information representation of design solutions in the functionally adaptable (FA) form [6], which allows design solutions to be generalised (i.e., into information object classes) in the form of computer applications.
1. Problem definition

A toolkit is needed that complies with the following requirements:
• the possibility of integration with different applications;
• binding of heterogeneous information processing for special applications;
• working with the content of a design solution;
• creation of independent applications that process a design solution within the framework, with the functionality used for its construction.
To make such a toolkit possible, the flow of information between the modules of the system must be divided so as to support the work of specialised applications. A database and a control module must be developed and implemented to provide:
• storage of design solutions;
• dynamic filling of the set of design solutions;
• control of the external functional modules (environment components), data exchange between the modules, and their persistence.
To make it possible to form independent applications, each toolkit module should be available as source code organised so that only the required functionality is selected and a "lite" version of the module is compiled from it. Accordingly, a compiler must be included in the toolkit.
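The control-module requirements above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' implementation; all class, method and module names are ours:

```python
# Hypothetical sketch of the control module implied by the requirements:
# store design solutions, grow the stored set dynamically, and coordinate
# data exchange with registered external functional modules.

class ControlModule:
    def __init__(self):
        self.solutions = {}   # design-solution storage
        self.modules = {}     # registered external functional modules

    def store_solution(self, name: str, procedure_chain: list) -> None:
        """Dynamically add a design solution to the stored set."""
        self.solutions[name] = list(procedure_chain)

    def register_module(self, name: str, handler) -> None:
        """Attach an external functional module (e.g. geometry, math)."""
        self.modules[name] = handler

    def dispatch(self, module: str, payload):
        """Exchange data with a registered module."""
        return self.modules[module](payload)

cm = ControlModule()
cm.store_solution("shaft", ["sketch", "revolve"])
# A toy "math processor" module: evaluates arithmetic expressions.
cm.register_module("math", lambda expr: eval(expr, {"__builtins__": {}}))
print(cm.dispatch("math", "2 * 3"))  # 6
print(cm.solutions["shaft"])         # ['sketch', 'revolve']
```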
2. Model and content of solutions

As practice shows, only very large enterprises can afford to create a system that uses an integrated information environment providing homogeneous data and automated data exchange throughout the design process and product life cycle. Since no single system can satisfy everyone, this leads to the idea of creating a tool that allows combining the systems currently used in the design process and product life cycle. A simplified diagram of such a system is shown in Figure 1.
Figure 1. Simplified scheme of toolkit.
The emergence of the CAD FA concept led to the need for a system that operates with a model of the design process on the same principles, but can extract a design process as an object (or class of objects) into an independent application. For this purpose the general scheme of the toolkit was changed. Each application that can work with the user appears as a functional subsystem. Each subsystem provides an interface for interaction, through the control module, with the overall user interface. Thus, the system captures and stores a protocol of user actions. The control module provides the means to organise the stored sequence of user actions in the form of design procedures and allows the user to generate a CAD FA based on this sequence. The CAD FA generation subsystem included in the environment is responsible for retrieving the required functionality and compiling an executable CAD FA file. A simplified diagram of the developed system is shown in Figure 2.
Figure 2. Simplified scheme of the toolkit that builds the CAD FA.
3. Model representation of the process of producing design solutions

Previous research has shown that representing the design process in the form of an AND-OR graph is efficient and accessible for both user and programmer. The vertices of the graph are the elements of the technical system (TS) and their attributes, and the arcs show the hierarchical subordination between the elements and their attributes, as well as signs of element identity. A vertex combines alternative elements that perform the same or very similar functions, together with the features that characterise each TS. A way of achieving a technical solution (a path over the vertices of the AND-OR graph) can be represented as a logical record (predicate), which is "true" if all conditions hold and "false" otherwise. The written form is as follows:
P(A & (B ∨ C ∨ D) & F) ∈ [0; 1]    (1)
where A and F are elements that perform different functions, and B, C, D are alternative elements that perform similar functions. A technical solution is an assignment satisfying the system of equations presented in the form of such a predicate. The shortcomings of existing implementations of this method are:
• an incomplete set of TS variants;
• inability to make changes;
• inability to add new solutions.
Another approach is dynamic population of the set of design decisions, i.e. the possibility of creating the graph during the actual design process. This immediately removes the disadvantages described above, but imposes more stringent requirements
on the handling and storage of information. During the analysis of the problem, several entities were identified whose implementation makes it possible to dynamically build the project graph, to use the obtained TS solutions, and to conveniently display the accompanying project overhead and reference information:
• Project (Prj) - an abstraction which includes a decision tree and all related information.
• Design procedure (DP) - an action performed by the user; a separate semantic unit. It is in turn divided into three kinds of elements (P1, P2 and C, defined below).
• Parameter (Param) - an abstraction interpreted according to context; in the system it may comprise a constant, a calculation, an SQL query or an online request. It can store any information, whose interpretation depends on the context in which it is applied.
The model is based upon:
• a set of projects Prj{Prji}, where key is a unique numeric identifier of the project (a unique numeric value of the database entry; hereinafter omitted), Name is the name of the project (its short semantic content), and Desc is a description of the contents of the project;
• a set of design procedures DP{DPi}, where Type is the type of the element (1 - P1, 2 - P2, 3 - C), Name is the name of the element, reflecting its semantic content, Desc is a detailed description, NOP1 is a unique number identifying the element of the set to which a transition occurs when condition C holds, and NOP2 is a unique number identifying the element to which a transition occurs when the condition does not hold;
• a set of procedures of type 1, P1{P1i}, where OpID is a unique number identifying the DP element uniquely associated with this element, and Content is the element content (text macro, script, etc.);
• a set of procedures of type 2, P2{P2i}, where OpID is a unique number identifying the DP element uniquely associated with this element, and ParamID is a unique number identifying the Param element whose numerical value will be used in the implementation of this element of P2;
• a set of conditions C{Ci}, where OpID is a unique number identifying the DP element uniquely associated with this element, ParamID is a unique number identifying the parameter whose numerical value the condition is applied to, ParamID1 identifies the parameter whose value serves as the left end of the range, ParamID2 identifies the parameter whose value serves as the right end of the range, and Condition is the condition proper on the parameter;
• a set of parameters Param{Parami}, where ProjID is a unique number identifying the project the element belongs to, Name is the name of the item, Type is the type (a number) interpreted according to the content of the element, Content is the content of the element, Desc is a description of
the item, and Value is a numerical value obtained when interpretation of the element's contents is possible. The design process is a set of DP elements, each of which realises a separate design subtask. Thus, it is possible to describe the project Prj using the implemented design solutions:
Prj = P1_1 & P2_1 & C_1((P1_i & P2_j & ...) or (C_k(...)))    (2)
where i, j, k are indexes of set elements adequate to the design situation. Creating a new project and updating a solution begins with setting apart a task area into which the project will fit (the creation or selection of elements of the set Prj). This is followed by creating and/or changing all the other set elements in the context of this project, which makes it possible to automatically maintain the connections between set elements. Through these relationships it is later possible to restore the project tree. Replaying a design solution consists of selecting a tree item and passing through the branches of the tree, processing the DP elements. Since each project consists of a set of project operations (elementary operations available to the user when operating the system), these operations can be uniquely bound to the application source-code classes in which they are implemented. Special organisation and allocation of the necessary classes provide the ability to generate a "lite" version of the application with a set of functionality sufficient for constructing and editing the given design concept. To implement the approach considered above, a system was developed consisting of three components: a control module, a graphical editor based on the Open CASCADE geometric kernel, and an interface module supporting MathCAD.
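The procedure-chain model described above can be illustrated with a minimal sketch (our own simplification, not the authors' code; the field names follow the paper's DP/P1/P2/C definitions, while the replay logic and example operations are assumptions):

```python
# Minimal sketch of the design-procedure model: a procedure chain of DP
# elements, where each element is a content procedure (P1), a parameterised
# procedure (P2), or a condition (C) selecting the next element via NOP1/NOP2.

from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class DP:
    op_id: int
    dp_type: str                                        # "P1", "P2" or "C"
    content: str = ""                                   # macro/script text
    condition: Optional[Callable[[dict], bool]] = None  # only for "C"
    nop1: Optional[int] = None                          # next op if condition holds
    nop2: Optional[int] = None                          # next op if condition fails
    next: Optional[int] = None                          # next op for P1/P2

def replay(procedures: Dict[int, DP], start: int, params: dict) -> List[str]:
    """Walk the procedure chain from `start`, returning executed contents."""
    log: List[str] = []
    op = procedures.get(start)
    while op is not None:
        if op.dp_type == "C":
            nxt = op.nop1 if op.condition(params) else op.nop2
        else:
            log.append(op.content)
            nxt = op.next
        op = procedures.get(nxt) if nxt is not None else None
    return log

# Tiny example chain: sketch a base profile, then branch on a parameter.
chain = {
    1: DP(1, "P1", content="sketch_base_profile", next=2),
    2: DP(2, "C", condition=lambda p: p["diameter"] > 50, nop1=3, nop2=4),
    3: DP(3, "P2", content="extrude_large"),
    4: DP(4, "P2", content="extrude_small"),
}
print(replay(chain, 1, {"diameter": 80}))  # ['sketch_base_profile', 'extrude_large']
```

Replaying the same chain with a different parameter value takes the other branch, which is the paper's point about storing the logic and modifiability of the process inside the procedure chain.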
4. Approbation

The system has been tested on a variety of common tasks of designing machine parts and assemblies. In its original form, each design process was described as a typical method, including a description of the problem in the form of text and graphic sketches and a sequence of mathematical calculations. The tests confirmed that the system operates in principle and is able to keep the accumulated design solutions; the functionality of the system corresponds to the initial assumptions. During testing, we considered the problem of designing cutting tools; based on this task, CAD FA applications were generated for designing:
• prismatic cutting tools;
• round cutting tools;
• prismatic and round cutting tools.
Figure 3. Examples of completed tasks.
Thus, we can conclude that by selecting a branch of the design-solution tree it is possible to obtain a CAD FA for designing different classes of objects (Fig. 3).
5. Conclusions

The computer modelling shows that it is possible to create such integrated toolkits and successfully to save design data and concept data, to create design process models, and to model classes of design objects. In the future, the authors intend to:
• upgrade the calculation algorithm in the math processor;
• add parametric tools;
• provide tools for assembly design;
• optimise the user interface;
• add service modules to interact with other applications;
• implement a new model of design data description based on a streaming graph.
Further development of the technology, and a full-scale math system based on a math core built in the image of the Open CASCADE geometric kernel, will provide an option to develop an interactive CAD FA toolkit. Such a system would make it possible to equip engineering workstations with special design tools based on CAD FA.
References
[1] E.V. Sudov, Integrated information life cycle support machinery. Principles. Technology. Methods. Model, Moscow, MVM, 2003, 264.
[2] M. Sobolewski, Foreword, Next Generation Concurrent Engineering: Smart and Concurrent Integration of Product Data, Services, and Control Strategies, M. Sobolewski & P. Ghodous (eds), ISPE, 2005, 620.
[3] A.F. Pohil'ko, Proceedings of the Congress on intelligent systems and information technologies "AISIT'09", scientific publication in 4 volumes, Moscow, Physmathlit, 2009, Vol. 2, 52-53.
[4] Grady Booch, Object-Oriented Analysis and Design with Applications, Third Edition, Moscow, Williams, 2008, 720 p.
[5] P. Hamilton, CAD/CAM/CAE Observer, 2008, No. 2 (38), 34-36.
[6] A.F. Pohil'ko, Proceedings of the Congress on intelligent systems and information technologies, AISIT'10, scientific publication in 4 volumes, Moscow, Physmathlit, 2009, Vol. 2, 104-106.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-92
Life Cycle Costing for Alternative Fuels

Tim CONROY 1 and Cees BIL
School of Aerospace, Mechanical and Manufacturing Engineering, RMIT University, Melbourne, Australia
Abstract. Motivated by concerns over the rising cost of Jet-A fuel and the current limitations of "drop-in" fuel substitutes, it is proposed that liquid methane (LCH4) provides a promising sustainable aviation fuel alternative. Following consideration of current aircraft fuel costs, a modified preliminary Life Cycle Costing analysis is presented for a proposed liquid methane aircraft, giving a direct economic comparison with a current comparator aircraft with regard to annual fuel cost, aircraft acquisition cost and the cost of required airport infrastructure. Finally, a preliminary evaluation of the financial benefits of a transition to liquid methane fuel in the aviation sector is presented. Keywords. Life Cycle Costing, Liquid Methane, Liquefied Natural Gas, Air Transportation
Introduction Airlines globally are financially affected as the combined impact of rising fuel prices and introduction of CO2 taxation schemes reduce profitability [1,2]. In the past decade alone, the price of jet-fuel has quadrupled and the fuel component of Direct Operating Cost (DOC) has increased from 14% to 30% in 2013 [3]. Currently, airlines attempt to improve their financial position by downsizing or restructuring their operations [4]. This strategy has only limited effectiveness and avoids addressing the central DOC problem. With an increasing demand for jet-fuel and a reduction in global supply, the price of fuel is projected to increase further [5]. The air transport sector faces a considerable challenge in reducing its cost base to keep air travel affordable and environmentally sustainable. Over the past decades, fuel consumption per passenger-km has reduced significantly due to advances in aircraft technologies such as advanced materials, improved aerodynamic efficiency, and turbofan propulsion. The aviation industry is currently concentrating its initiatives on “drop-in” fuel solutions to achieve the necessary eco-economic transformation from petroleum derived Jet-A-fuel. The two major proposed solutions are biofuel and synthetic kerosene (Syn-Jet) made from natural gas/coal through the Fischer-Tropsch (FT) process. Biofuels pose a greener solution being a full circle fuel, but their current production rate and market price, coupled with their competition with food industries for arable land, limit their future use in aviation. Conversely, the CO2 emission produced through the FT process far outweighs the CO2 savings “Syn-Jet” proposes. “Drop-in” fuels are currently being 1
1 Tim Conroy, UG student, Bachelor of Engineering (Aerospace) and Business (Management), email: [email protected]
T. Conroy and C. Bil / Life Cycle Costing for Alternative Fuels
used experimentally in a blend with kerosene, but are still a long way from being commercially viable. Liquid Natural Gas (LNG), comprising upwards of 90% methane, is already used successfully in both automotive and maritime applications. It has also been explored as an aviation fuel. The Beech Aircraft Company successfully flew a Beech Sundowner light aircraft on LNG in 1980 [6]; Lockheed performed a major LNG study in 1986 [7]; and Tupolev flew the Tu-155 test aircraft with LNG in one engine in 1989, followed by the Tu-156 for over 100 flights on LNG [8]. More recently, Greitzer et al. proposed LNG as a future fuel in the NASA "SUGAR N+4" research study, and Kawai (Boeing) presented a convincing case for a dual-fuel (LNG/Jet-A) Blended Wing Body aircraft that was strongly influenced by Gibbs et al. [9,10,11]. However, LNG fuel applications have not extended to commercial fleets. Previous LNG feasibility studies raised questions over airport compatibility, safety and Technology Readiness Levels (TRL) [7]. Recent advances in safe land and naval LNG storage and transport demonstrate its viability in aviation [12], a testimony validated by Bil et al., which illustrated the preliminary feasibility of LNG on an Airbus A320-size aircraft and which forms the baseline for the calculations conducted herein [13]. Transition to LCH4 fuel will reduce airline DOC. Currently, fuel is 33% of DOC and LCH4 is less than 30% of the cost of jet-fuel [3,14]. This gap will widen as the cost of jet-fuel increases due to limited availability. Multi-national carbon emissions policies increase airline DOC [15]. Environmentally, LCH4 use will reduce CO2 emissions by 20% compared to jet-fuel, reducing carbon tax commitments. Consequently, the reduction in DOC will allow a reduction in fare prices, support customer growth and increase income streams. LCH4 can be created from LNG and/or sustainable biogas generated from biological waste.
This ensures a more sustainable supply of LCH4 in the future and induces price stability. To assess the airline DOC reduction from LCH4 fuel use, an investigation was conducted into the relative prices of competing fuels, the factors governing these prices, and the key impacts they may have on other aspects of airline DOC, through stakeholder consultation and traditional research methods. Figure 1 shows the large divergence between jet-fuel and LNG prices in the past 5 years; LNG is currently less than 30% of the per-energy cost of jet-fuel. Figures 2 and 3 show the global untapped reserves of shale gas which, as fracking technology opens access in the coming years, will significantly lower the LNG cost in high-growth areas [16].
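The DOC effect implied by the figures above (fuel at roughly 33% of DOC, and LCH4 at under 30% of the per-energy price of jet fuel) can be sanity-checked with a back-of-the-envelope calculation. The sketch below is our own arithmetic, not a result from the study:

```python
# Back-of-the-envelope DOC impact of switching from Jet-A to LCH4,
# using the figures quoted in the text (assumptions, not airline data):
# fuel is ~33% of DOC, and LCH4 costs ~30% of jet fuel per unit energy.

def doc_reduction(fuel_share: float = 0.33, lch4_price_ratio: float = 0.30) -> float:
    """Fractional reduction in total DOC from the fuel switch, all else equal."""
    return fuel_share * (1.0 - lch4_price_ratio)

print(f"{doc_reduction():.1%}")  # prints "23.1%"
```

That is, even before fuel prices diverge further, the switch would cut roughly a fifth of DOC under these assumptions, which is the economic case the section develops.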
Figure 1. Comparison of jet-fuel and LNG prices.
T. Conroy and C. Bil / Life Cycle Costing for Alternative Fuels
Figure 2. World LNG Estimate Prices [17].
Figure 3. Map of major shale gas basins [18].
1. Life Cycle Costing
Traditionally, Life Cycle Costing (LCC) is the holistic analysis of the total cost of ownership of an asset from its initial acquisition to its end-of-life disposal [19]. It is typically used to determine the most economically rational option between competing alternatives that cannot be separated on technical appropriateness. This report presents a somewhat modified approach to the traditional Total Cost of Ownership (TCO) style of LCC. Whereas traditionally every cost element of each technical alternative is assessed and summed for overall cost comparison, the approach taken in this report more simply assesses the particular cost elements deemed by the research team to have the greatest comparative impact on the overall life cycle cost of an LCH4 aircraft relative to a current baseline comparator aircraft. Additionally, contrary to traditional application, this report assesses the TCO from the perspective of the global commercial aviation industry in the event of a worldwide fleet introduction, as opposed to an individual aircraft acquisition by a particular company. The three key cost elements seen to have a significant bearing on the relative TCO of an LCH4 aircraft compared to a Jet-A kerosene baseline aircraft were identified as the cost of fuel for operation, the acquisition cost of the aircraft and the airport airline charges (which have been assumed as a worst-case scenario where airlines shoulder the entire cost of required infrastructure for a new fuel). Analysis of the proposed design [13] concluded that the relative airline cost components pertaining
to the turn-around time, cargo capacity, safety considerations and performance characteristics should not be significantly affected. It should be noted that the design that forms the basis of the calculations remains conceptual; therefore, maintenance costs (including the opportunity cost of aircraft grounding) and disposal costs of an LCH4 aircraft have not been assessed, although, based on the experience of the automotive and maritime industries with the implementation of the fuel, they are assumed to be broadly comparable with current Jet-A kerosene aircraft [12].

2. Extrapolation of fuel data
In order to provide an estimate of the comparative fuel costs for future years, the fuel prices for each year were estimated based on the percentage increase of the average yearly fuel price over the past 10 years (since 2003) for LNG (1.75%) and kerosene (7.33%), as shown in orange and maroon respectively in Figure 5 [3,20]. Whilst the extrapolation of the LNG price aligns reasonably well with recent developments and the future outlook incorporating shale gas reserves, the projected continued rapid increase in the Jet-A kerosene price may be more severe than the actual development. To offset this, a more conservative projection, based on a projection of the future oil price provided by Airbus [20], has also been included in all calculations, as seen in Figures 4 and 5.
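The extrapolation described above is simple compound growth at each fuel's average yearly rate. The sketch below uses the growth rates from the text but placeholder base-year prices, since the underlying price data is only shown graphically in Figures 4 and 5.

```python
# Sketch of the fuel-price extrapolation described above: grow each
# fuel's price by its average yearly increase over the past decade.
# Base-year prices are placeholders, not the paper's data.

def extrapolate(base_price, annual_growth, years):
    """Projected prices for each of the `years` years after the base year."""
    return [base_price * (1 + annual_growth) ** n for n in range(1, years + 1)]

lng_proj = extrapolate(base_price=15.0, annual_growth=0.0175, years=10)  # 1.75% p.a.
jet_proj = extrapolate(base_price=25.0, annual_growth=0.0733, years=10)  # 7.33% p.a.

print(f"Year 10: LNG {lng_proj[-1]:.1f}, jet-fuel {jet_proj[-1]:.1f} (assumed units)")
```

Because 7.33% compounds much faster than 1.75%, the gap between the two projections widens every year, which is the effect the text relies on.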
Figure 4. Contrasting Oil Price Projections.
Figure 5. Comparative Aviation Fuel Price Projections.
3. Single Aircraft Costs
As previously stated, performance and fuel consumption calculations for the LCH4 case are based on an Airbus A320 type aircraft as described in Bil et al. [13]; for comparison, the baseline Jet-A kerosene aircraft is a standard Airbus A320 [21]. The analysis conducted was intended to be indicative of the average flight and subsequent fuel use of an A320 aircraft; therefore the flight profile was modelled simply, based on the Breguet range equation, with an average flight for both aircraft of 1.8 hr (~1500 km):

Wf = 1.08 × W0 × (1 − e^(−R·cT / (v·(CL/CD))))   [22]
Wf = Weight of Fuel Required (plus 8% reserve quantity), W0 = Aircraft Take-off Weight, R = Aircraft Range, cT = Specific Fuel Consumption, v = Cruise Speed, CL/CD = Lift to Drag Ratio
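A numerical sketch of this fuel-weight estimate, using the variable definitions above and the 8% reserve factor. The input values are illustrative A320-class numbers chosen for this example, not the design data of Bil et al. [13].

```python
import math

# Fuel weight from the Breguet-based relation, including the 8% reserve.
# All input values below are illustrative A320-class assumptions.

def fuel_weight(w0_kg, range_m, ct_per_s, v_m_s, l_over_d):
    """Wf = 1.08 * W0 * (1 - exp(-R * cT / (v * (CL/CD))))."""
    return 1.08 * w0_kg * (1.0 - math.exp(-range_m * ct_per_s / (v_m_s * l_over_d)))

wf = fuel_weight(w0_kg=73_500,      # take-off weight (kg)
                 range_m=1_500e3,   # ~1.8 h average flight (m)
                 ct_per_s=1.67e-4,  # TSFC ~0.6 lb/(lbf.h) expressed in 1/s
                 v_m_s=230.0,       # cruise speed (m/s)
                 l_over_d=16.0)     # lift-to-drag ratio
print(f"Fuel required: {wf:,.0f} kg")
```

With these assumptions the estimate comes out at a few tonnes of fuel for a 1500 km sector, which is the right order of magnitude for an A320-class aircraft.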
Assuming that the design life is 60,000 flying hours over 25 years and that the aircraft are introduced in 2018, a comparison of fuel costs for a single aircraft for the kerosene-based baseline versus the proposed LCH4 variant is presented in Figure 6. To approximate the acquisition cost for both an LCH4 aircraft and a regular Jet-A kerosene A320, a payment plan was used incorporating a 15% upfront payment and subsequent 4.9% p.a. repayments, equal to the depreciation rate of an A320 aircraft, over an 18-year loan with 7.5% annual compound interest. The market price for an A320 is US$91.5 million and is assumed to be US$120 million for an LCH4 aircraft [23,24].
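One plausible reading of that payment plan is sketched below: 15% of the list price up front, then annual repayments of 4.9% of the list price against the balance, with 7.5% interest charged on the amount still owed each year. The exact financial structure used by the authors is not fully specified, so this function and its repayment logic are an assumption for illustration only.

```python
# Sketch of an assumed acquisition payment plan (NOT the authors' model):
# 15% upfront, yearly principal repayments of 4.9% of the list price,
# 7.5% compound interest on the outstanding balance, over 18 years.

def yearly_payments(list_price, upfront=0.15, repay_rate=0.049,
                    interest=0.075, years=18):
    """Return the upfront payment followed by each year's payment."""
    balance = list_price * (1 - upfront)
    payments = [list_price * upfront]
    for _ in range(years):
        interest_due = balance * interest
        principal = min(list_price * repay_rate, balance)
        payments.append(principal + interest_due)
        balance -= principal
    return payments

a320 = yearly_payments(91.5e6)    # A320 list price [23,24]
lch4 = yearly_payments(120.0e6)   # assumed LCH4 variant price
print(f"Total paid A320: US${sum(a320)/1e6:.0f}M, LCH4: US${sum(lch4)/1e6:.0f}M")
```

Under this assumed structure the total paid exceeds the list price because of the interest on the outstanding balance, which is why the acquisition cost curves in Figure 7 accumulate over many years rather than being a one-off expense.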
Figure 6. Single Aircraft Yearly Fuel Costs.
Figure 7. Single Aircraft Cumulative Fuel and Acquisition Costs.
Figures 6 and 7 illustrate the extent to which the discrepancy in fuel price between Jet-A kerosene and LNG affects the cost development. It can be seen in Figure 7 that despite the market price of an LCH4 aircraft being >130% of an A320, an airline operating an LCH4 aircraft would still make a saving in the first year of operation when considering fuel and acquisition costs.

4. Global Introduction
To approximate the introduction of LCH4 aircraft to the global fleet and a subsequent transition away from Jet-A kerosene fuelled aircraft, a gradual approach was considered, based on the production rate of A320 aircraft in 2012 (329 aircraft produced) [25]. The proposed transition includes a replacement of 10% of all Airbus A320s produced in 2018 with a comparable LCH4 aircraft and an additional 7.5% in each subsequent year, until all aircraft produced are LCH4 variants in 2030. Figure 8 shows a comparison of the total fuel costs for the global fleet of LCH4 aircraft produced against an equivalent number of A320 aircraft. Similarly, Figure 9 shows the total fleet fuel cost comparison with the addition of the aircraft acquisition costs according to the payment plan described above.
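The transition schedule above can be reproduced directly: a 10% share of the 329-aircraft annual production in 2018, rising by 7.5 percentage points per year until all production is LCH4 in 2030. The cumulative totals this yields match the "Total LCH4 Aircraft" column of Table 2.

```python
# Sketch of the fleet-transition schedule: 10% of annual production in
# 2018, +7.5 percentage points each year, reaching 100% in 2030.

ANNUAL_PRODUCTION = 329  # A320 aircraft produced in 2012 [25]

def cumulative_lch4_fleet(start_year=2018, end_year=2030):
    """Cumulative number of LCH4 aircraft in service per year."""
    fleet, total = {}, 0.0
    for i, year in enumerate(range(start_year, end_year + 1)):
        share = min(0.10 + 0.075 * i, 1.0)      # LCH4 share of production
        total += share * ANNUAL_PRODUCTION
        fleet[year] = round(total)
    return fleet

fleet = cumulative_lch4_fleet()
print(fleet[2018], fleet[2028])  # → 33 1719, matching Table 2
```

The 2018 value (33 aircraft) and the 2028 value (1719 aircraft) agree with the figures tabulated in Table 2, confirming the schedule is the one described in the text.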
Figure 8. Global Fleet Yearly Fuel Costs.
Figure 9. Global Fleet Yearly Fuel and Acquisition Costs.
5. LNG Infrastructure at airports
One of the biggest challenges in the LCH4 transition is the lack of infrastructure at airports. In order to provide LCH4 at airports, two possible scenarios were considered (Figure 10):

Scenario 1: LCH4 delivered via trucks to storage tanks at the airport (LCH4 truck, then storage & refuelling).
Scenario 2: Methane delivered via pipelines directly to an airport on-site liquefaction facility (pipeline, liquefaction, then storage & refuelling).

Figure 10. LCH4 airport supply and storage.
The most viable scenario will depend on the scale of LCH4 operations at the airport and the vicinity of the airport to an existing liquefaction plant. Chevron's Wheatstone LNG project in Australia will cost US$29.7 billion for a capacity of 8.9 million tonnes per annum [26]. In the LNG value chain, liquefaction accounts for 30 - 45% of the total cost, while storage and regasification makes up 15 - 25% [27]. Liquid biomethane from biogas has a very low feedgas cost but high upgrading and liquefaction costs [28]. According to TIAX [28], biogas-to-LNG liquefiers have three times the capital cost per capacity of LNG liquefiers due to the high upgrading costs and low production capacity. Similar to LNG, the capital cost decreases as capacity increases (i.e. mass production).

Table 1. Infrastructure cost analysis for two different scenarios.

                                                  Scenario 1                      Scenario 2
Infrastructure included                           Storage & refuelling facility   Liquefaction, storage & refuelling facility
No. of A320 flights/day                           50                              250
Total fuel needed (kg/annum)                      53.7 million                    268.5 million
Liquefaction cost (US$)                           -                               268.8 – 403.1 million
Storage & refuelling infrastructure cost (US$)    26.8 – 44.8 million             134.4 – 224.0 million
Total cost (US$)                                  26.8 – 44.8 million             403.2 – 627.1 million

Table 1 shows the estimated infrastructure cost for fuel storage and transport based on the two scenarios proposed in Figure 10. The capital cost per capacity of LCH4 produced decreases as the on-site production capacity increases [28]. Scenario 1 is more attractive to smaller airports as it requires a lower initial infrastructure investment but has limited supply capability. Due to the high cost of infrastructure, LCH4 may have a lower return rate but offers a stable long-term investment. To meet the airport fuel requirements for the proposed global introduction of LCH4 aircraft described above, an airport infrastructure conversion proposal is provided in Table 2, incorporating a combination of both small and large airport conversion scenarios. For the purposes of cost comparison, the higher cost for each scenario listed in Table 2 was taken and it was assumed that the cost of each individual
airport infrastructure project is incurred and paid in full by the aviation industry in the year in which it is listed in Table 2.

Table 2. Airport Infrastructure Conversion Proposal.

Year        Total LCH4   Small Airports   Small Airport Infra.    Large Airports   Large Airport Infra.
introduced  Aircraft     Required         Costs (US$M/year)       Required         Costs (US$M/year)
2018        33           5                224.00                  0                0
2019        90           8                134.40                  1                627.1
2020        173          15               313.60                  2                627.1
2021        280          25               448.00                  3                627.1
2022        411          34               403.20                  5                1,254.2
2023        568          46               537.60                  7                1,254.2
2024        748          57               492.80                  10               1,881.3
2025        954          72               672.00                  13               1,881.3
2026        1184         84               537.60                  17               2,508.4
2027        1439         101              761.60                  21               2,508.4
2028        1719         116              672.00                  26               3,135.5

6. Overall Industry Comparison
Accounting for all cost components discussed (fuel costs, acquisition costs and new infrastructure costs), the total yearly cost savings resulting from the introduction of LCH4 aircraft compared to an equivalent number of baseline Jet-A kerosene aircraft, for both the extrapolated and conservative fuel cost prognoses, are depicted in Figures 11 and 12 respectively. It should be noted that for the new airport infrastructure cost component, there is no comparable cost incurred for the Jet-A kerosene case, as it is assumed that all required infrastructure is already in place for the new aircraft produced.
Figure 11. Global Fleet Yearly Fuel Costs.
In the case that Jet-A kerosene costs continue to rise at the same rate as over the past 10 years, the aviation industry will run a relatively slight deficit before breaking even after 3 years and then experience increasing savings. Alternatively, for the conservative prognosis of the Jet-A kerosene price, the breakeven point occurs 7 years after the initiation of the proposed global transition. With regard to the relative fuel, aircraft acquisition and infrastructure costs, the aviation industry could make a net saving of US$4 billion to US$47 billion within 10 years if LCH4 aircraft are introduced into the global fleet compared to the continued
use of the Jet-A kerosene aircraft. This net saving represents 0.6 - 7.5% of the total aviation industry's 2012 DOC from only a very small fraction of the global aircraft fleet [3]. If the same rationale were applied to other aircraft models on a larger scale, the savings would multiply greatly.
Figure 12. Global Fleet Yearly Fuel and Acquisition Costs.
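The breakeven reasoning above reduces to accumulating yearly net savings (fuel savings, minus the acquisition premium and infrastructure spending) and finding the first year the running total turns positive. The cash-flow series below is invented for illustration; the paper's actual flows are shown in Figures 11 and 12.

```python
# Generic breakeven check: accumulate yearly net savings and report
# the first year the running total becomes non-negative. The flows
# below are illustrative (US$ billion), not the paper's data.

def breakeven_year(net_savings_by_year):
    """First year the cumulative net saving reaches zero, or None."""
    total = 0.0
    for year in sorted(net_savings_by_year):
        total += net_savings_by_year[year]
        if total >= 0:
            return year
    return None

# Assumed shape: early deficits from infrastructure spending,
# growing fuel savings later.
flows = {2018: -1.2, 2019: -0.8, 2020: -0.3, 2021: 0.9, 2022: 1.8, 2023: 2.9}
print(breakeven_year(flows))  # → 2022
```

This mirrors the qualitative picture in the text: a slight early deficit driven by infrastructure costs, followed by a breakeven point whose timing depends on how fast the fuel-price gap widens.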
7. Conclusion
The purpose of this study was to assess the potential of using LCH4 as an alternative fuel source for aircraft and to provide a general analysis of some of the key costs in order to discern the appropriateness of further study. Although the scope and results of the study remain limited to the specific scenarios outlined, the impact that the current (and continually rising) Jet-A kerosene price is having on the commercial airline industry, combined with the favourable results seen in the overall industry comparison, illustrates the merit that further study of and investment in LCH4 for commercial aviation may have for the industry as a whole. The design of LCH4 aircraft is not a significant challenge, as such aircraft have been designed and operated in the past. The most significant upfront investment is in the infrastructure required for the supply and storage of LCH4 at airports. However, if the price of kerosene continues to rise as expected, conservative estimates show that breakeven is possible about 7 years after the transition to LCH4 begins, with a net saving of US$4 billion to US$47 billion within 10 years if LCH4 aircraft are introduced into the global fleet, compared to the continued use of Jet-A kerosene aircraft.
References

[1] BBC News, Cathay Pacific reports 83% plunge in annual profit, 2013 [cited 2013 12 March]; Available from: http://www.bbc.co.uk/news/business-21766398.
[2] O'Sullivan, M., Rising jet fuel prices weigh heavily on Qantas, Virgin, 2012 [cited 2012 1 December]; Available from: http://www.theage.com.au/business/rising-jet-fuel-prices-weigh-heavily-on-qantas-virgin-20120126-1qjm7.html.
[3] IATA, Financial forecast, 2012 [cited 2013 3 March]; Available from: http://www.iata.org/whatwedo/Documents/economics/Industry-Outlook-Dec2012.pdf.
[4] Australian Aviation, More maintenance jobs go as Qantas continues engineering restructure, 2012 [cited 2013 25 March]; Available from: http://australianaviation.com.au/2012/11/more-maintenance-jobs-go-as-qantas-continues-engineering-restructure/.
[5] Airbus S.A.S., Global market forecast 2012-2031: Navigating the future, 2012 [cited 2013 20 March]; Available from: http://www.airbus.com/company/market/forecast/.
[6] FLIGHT International, Beech flies with methane, 10 October 1981 [cited 2013 16 March]; Available from: http://www.flightglobal.com/FlightPDFArchive/1981/1981%20%203155.PDF.
[7] Carson, L.K., et al., Study of methane fuel for subsonic transport aircraft, NASA CR-159320, 1980, Lockheed-California Company, California.
[8] Tupolev, Cryogenic aircraft: development of cryogenic fuel aircraft, 1989 [cited 2013 10 February]; Available from: http://www.tupolev.ru/english/Show.asp?SectionID=82.
[9] Greitzer, E.M., et al., N+3 Aircraft concept designs and trade studies, NASA/CR-2010-216794/VOL2, 2010.
[10] Kawai, R.T., Benefit Potential for a Cost Efficient Dual Fuel BWB, in: 51st AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, 2013, AIAA, Texas.
[11] Gibbs, J., Seigel, D., and Donaldson, A., A natural gas supplementary fuel system to improve air quality and energy security, in: 50th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, 2012, American Institute of Aeronautics and Astronautics.
[12] Foss, M., Introduction to LNG, 2007, Centre for Energy and Economics, Texas.
[13] Bil, C., Dorrington, G.E., Conroy, T., Spiteri, L., Spiteri, M., and Burston, M., Conceptual Design of Sustainable Liquid Methane Fuelled Passenger Aircraft, in: 20th ISPE International Conference on Concurrent Engineering, 2013.
[14] EIA, Price of U.S. Natural Gas, 2013 [cited 2013 6 April]; Available from: http://www.eia.gov/dnav/ng/hist/n9103us3m.htm.
[15] ICAO, Existing, planned or envisaged schemes including market-based measures, 2012.
[16] EIA, World shale gas resources: an initial assessment of 14 regions outside the United States, 2011, Independent Statistics & Analysis, Washington, DC.
[17] Federal Energy Regulatory Commission, World LNG Estimated April 2013 Landed Prices, 2013 [cited 2013 4 April]; Available from: http://www.ferc.gov/market-oversight/mkt-gas/overview/ngas-ovr-lngwld-pr-est.pdf.
[18] U.S. Energy Information Administration, Price of US natural gas LNG imports, 2013 [cited 2013 15 March]; Available from: http://www.eia.gov/dnav/ng/hist/n9103us3m.htm.
[19] Dhillon, B.S., Life Cycle Costing for Engineers, 2010, Taylor & Francis Group, NW.
[20] Airbus S.A.S., Global market forecast 2012-2031: Navigating the future, 2012 [cited 2013 20 March]; Available from: http://www.airbus.com/company/market/forecast/.
[21] Airbus, Aircraft Characteristics - Airport and Maintenance Planning, 2012.
[22] Anderson, J.D., Fundamentals of Aerodynamics, 5th Edn, 2005, McGraw Hill, New York.
[23] British Airways, 2009/10 Annual Report and Accounts, 2011 [cited 2013 3 June]; Available from: http://www.britishairways.com/cms/global/microsites/ba_reports0910/financial/notes/note14.html.
[24] Airbus, Airbus Aircraft Average List Prices 2013, 2013 [cited 2013 1 April]; Available from: http://www.airbus.com/presscentre/corporate-information/key-documents/?eID=dam_frontend_push&docID=14849.
[25] Airbus, Airbus for Analysts, 2013 [cited 2013 1 April]; Available from: http://www.airbus.com/tools/airbusfor/analysts/?contentId=%5B_TABLE%3Att_content%3B_FIELD53Auid%5D%2C&cHash=22935adfac92fcbbd4ba4e1441d13383.
[26] The Economist, LNG: A Liquid Market, The Economist, 31 March 2012 [cited 2012 6 March]; Available from: http://www.economist.com/node/21558456.
[27] U.S. Energy Information Administration, The global LNG market - status and outlook, 2003.
[28] TIAX, U.S. and Canadian Natural Gas Vehicle Market Analysis: Liquefied Natural Gas Infrastructure, 2013.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-102
Case Studies for Concurrent Engineering Concept in Shipbuilding Industry

Kazuo HIEKATA a,1 and Matthias GRAU b
a The University of Tokyo, Tokyo, Japan
b PROSTEP AG, Darmstadt, Germany
Abstract. In the shipbuilding process, design information is generated in each phase to shape products and operations. In each phase, design activities are carried out with a high level of concurrency supported by various computer software systems, though the quality of products and the efficiency of the concurrent development process depend highly on the experience and insight of skilled experts. Detailed design information is difficult to share, and design conflicts are solved in a common effort by design engineers in downstream design stages. Data sharing across design sections and simulation of the construction process to predict time and cost are the key factors for concurrent engineering in the shipbuilding industry. The concurrent engineering process in shipbuilding will become more accurate and efficient as design knowledge and simulation results accumulate. This paper reviews the shipbuilding product creation process and demonstrates practical usage through typical, comprehensive use cases from design and manufacturing.

Keywords. Industrial case studies, shipbuilding, design process, information systems
Introduction

In the shipbuilding industry, the process of operation is very complex, and a shipbuilding project from inquiry to delivery lasts quite long, about two years or more. The price of large tankers or bulk carriers ranges from several million USD up to ten million. Passenger ships, LNG carriers or offshore structures are much more expensive. A characteristic of shipbuilding is the huge volume of supplied material that needs to be procured and managed in addition to the design and construction process during shipbuilding projects. To present the importance of the concurrent engineering concept in shipbuilding, this paper illustrates the shipbuilding process first. Additionally, related works are reviewed to show problems many shipbuilders are confronted with, and several case studies are presented. The basic shipbuilding design process is illustrated in Figure 1; the detailed process can be found in [1]. The basic process is very similar to other manufacturing industries which produce products for individual customers. Design and manufacturing information are required, and often a design is reused together with the manufacturing information such as shop floor drawings. Decisions in the design process generate
1 Corresponding Author.
design data, and the following downstream processes proceed based on that design data.

[Figure 1 plots the amount of design data against time across the design phases (Concept/Preliminary, Basic Design, Detailed Design, Production Design, Production); generating detailed design and manufacturing information for all parts in an earlier design stage shifts the curve.]

Figure 1. Standard design process and increase of design data in time history.
In concept and preliminary design, the basic specification is provided by the customer and the designers have to create a basic plan for the bidding. As the customer is interested in the cost of purchasing a vessel, the shipyard has to estimate the accurate cost for the delivery of the ship. A highly accurate estimation of the production is important for winning the bidding and improving the profit rate for the delivery of a ship. Estimation skills based on deep technical knowledge are required. However, gathering data from past projects for simulations with commercial software systems is getting more and more important for accurately estimating current costs. In preliminary design, designers work on key drawings such as the GAP (General Arrangement Plan), Lines and Midship section drawings. The GAP is a key drawing for defining basic dimensions, capacities and so on. The Lines drawing defines the hydrostatic performance by describing the shapes of the hull with curved surfaces. The Midship section drawing covers the most important part of the ship for the approval of its structural strength. Key performance parameters such as speed, fuel consumption, stability, basic structural plan, main engine and other key equipment are determined in addition to the three key drawings. Ship capability and key performance are confirmed during the basic design process and a revised basic design will be provided for the contract. In the following detailed design phase, the detailed features of the product are defined. As an example, the drawings developed in the detailed design phase could present handles for valves, small stiffeners, steel plates with curvature for the hull and purchased products. There are not many differences from other manufacturing industries in detailed design. One characteristic of shipbuilding might be that most of the parts for the ship hull and structures are made by cutting steel plates; thus, a definition of standard parts is difficult.
The number of parts is in the range of 100k to one million for a ship, so this might also be a characteristic of shipbuilding. Depending on the construction process, the design model defined during the detailed design phase
may or may not depend on the manufacturing facilities. In production design, some of the drawings might be instructions for workers in shipyards or considerations for the manufacturing process, such as margins. This phase may not include design trade-offs; rather, it is a kind of planning for optimal manufacturing. The drawings do not only show shapes, dimensions and specifications of the parts, but also how to make parts or fabricate assemblies. To construct a complete ship in dry docks, the whole ship hull is divided into building blocks to fit the manufacturing facilities and capacities. Owing to the limitations of the manufacturing facilities, the production design can vary even for the same ship with the same detailed design model. The manufacturing information in shipbuilding considers the large deformation of steel structures caused by the welding process during fabrication. To shorten the lead time, the whole process goes on in a concurrent manner. Detailed structure or outfitting design cannot wait for the final design of the upstream process. Software systems for design and construction are implementing many features and trying to provide integrated environments to facilitate the concurrent engineering process, though they used to be standalone systems such as CAD or numerical control systems. To improve the efficiency of the shipbuilding process and handle the huge number of materials, PLM, ERP and more sophisticated software are increasingly used by shipyards. There is a tendency to employ new integrated information systems in shipyards, although the limitations arising from legacy design data and manufacturing facilities still exist. In this paper, the general shipbuilding process is described and details of two case studies are illustrated. Finally, the future trend of concurrent engineering is discussed.
1. Ship Design and Construction Process

Several efforts to improve the ship design and construction process are reviewed in this chapter. The improvements and efforts relate to software systems and the concurrent engineering concept.
1.1. Concurrent Engineering in Early Design Phase

The concept, preliminary and basic design phases are considered early design stages. Literature for these phases is reviewed here. The purpose of concept and preliminary design is to support the bidding process. Detailed information is not required during this design phase; nevertheless, the shipyard should know the cost of materials, man hours, major purchased equipment and the feasibility of the delivery date alongside the ongoing projects. The concept and preliminary design must meet the customer's requirements and, at the same time, be an optimal design solution for the shipyard in terms of constructability. The shipyard has to make a design proposal considering many trade-offs in the shipyard's capabilities. Speed, fuel consumption, hydrostatic performance, selection of the main engine, strength of structure and construction weight are among the considerations in design. International regulations by international maritime organizations and loading facilities in ports might be limitations for the design work. Even today, to achieve a balance in trade-offs, this phase of the design process depends highly on human skills. Therefore shipyards have to assign talented and capable people to the concept and preliminary design phase
because this phase has a huge impact on the whole cost and schedule. Meijer [2] focuses on the pre-contract scheduling problem and captures the knowledge of experts for the process. Production scheduling tasks in the pre-contract phase are based on knowledge and experience. The knowledge captured includes, for example, detailed configurations of manufacturing facilities to optimize the turnover of the building dock. NAPA is a software company providing a suite of software for ship design, and NAPA software facilitates the utilization of 3D design models in the preliminary design phase [3]. Complex interactions, such as hydrostatic performance and the compartment plan, are calculated in the software. Many types of solvers are employed in the basic design phase [4]: CFD, evacuation simulation, structural analysis, vibration and acoustics are shown. Papanikolaou shows a multi-objective optimization ship design case study [5]. Integration efforts for CAD systems and engineering software are also active. Bons introduced the latest status of MARIN's software [6]. Hydrodynamic design tools are a kind of standalone software because of their specialized purpose. The integration of a third-party software framework enables specialized software tools for hydrodynamics to be applied in the early design stage. Ginnis integrates an in-house wave-resistance solver with CATIA to improve the efficiency of hull optimization [7]. As for structural design, Shibasaki utilizes a 3D design model for structural analysis within an early design stage [8]. The advantages of plenty of design and production data for the downstream process are illustrated by Nakao [9]. The quality of the design model has an impact on the concurrency, quality and efficiency of the downstream process.

1.2. Detailed Design and Construction Process

All the drawings of parts are provided in the detailed design phase. The complete details of the product are defined.
Requirements of the ship operating companies are satisfied in the basic design, and the usability for the crews of the vessels depends on the quality of the detailed design phase. Arrangement of small handling equipment like handles and valves is important for daily operations, so ship owners send supervisors to the shipyard to check and revise the detailed features of vessels to improve usability. The cost of supervising work varies with the quality of the shipyard, so, although the bidding price is important, gaining the customers' confidence in technological aspects is also important for winning orders in the shipbuilding industry. The drawings contain detailed structural members, piping, pipe supports and insulators, machinery outfitting, electrical outfitting and many other details. The detailed design section also works on the selection and purchase of the outfitting equipment. Structural design and outfitting design teams work on the same hull and often have to resolve interferences across the design sections. The structural design team does not want other teams to make holes for pipes, while the outfitting team needs holes for the efficient routing of pipes or cables, as shown in Figure 2.
Figure 2. Detailed design process with coordination across the design sections.
After the production design phase, all drawings are prepared. The drawings include not only shapes, dimensions and specifications of the parts, but also how to make parts or fabricate assemblies. To construct a final ship in dry docks, the whole ship hull is divided into building blocks to fit the manufacturing facilities and capacities. Because of the large deformation of steel structures caused by the welding process during fabrication, margins are carefully defined for each part considering the deformation. Dimensions and shapes of the parts are defined in the detailed design phase, but the actual dimensions considering the deformation during the fabrication process are shown in the production design stage. Production proceeds based on the production design. Basically, once all the procedures defined in the production design phase are completed in production, the final product is finished. Problems may occur even in the production process. Ideally, the production section just constructs a ship based on the information defined in the prior stages. However, there are still many factors that fluctuate the schedule of the shipyard. Delays in the production schedule due to weather conditions, late delivery of purchases and unexpected problems with resources or facilities often happen in practice. Moreover, rework caused by inconsistencies and mistakes in the detailed or production design stage also happens in the production phase. Carrying and installing large equipment into the ship building blocks is done in a complicated procedure because of the limitation and transitions of the work area along with the progress of production. The downstream processes such as detailed design, production design and production should be straightforward in a concurrent manner, but coordination is still required.
2. Case Studies

Historical data and simulation technology are potential solutions for the problems caused by the complexity of the design and production process. Two case studies about data management and simulation are shown in this chapter.
K. Hiekata and M. Grau / Case Studies for Concurrent Engineering Concept
2.1. Data Management in Production Process

Recently, several accuracy evaluation systems using measured data of assemblies obtained from laser scanners have been proposed. Laser scanners measure the whole surface of the members as point cloud data. The measured data can be used for evaluation, such as checking the accuracy of shipbuilding blocks [10] or the surfaces of shell plates [11]. The measured data and evaluation results carry a great deal of information, so these data are expected to help discover knowledge about the manufacturing process. However, in most shipyards, searching for and reusing evaluation results is difficult because large amounts of accuracy information are stored without adequate data management. It is therefore recommended to employ a data management system for the measurement data and accuracy evaluation results of shipbuilding assemblies gauged in the manufacturing process. The proposed system has three functions: (1) accuracy evaluation, (2) accuracy data accumulation, and (3) search and reuse of accuracy data. The objective of this study is to build a method for identifying knowledge, know-how and techniques in the field, based on the data managed by the developed system and evaluated from the three-dimensional measurement data of the ship construction process. An overview of the whole system is shown in Figure 3. All types of data are stored in the database, and metadata is assigned to each item, so that any data stored in the system can be reached efficiently.
Figure 3. Overview of data management system.
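At its core, the accuracy evaluation compares the scanned point cloud against the design geometry. A minimal pure-Python sketch of computing such gaps with a brute-force nearest-neighbour search; the toy plate data are invented for illustration and not from the paper:

```python
import math

def accuracy_gaps(measured, design):
    """For each measured point, the distance to its nearest design point.
    Brute force is fine for this toy example; real systems index the cloud."""
    return [min(math.dist(p, q) for q in design) for p in measured]

# Toy data: a flat 3x3 design grid vs. a measurement offset by 2 mm in z.
design = [(x, y, 0.0) for x in range(3) for y in range(3)]
measured = [(x, y, 0.002) for x in range(3) for y in range(3)]

gaps = accuracy_gaps(measured, design)
print(round(max(gaps), 6))  # worst-case gap: 0.002
```

In practice, production systems replace the brute-force scan with a spatial index (e.g. a k-d tree) so that block-sized point clouds can be evaluated in reasonable time.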
In the accuracy evaluation system, the accuracy of assemblies is evaluated as the gaps between measured data and design data. In the accuracy data accumulation system, measured data, design data and evaluation results are stored. Metadata in RDF format [12] is attached to each data item, so measured data can be retrieved using the RDF metadata. Search results are displayed together with their metadata so that users can work with the measured data and accuracy evaluation results easily. Some evaluation results stored in the proposed system are shown in Figure 4. The figure gives an overview of the shipbuilding blocks and the deformation of internal structural members calculated from the measured data accumulated by laser scanners. The vertical axis of the graph is the offset along the depth of the blocks and the horizontal axis corresponds to the width. The two measured datasets were acquired three months apart; nevertheless, the same deformation tendency can be observed. The findings obtained from such comparisons can be utilized for redesign of the manufacturing process. Accumulating these data will enable shipyards to carry out this kind of analysis more easily and to avoid uncertainties in the production process.
Figure 4. Deformation in Fabrication Process.
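The retrieval idea behind the RDF metadata can be illustrated with plain subject-predicate-object triples and pattern matching. A real deployment would use an RDF store queried via SPARQL; this pure-Python sketch only shows the principle, and all identifiers below are invented:

```python
# Metadata as subject-predicate-object triples, queried by pattern matching.
triples = {
    ("scan-001", "measures", "block-A12"),
    ("scan-001", "acquiredOn", "2013-04-02"),
    ("scan-001", "instrument", "laser-scanner-1"),
    ("scan-047", "measures", "block-A12"),
    ("scan-047", "acquiredOn", "2013-07-05"),
}

def match(pattern):
    """Return triples matching an (s, p, o) pattern; None is a wildcard."""
    return [t for t in triples
            if all(w is None or w == v for w, v in zip(pattern, t))]

# All scans of block A12, e.g. to compare deformation over an interval.
scans = sorted(t[0] for t in match((None, "measures", "block-A12")))
print(scans)  # ['scan-001', 'scan-047']
```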
2.2. Simulation of Production Process

Simulation techniques are crucial for predicting downstream processes. This case study proposes a methodology to evaluate organizational performance based on the research in [13]. The developed system defines workers, facilities, activity models and a production strategy. The evaluation of organizational performance proceeds in the following steps: (1) create the enterprise model and strategy, (2) calculate a work plan by optimizing the weights of each strategy, and (3) compare the basic scenario with the scenario after a situation changes. The system proposes a work plan by means of a genetic algorithm. The plan minimizes the total cost of performing the work activities, considering the weight of each production strategy. The proposed methodology is applied to several sample scenarios in a fabrication shop. The results show that the methodology can successfully evaluate organizational performance by analyzing the work plan. In addition, the methodology quantitatively evaluates the effect of organizational improvements and of sudden trouble. Figure 5 shows an overview of the proposed method.
Figure 5. Overview of the Proposed Method.
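The work-plan search by genetic algorithm can be sketched as follows. The cost table, operators and parameters are simplified illustrations, not the paper's actual model (which also weighs multiple production strategies in the objective):

```python
import random

random.seed(0)  # deterministic toy run

# Hypothetical cost table: COST[t][w] is the cost of worker w doing task t.
COST = [
    [4, 2, 8],   # task 0
    [3, 7, 1],   # task 1
    [6, 5, 2],   # task 2
    [2, 9, 4],   # task 3
]
N_TASKS, N_WORKERS = len(COST), len(COST[0])

def fitness(plan):
    """Total cost of a plan (task index -> worker index); lower is better.
    A weighted multi-strategy objective would add more terms here."""
    return sum(COST[t][w] for t, w in enumerate(plan))

def evolve(pop_size=30, generations=50, mutation_rate=0.1):
    pop = [[random.randrange(N_WORKERS) for _ in range(N_TASKS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]             # keep the cheapest plans
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_TASKS)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:     # occasional point mutation
                child[random.randrange(N_TASKS)] = random.randrange(N_WORKERS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
# The global optimum assigns each task its cheapest worker (total cost 7).
print(best, fitness(best))
```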
Initially, an enterprise model is developed based on the workers and facilities in an organization, including the different work activities and skill sets, and the production strategy is made by setting each parameter. Next, the optimal work plan for the enterprise model is calculated by designing the parameters of the production strategy. Finally, the organizational performance is examined by evaluating the optimal work plan and the parameters of the production strategy. The skill set is a class of skills needed to perform the various activities in an organization. Workers, facilities and the tasks within the activities are defined by the skills in this set. The organization model is composed of workers, facilities and their capabilities or skills; workers and facilities are defined by their costs and by the presence or absence of skills in the set. The method is evaluated on the fabrication process of simple panel structures in the case study. The process model is shown in Figure 6. In the simulation scenario, 11 workers using 6 facilities work on making 10 panels. The result is shown in Figure 7 in Gantt chart format. The weight vector of the strategy for assigning activities to workers is also obtained. The simulator shows that, in case of a resource shortage, the job allocation strategy changes from cost saving to first-in first-out in order to keep the delivery date.
Figure 6. Process Model for Fabrication of Simple Panel Structure.
Figure 7. Calculated Schedule by Proposed System.
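The organization model described above (workers and facilities characterised by cost and skills, tasks by the skills they require) can be captured with simple data structures. A hypothetical sketch with invented names and numbers, using the cost-saving allocation strategy as the selection rule:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Resource:
    """A worker or facility: its cost and the skills it possesses."""
    name: str
    hourly_cost: float
    skills: frozenset

@dataclass(frozen=True)
class Task:
    name: str
    required_skills: frozenset
    hours: float

def eligible(task, resources):
    """Resources whose skill set covers the task, cheapest first
    (the cost-saving strategy; a FIFO strategy would sort by queue order)."""
    return sorted((r for r in resources if task.required_skills <= r.skills),
                  key=lambda r: r.hourly_cost)

workers = [
    Resource("welder-1", 40.0, frozenset({"welding"})),
    Resource("welder-2", 55.0, frozenset({"welding", "fitting"})),
    Resource("fitter-1", 35.0, frozenset({"fitting"})),
]
job = Task("panel-seam", frozenset({"welding"}), hours=3.0)

best = eligible(job, workers)[0]
print(best.name, best.hourly_cost * job.hours)  # welder-1 120.0
```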
3. Future Directions for Concurrent Engineering in Shipbuilding

Design engineers have to consider the complex and concurrent processes of shipyards. The same situation can be seen in the subsequent basic design phase. To manage and predict the complex and concurrent shipbuilding process, essentially two types of effort are proposed for the early design stage. The first approach is to accumulate design and construction experience. In the early design phase, design engineers work from similar past projects: the designers identify the differences between the past design and the new requirements and estimate the impact on the new design. The second approach is simulation. Production scheduling and performance measures of the ship (such as fuel consumption) are vital for bid creation. Whereas the concept design phase focuses on production scheduling, ship performance is the key topic in the preliminary design phase. The basic design focuses on defining the parameters of the product to meet the requirements. Though the trade-offs of design parameters across the design sections are taken into account in the prior stage, negotiations based on the actual design start in this phase. The case studies show how to use the accuracy measurement data and simulation results. For the deployment of concurrent engineering in shipbuilding, data accumulation and analysis together with simulation techniques will be key technologies.
References

[1] Storch RL, Hammon CP, Bunch HM (1988) Ship Production. Cornell Maritime Press/Tidewater Publishers.
[2] Meijer K, Pruyn J, Klooster J (2009) Early Stage Planning Support. In: Proceedings of the International Conference on Computer Applications in Shipbuilding 2009, Vol. 2, Paper 23.
[3] Kuutti I, Mizutani N, Kim HS (2011) Efficient Integration of 3D Design With Engineering at the Early Design Stages. In: Proceedings of the International Conference on Computer Applications in Shipbuilding 2011, Vol. 2, Paper 1.
[4] Fach K, Bertram V, Jefferies H (2009) Advanced Simulations for Ship Design and Redesign. In: Proceedings of the International Conference on Computer Applications in Shipbuilding 2009, Vol. 2, Paper 8.
[5] Papanikolaou A, Zaraphonitis G, Harries S, Wilken M (2011) Integrated Design and Multiobjective Optimization Approach to Ship Design. In: Proceedings of the International Conference on Computer Applications in Shipbuilding 2011, Vol. 3, Paper 4.
[6] Bons A (2009) QSHIP: Advanced Use of Hydromechanics in the Early Design Stage. In: Proceedings of the International Conference on Computer Applications in Shipbuilding 2009, Vol. 3, Paper 6.
[7] Ginnis AI, Feurer C, Belibassakis KA, Kaklis PD, Kostas KV, Politis CG (2011) A CATIA(R) Ship-Parametric Model for Isogeometric Hull Optimization With Respect to Wave Resistance. In: Proceedings of the International Conference on Computer Applications in Shipbuilding 2011, Vol. 1, Paper 2.
[8] Shibasaki K, Nishimura Y (2009) Utilization of 3D-CAD System at Early Design Stage and Powerful Interface Between 3D-CAD and FE-Analysis. In: Proceedings of the International Conference on Computer Applications in Shipbuilding 2009, Vol. 1, Paper 7.
[9] Nakao Y, Hirai K, Hirayama T, Ito K (2011) High Precision Basic Design. In: Proceedings of the International Conference on Computer Applications in Shipbuilding 2011, Vol. 2, Paper 5.
[10] Hiekata K, Yamato H, Enomoto M, Kimura S (2011) Accuracy Evaluation System for Shipbuilding Blocks Using Design Data and Point Cloud Data. In: Improving Complex Systems Today, Proceedings of the 18th ISPE International Conference on Concurrent Engineering, Frey DD et al. (eds.), Springer-Verlag, London, pp. 377-384.
[11] Hiekata K, Yamato H, Enomoto M, Oida Y, Furukawa Y, Makino Y, Sugihiro T (2010) Development of Accuracy Evaluation System of Curved Shell Plate by Laser Scanner. In: New World Situation: New Directions in Concurrent Engineering, Proceedings of the 17th ISPE International Conference on Concurrent Engineering, Pokojski J et al. (eds.), Springer-Verlag, London, pp. 47-54.
[12] Klyne G, Carroll J (2004) Resource Description Framework (RDF): Concepts and Abstract Syntax. W3C Recommendation, http://www.w3.org/TR/rdf-concepts/, accessed 15 Nov 2013.
[13] Mitsuyuki T, Hiekata K, Yamato H, Haijima K (2012) A Study on Evaluation of Organizational Performance Considering the Workers and Facilities. In: Concurrent Engineering Approaches for Sustainable Product Development in a Multi-Disciplinary Environment, Proceedings of the 19th ISPE International Conference on Concurrent Engineering, Stjepandić J, Rock G, Bil C (eds.), Springer, London, pp. 533-544.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-112
The Sources and Methods of Engineering Design Requirement

Xuemeng LI a,1, Zhinan ZHANG b, Saeema AHMED-KRISTENSEN a
a Department of Management Engineering, Technical University of Denmark, Denmark
b School of Mechanical Engineering, Shanghai Jiao Tong University, China
Abstract. The increasing interest in emerging markets drives product development activities for these markets. As a first step when expanding into a new market, companies need to understand its specific design requirements. Requirements from external sources are particularly challenging to define in a new context. This paper focuses on understanding design requirement sources at the requirement elicitation phase. It aims at proposing an improved classification of design requirement sources that considers emerging markets, and at presenting current methods for eliciting requirements for each source. The applicability of these methods and their adaptation for emerging markets are discussed.

Keywords. Design requirement source, emerging markets, classification
Introduction

A design requirement is commonly accepted as a description that defines what the product should do (not how to do it) and sets up the boundaries of the product solution space [1]. Defining and expressing the design requirements is normally the initial step of a product development project. Design requirement identification is an iterative process which co-evolves with the product development process. Deficiencies in requirements can lead to wasted time and money and even the failure of the project ([2] cited from [3]). Hence, it is important to define the requirements correctly from an early stage. Efforts have been devoted to descriptive research for understanding practice, and to the development of prescriptive methods and theories for improving the quality of the defined requirement set (specification) [4]. Jiao and Chen [5] summarized a general requirement management process (Figure 1), which includes three phases: requirement elicitation, analysis, and specification. The outcome of each phase contributes to the functional requirements (product specification).
Figure 1. Customer requirement management process [5]
1 Corresponding Author: Xuemeng LI, Building 426, Technical University of Denmark, 2800 Lyngby; E-mail: [email protected]
X. Li et al. / The Sources and Methods of Engineering Design Requirement
The manufacturing industry's interest in emerging markets has been increasing dramatically. However, it is recognized that emerging markets (e.g. India, China, and Brazil) have social, cultural, political and economic contexts that differ from the previously established markets of western companies (e.g. [6]). Globalising successful product development to emerging markets requires specific design requirements from the local market. Multicultural factors can make it challenging for companies to elicit requirements, especially from external sources grounded in the local context. This makes the elicitation and management of design requirements even more critical to the success of product development [7]. However, the literature review revealed that only a few studies have investigated the sources of design requirements: most articles refer to some sources (e.g. customers and regulation) but give no complete overview of all sources. This highlights the need for research to understand design requirement sources in this new context. This paper focuses on discussing the sources for eliciting design requirements. The goal is twofold: first, to propose a classification of design requirement sources based on a review of the literature and improved with respect to emerging markets; second, to present current methods for eliciting requirements according to the classification. The applicability of current methods in emerging markets is briefly discussed and future studies are proposed.
1. Design requirement sources from the literature

Design requirements concern complex constraints and conditions and call for comprehensive information from multiple sources. An overview of all the possible sources can contribute to the completeness of design requirement elicitation. In addition, the traceability of information sources enables the team to understand the reasons for certain decisions ([8] cited from [9]). Sudin [10] identified a list of design requirement sources based on interview analysis, in which the sources were sorted into two groups:
• Human: client, end user, market analysis report, colleagues, the designer's expected solution, the designer's own requirements.
• Artefact: semi-developed specification, proposed solution, existing product, previous project, design guideline, user guidelines.
Other studies also suggested colleagues, customers, documents, and other departments (i.e. sales, marketing and manufacturing) ([11] cited from [10]), as well as customers, users, suppliers, and written material (i.e. books, trade journals, technical manuals) ([12] cited from [10]). Gershenson and Stauffer [13] proposed a taxonomy that distinguishes four sources from which requirements can be generated, i.e. end user, corporate (the producer itself), technical (mother nature) and regulatory requirements (society); see Figure 2. The taxonomy can guide the development of design requirements by gathering and analysing information about each category and transforming it into design requirements [14].
Figure 2. Requirements cube showing the various types of requirements and how the information fits into the product definition process [13]
2. Research method

This paper takes the design requirement taxonomy established by Gershenson and Stauffer [13, 15, 16] as its basis. The improvement in the proposed classification was achieved by synthesizing the sources referred to in recent publications. 48 papers published since the year 2000 in engineering design journals, including Design Studies, Research in Engineering Design, the Journal of Engineering Design, and Concurrent Engineering: Research and Applications, were reviewed. The review started with relevant papers from those journals and two design requirement reviews [4, 5]; important references in the above papers were also included. Information about where requirements come from when a company establishes or changes design requirements was labelled and grouped in an affinity diagram. The requirement elicitation methods presented were selected based on the two reviews or from influential engineering design books (e.g. [17] and [18]).
3. Design requirement source classification

The new context of emerging markets can affect requirements. When eliciting design requirements, the project team frequently interacts with many factors (e.g. stakeholders and documents), both from the company's internal mechanisms and from the external environment, in order to collect a thorough set of requirements. The quality of information coming from external sources is particularly difficult to control due to the evident cultural, linguistic, and geographic barriers in emerging markets. This differentiates design requirements for emerging markets from those in the western context, assuming the internal mechanism remains relatively stable. From the review, a model (Figure 3) is proposed describing the relationship between the company frame (internal/external) and three main factors (i.e. Corporate, Technology, and Society/Environment) that influence design requirements:
• Corporate: the company itself. It concerns the company's organisational structure, strategic vision, available resources, etc.
• Technology: as defined by Gershenson and Stauffer [13], technology represents knowledge of e.g. engineering principles, material properties and physical laws. These are regarded as an internal factor because technical requirements only make sense once the relevant knowledge is known to the company.
• Society/Environment: all considerations of social and environmental aspects outside the company's frame, e.g. end users, infrastructures, and regulations. It is the most complex factor and can be extended into several subcategories.
Figure 3. What influences design requirements?
It should be noted that the distinction between internal and external is not absolute and static; rather, it is relative and dynamic. For example, production may be internal or external depending on the company structure. The requirements from different sources are not isolated but interconnected. Resources flow constantly between the internal mechanism and the external environment: a company may, for example, recruit new employees or cooperate with organisations to gain new knowledge. On this basis, a classification of design requirement sources is proposed with seven categories: corporate, technology, user, market competition, regional infrastructure, organisational infrastructure, and regulation. Table 1 displays the categories and examples found in the literature. The seven categories are explained in the following sections with a brief presentation of the methods used to elicit requirements.

3.1. Corporate

Requirements generated from the corporate category form the company's space for creating product solutions. The corporate category describes internal factors within a company. It concerns both the people and the activities in the company, for example departments, individuals (e.g. designers [10,22,23]), strategies and documented guidelines [10]. Lee and Thornton [21] prioritised corporate requirements after safety issues, statutory regulations and customer product requirements.
When entering emerging markets, the corporate category is assumed to stay the same in the new context unless globalisation has an impact on the organisational structure. Two aspects of this category are frequently mentioned, namely platform requirements [27] and requirements from existing products [10,23]. Platform requirements (relevant research can be found in [19]) or portfolio management (e.g. [20]) outline the strategic vision for developing the product. The requirements for a new product can be generated from the information accumulated from existing products [23].

Table 1. Design requirement sources classification

Category                                              Terms used in references (not all references listed)
Corporate                                             Corporate [13,21]; Designer [10,22,23]; Colleague [10]; Guideline [10]
Technology                                            Technical [13]; New technology trend [23]; Nature law [24]
Society/Environment: User                             End user [10,13,25]; Customer [21,26,27,28]; Client [10]
Society/Environment: Market competition               Competitor situation [27]; Marketing [10]; Competition [23]
Society/Environment: Regional infrastructure          Regional infrastructure [14,29]
Society/Environment: Organisational infrastructure    External stakeholder [3]
Society/Environment: Regulation                       Regulatory [13]; Regulation [14,21]; Legal requirement [27]
3.2. Technology

The technology category consists of scientific and engineering knowledge, e.g. engineering principles, which can be disseminated through experience and books. These requirements remain more or less the same in different markets and are closely related to a company's professional expertise and ability to acquire knowledge.

3.3. User

This category is defined to include both end users and customers/clients, i.e. all relevant individuals who would buy or use the product. It is without doubt the most critical and most frequently mentioned source of design requirements (e.g. [26], [30] and [31]). User requirements are often ambiguous and contain the most obscure and latent requirements to be investigated, which becomes even more challenging when entering a new market. Diverse cultures and social identities shape user habits and the way users think about and understand products. Additionally, in emerging markets, the mid- and lower end of the market is recognised as the most significant and dynamic [37]. A number of methods have been used to study users, for example interviews [17,18,32], focus groups [17,18,32], surveys [18,32], observations [17,32], brainstorming [18], scenarios [33,34], ethnographic studies [18], and customer complaints and warranty data [18]. User requirements should be weighed and prioritised to optimise the trade-off with requirements from other sources. The basic approach is to rate each requirement [17], either by calculating its importance from collected data or by having users score it in new surveys [32]. Maslow's hierarchy (e.g. [35]) categorises human needs into five levels: physiological needs, safety needs, love and belonging, esteem, and self-actualization, which helps to define the target group in a market; higher-level needs arise only once the lower-level needs are fulfilled. The Kano model distinguishes three types of user needs [36] with different prioritisations:
• Must-be need: the basic criterion of a product. If not fulfilled, users are extremely dissatisfied; if fulfilled, user satisfaction does not increase.
• One-dimensional need: user satisfaction is proportional to the level of fulfilment.
• Attractive need: once fulfilled, user satisfaction increases dramatically.

3.4. Market competition

This category defines requirements from the market, with competition against other players as one of the main concerns. It includes perceptions gained from marketing [10] or marketers [23]. Analysing the competitor situation [27] is of particular importance in emerging markets, where competition can be even fiercer than in the company's home market because of the huge number of local fast followers [37] and the barriers of globalisation. Benchmarking [38,39,40] is a technique for gaining and maintaining competitive advantages; it enables the comparison and analysis of performance data between the new product and successful products in the market [41]. Functional decomposition supports the capture of this category, since it is easier to design functional modules than a complete complex product [4].
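Benchmarking comparisons of this kind are often operationalised as a weighted scoring matrix. A minimal sketch; the criteria, weights and scores below are invented for illustration:

```python
# Score the new product and two competitor products on weighted criteria
# (scale 1-5) and rank them by weighted total. All numbers are hypothetical.
WEIGHTS = {"price": 0.4, "capacity": 0.35, "service network": 0.25}
SCORES = {
    "new product":  {"price": 3, "capacity": 5, "service network": 2},
    "competitor A": {"price": 4, "capacity": 3, "service network": 4},
    "competitor B": {"price": 5, "capacity": 2, "service network": 2},
}

def weighted_total(scores):
    """Weighted sum of criterion scores for one product."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

ranking = sorted(SCORES, key=lambda p: weighted_total(SCORES[p]), reverse=True)
for product in ranking:
    print(f"{product}: {weighted_total(SCORES[product]):.2f}")
```

The gap between the new product and the leader on each weighted criterion then points to the requirements that most need strengthening.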
The functional analysis system technique (FAST) diagram [42] supports product function analysis by revealing a product's functionality as a hierarchy.

3.5. Regional infrastructure

Regional infrastructure concerns the infrastructure needed to support the product in its local context of use. On many occasions, products need auxiliary facilities in order to work, and these may lie outside the company's own service frame. For instance, many digital devices require Wi-Fi access and an electric car requires installed chargers; these need to be available in the infrastructure of the intended market. Regional infrastructure requirements are often considered constraints on the product solution space. Very little literature has been found on generating requirements from the regional infrastructure (e.g. [29] cited from [31]). One explanation is that regional infrastructure is normally touched upon in user requirement studies because of its influence on the way users behave and use the product. However, it is meaningful to separate it as a single category because of its geographic differences. Generally, the infrastructure in emerging markets is poorer than in western countries and has identifiable features depending on the context. For instance, in Chinese cities most
people live in high-rise buildings, so fire extinguishing systems should be designed to reach the high floors.

3.6. Organisational infrastructure

This category separates the external part of the organisation from the internal corporate structure. Together with the user category, it covers the external stakeholders [3]. It can include suppliers, local distributors, external manufacturers (if needed), etc. The specific relevant players depend on the company's own case. The Methodology of Organizing Specifications in Engineering (MOOSE) [13, 43] supports the extension of requirements for the corporate and organisational infrastructure categories (the method does not distinguish the two). It consists of three levels of requirements: the functional level (a functional group of the product lifecycle), the task level (tasks that must be done to accomplish the functions), and the attribute level (product attributes that affect tasks). By extending the three levels, a thorough list of requirements can be covered.

3.7. Regulation

The last category comprises the regulations made by governments and authorised organisations. They are critically sensitive for product development and normally have to be fulfilled, especially in certain fields such as the health industry. Few methods were found to support regulatory requirements. According to Gershenson and Stauffer [16], regulatory and technical requirements are less problematic for two reasons: (1) they are well documented and the information is easy to access; (2) they are context-dependent. This can, however, be questioned for emerging markets, especially for regulatory requirements. First, the information can be hard to find and understand due to linguistic gaps and a lack of knowledge about local information channels. Second, negotiating flexible policies and rules and obtaining local approvals requires a local network and lobbyists.
Third, more attention and awareness are needed to protect intellectual property in emerging markets. Hence, the more 'context-dependent' sources might potentially lead to focused studies under certain specific contexts.
4. Discussion

This paper indicates a lack of knowledge in design requirement elicitation for emerging markets. As presented above, user requirements have been the centre of current design requirement studies, whereas few methods have been developed for eliciting requirements from other sources, e.g. regional infrastructure and regulation. Nevertheless, some of those requirement sources are particularly problematic and sensitive when developing products for emerging markets. In addition, the adaptation and suitability of these methods require further discussion and study. First, traditional requirement studies take a long time and large amounts of resources, with the main work done before the development phase of the product development process. This is particularly risky and impractical in emerging markets because of the speed of transition and the poor protection of intellectual property,
where companies can easily be dragged into red-ocean competition with local competitors. Hence, it is worth studying the dynamics and rapidity of design requirement elicitation along the product development process, e.g. a closed loop of dynamic information flow among all stakeholders throughout the product's life cycle. Second, unlike most western countries, one vital feature of emerging markets is their gigantic capacity, e.g. China, India, and Russia. Such large populations are suitable for quantitative studies and big data analysis. In most existing studies the sample size is relatively small; in emerging markets, however, it might be possible to adapt those methods to much larger samples, for which supporting quantitative analysis methods are requisite. Third, the cultural, social and linguistic differences and the geographical distance obstruct the collection and interpretation of design requirements, and methods are needed to bridge those gaps.
5. Conclusions

This paper reviews the sources of design requirements and the methods currently used, through a review of the literature. The review identified a number of sources and methods; however, these were not tailored to emerging markets. Therefore, a classification of design requirement sources with considerations for emerging markets is proposed. Relevant methods used for eliciting requirements from the different sources are named and briefly presented. The paper suggests potential improvements and further development of design requirements for emerging markets. For future work, the proposed classification needs to be validated with industry, and studies are needed on the generation, selection, and validation of design requirement methods.
Acknowledgement The authors acknowledge the support for this research from the Global opportunities for Danish SMEs in Emerging Markets (GODS for EMs) project (funded by Industriens Fond), the Europe-China High Value Engineering Networks (EC-HVEN) project (EU Marie Curie Staff Exchange), and the National Science Foundation of China (51205247).
References
[1] I. Sommerville, Software Engineering (6th edition), Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 2001.
[2] C. Hales, Ten critical factors in the design process, Safety Brief, 19.1 (2001), 1-8.
[3] M.N. Sudin, Understanding the Nature of Specification Changes and Feedback to the Specification Development Process, PhD dissertation, Technical University of Denmark, 2012.
[4] M.J. Darlington and S.J. Culley, Current research in the engineering design requirement, Proc. IMechE Part B: Journal of Engineering Manufacture, 216 (2002), 375-388.
[5] J.R. Jiao and C.H. Chen, Customer Requirement Management in Product Development: A Review of Research Issues, Concurrent Engineering, 14 (2006), 173-184.
120
X. Li et al. / The Sources and Methods of Engineering Design Requirement
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-122
A Method for Identifying Product Improvement Opportunities through Warranty Data
Marcio R. BUENO and Milton BORSATO
Federal University of Technology – Paraná, Av. Sete de Setembro 3165, Curitiba-PR, 80230-901, Brazil
Abstract. Demands imposed by the consumer market and the need to remain competitive have motivated companies to develop more technologically complex products, adding new functions and continuously seeking to improve manufacturing processes. While it is vital for a company to innovate and improve its products, the wise use of engineering resources and the increasing costs associated with engineering changes, especially poorly planned ones, must also be considered. The aim of this paper is to propose a decision-making support method for identifying opportunities for product improvement, based on data extracted from warranty records. The method uses field data, which are analyzed under six different perspectives of failure modes. A company in the auto parts industry has partnered in the present research to provide the context for an application example. The proposed method contributes to the effective planning of engineering capacity, by prioritizing engineering changes with customer-driven value and/or correcting product portfolio management based on market input. With more reliable data, the engineering change decision-making process becomes more effective. Furthermore, the method may help: set the right design requirements for delivering improved products into a particular market; plan for resource allocation on different projects; and highlight critical issues to be dealt with. Keywords. Engineering Change Management; Warranty data; Product idea generation.
Introduction
Changing a product represents a large effort and the consumption of valuable engineering resources. Resource consumption is even greater when a given engineering change is not properly managed: engineering change activities typically consume one-third to one-half of the capacity of an engineering team [1]. Therefore, the large number of product changes, and the fact that even small changes often result in significant costs and delays in development and production, make the ability to effectively manage these aspects a key success factor for the overall product development process [2]. Previous research focused on innovation suggests an extensive list of idea sources for engineering changes, such as customers, competitors, universities, suppliers, other divisions within the same company, and consultants, among others [3].
Corresponding author. Tel.: +55-41-9878-1810; e-mail: [email protected]
M.R. Bueno and M. Borsato / A Method for Identifying Product Improvement Opportunities
However, the cost of generating ideas from each of these sources is quite variable. The cost of identifying opportunities for product improvement based on warranty data is relatively low compared to other idea sources, considering that companies already control warranty claims. Most companies maintain warranty databases for financial reporting purposes and warranty expenditure forecasts. In some cases, there are attempts to extract engineering information (e.g. about the reliability of components) from such databases [4]. However, the use of such databases as an input for product improvement has been overlooked. Hence, the purpose of the present research is to present a method for identifying opportunities for product improvement based on data extracted from warranty records. This has the potential of contributing to idea generation in redesign work. The method establishes a workflow and identifies failures from different perspectives. The application of such a tool may assist decision making for the effective implementation of engineering changes. This paper is structured as follows: a theoretical background is presented in section 1; section 2 presents an overview of the proposed method, brings the context of the application example, and gives a step-by-step description of the example followed by a short discussion. Concluding remarks are presented in section 3.
1. Theoretical Background
The fundamental concepts covered in this section are Engineering Change Management and warranty data.
1.1. Warranty Data
In recent decades, the role and importance of product warranty coverage has changed significantly. Currently, most products are sold with some form of warranty, which is considered important both for manufacturers and customers [5]. From the customer’s perspective, the product warranty provides information about the reliability and quality of the product.
Thus it serves as a kind of insurance policy in case of failure in the early stage of the product’s lifecycle. From the manufacturer’s point of view, the warranty protects against undue claims, and it also works as a marketing tool, differentiating a given product from similar ones manufactured by competitors. Warranty claims databases of manufactured goods keep records of complaints and information about concomitant factors [3]. If built and maintained properly, warranty databases can be used for a variety of purposes, including forecasting future failures, comparing claim records for different product groups, estimating field reliability, and identifying opportunities for improvement in quality and reliability. Although it is common to use the terms “warranty data” and “warranty database” interchangeably, in most applications inferences about the reliability of a product from field data require information from two different databases. One database contains production information, providing a unique identification number (e.g. a VIN - Vehicle Identification Number - or a product serial number), the date and time of manufacture, the assembly line identification and other manufacturing-related data. For some products (e.g. automobiles), such a database may also contain the date of sale [4].
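As a minimal sketch of the two-database setup described above, the fragment below joins hypothetical warranty claims with hypothetical production records on a shared serial number and derives the age of each product at failure. All field names and records are illustrative assumptions, not the partner company's actual schema.

```python
# Sketch: joining a warranty-claims table with a production table on the
# unique identifier, as the text describes. All data are hypothetical.
from datetime import date

# Production database: one record per serial number.
production = {
    "VIN001": {"manufactured": date(2010, 9, 14), "plant": "China"},
    "VIN002": {"manufactured": date(2010, 11, 2), "plant": "Brazil"},
}

# Warranty database: claims reference the same serial numbers.
claims = [
    {"vin": "VIN001", "failed": date(2011, 3, 1), "component": "magnet group"},
    {"vin": "VIN002", "failed": date(2011, 6, 20), "component": "valve set"},
]

def age_at_failure_days(claims, production):
    """Attach manufacturing data to each claim and compute age at failure."""
    enriched = []
    for c in claims:
        prod = production[c["vin"]]  # join on the unique identifier
        enriched.append({
            **c,
            "plant": prod["plant"],
            "age_days": (c["failed"] - prod["manufactured"]).days,
        })
    return enriched

for row in age_at_failure_days(claims, production):
    print(row["vin"], row["component"], row["age_days"])
```

Without the production database the age at failure (or mileage, plant, and assembly line) cannot be recovered from the claim alone, which is the point the text makes.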
Often there is a lack of information in warranty databases: the exact number of units that had the problem, or the correct mileage of the vehicle that failed, may be unknown, or information about a certain failure is recorded only for the specific units that presented problems and not for all related units [6]. Additionally, the databases are, for the most part, developed to account for complaints, not designed for the statistical evaluation of these records. For statistical analysis to be conducted, it is necessary to acquire information from complementary databases, such as a manufacturing database.
1.2. Engineering Change Management
Several different definitions of the engineering change process (ECP) can be found in the scientific literature; they vary according to the context of the phenomenon. Some authors consider that the process of implementing engineering changes only exists for a product that is already being manufactured at scale. For example, a change in one component of a product after the product goes to production is subject to Engineering Change Management (ECM) [7]. Thus, ECM is the process of organizing, controlling and managing the workflow and information for engineering change [8]. An important factor that influences the speed and quality of ECM is the propagation of change, which results from the connection between the component that is modified and its interfacing components. Parts and systems, especially in complex products, can be highly interconnected, depending on how the products are modularized. Automobiles, aircraft, boats and helicopters have extremely complex connections between their components, and changes in one part or system can propagate to other parts or systems. In other words, the way a system is structured affects how a part or system reacts to change.
An effective approach to managing engineering changes must provide functionality that: (i) tracks the impact of a change on the elements of the product structure; (ii) identifies the people to be informed, both within the enterprise and across enterprise boundaries; (iii) determines a reasonable sequence of people to be notified; and (iv) follows a workflow for approval (sorted by activity) with the participation of everyone involved in or affected by the change [9].
2. Methodological Aspects
The next sections present how the proposed method was conceived, from opportunity detection to method conception and proof testing by means of an application example.
2.1. Opportunity
The present research was motivated by the challenges encountered in complex situations that require decision-making on the implementation of changes for redesign at a partner company from the auto parts industry. The assessment of how the company could identify opportunities for product improvement from warranty databases nurtured the need for methods to support the task. In the partner company, one could easily observe the difficulty of the engineering teams in quantifying certain occurrences of failure in order to prioritize critical issues,
do capacity planning and review project requirements. For this matter, interviews with designers were planned and carried out for the purpose of collecting information about the current problems and understanding the situation properly.
2.2. Method Outline
Based on observation and interviews, the general steps in the process of identifying opportunities for engineering changes were defined, as well as the expected outcomes of each step. The flowchart presented in Figure 1 shows the logical sequence of tasks that comprise the method. It highlights six basic elements, named blocks:
1. Begin of method
2. Analyze field performance of product through warranty data
3. Identify opportunities for product improvement
3.1. (P1) Identify and fix deviations related to company process
3.2. (P2) Identify main failures caused by customer process
3.3. (P3) Identify concentration of failures on customers
3.4. (P4) Identify concentration of failures on manufacturing plants
3.5. (P5) Identify concentration of failures on part numbers
3.6. (P6) Identify concentration of failures on geographic regions
4. Define/implement necessary engineering changes
5. End of method
Figure 1. Method outline.
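The block structure of Figure 1 can be sketched as a simple stratification loop: each perspective P1-P6 is a grouping key applied to the same set of field claims. The field names and claim records below are illustrative assumptions, not the company's real warranty schema.

```python
# A minimal skeleton of the workflow in Figure 1, with each perspective
# (P1-P6) as a grouping key over hypothetical claim records.
from collections import Counter

PERSPECTIVES = {
    "P1": "warranty_decision",   # deviations related to company process
    "P2": "failure_type",        # failures caused by customer process
    "P3": "customer",            # concentration on customers
    "P4": "plant",               # concentration on manufacturing plants
    "P5": "part_number",         # concentration on part numbers
    "P6": "region",              # concentration on geographic regions
}

def analyze(claims):
    """Blocks 2-3: stratify field claims under the six perspectives."""
    return {p: Counter(c[field] for c in claims)
            for p, field in PERSPECTIVES.items()}

claims = [
    {"warranty_decision": "customer misuse", "failure_type": "worn out",
     "customer": "FGS", "plant": "China", "part_number": "078", "region": "China"},
    {"warranty_decision": "no failure found", "failure_type": "worn out",
     "customer": "FGS", "plant": "China", "part_number": "170", "region": "China"},
]

report = analyze(claims)
print(report["P3"].most_common(1))  # most complained-about customer
```

Block 4 (defining and implementing the engineering changes) then works from whichever perspectives show a concentration in the report.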
In Block 2, the analysis of field product performance with warranty data is carried out. In Block 3, opportunities for product improvement are identified through the comparison of design requirements with product performance in the field, and project requirements are analyzed and/or reviewed. In Block 4, the resulting engineering changes are defined, evaluated and implemented.
2.3. Application Example
In the present work, three real cases at the partner company were investigated. In each case, opportunities for product improvement were found by evaluating and detecting patterns of failures under six different perspectives, based on the stratification of data from field warranty records: (i) origin of complaints; (ii) main problems caused by customers; (iii) influences of customers’ processes and applications; (iv) influences of product manufacturing; (v) influences of components’ use versus part number; and (vi) geographic location of the fault and application environment. The evaluation and identification of failure patterns seek to determine whether a fault is related to, or concentrated in, one of the perspectives mentioned above. A high absolute number of occurrences under these perspectives tends to represent higher costs associated with the image of the product and the organization, i.e. maintaining the support structure for warranty analysis, such as service engineers and technicians for
analyzing the parts, plus storage, receiving, shipping, transport, laboratory, machinery and other associated field costs. Fixing manufacturing or assembly faults, which are recognized as issues caused by the company, should be prioritized, since they are the most costly to the organization. In the case of manufacturing or assembly faults recognized by automotive companies, the various costs associated with low quality are transferred, partially or entirely, to the supplier of the specific part that caused the failure. These costs comprise campaign costs, transporting parts to a laboratory, towing a vehicle to a workshop, conducting workshop analyses, dealer or workshop human resources, and financial losses caused by the inactivity of a commercial vehicle, among others. For example, failures in the injection system of a vehicle may lead to consequential damage to the engine or vehicle. This means that if an injector leaks fuel due to a manufacturing fault and, hypothetically, in an extreme case, causes the explosion of an engine or vehicle, the liability is assigned to the vehicle manufacturer, who transfers part of this responsibility and cost to the supplier whose part caused the problem. In this sense, although the supplier only manufactures the injector, it may have to respond to warranty costs as high as three hundred times the value of the part, besides compensation claims.
2.3.1. Block 1 - Begin of Method
The performance of the Injector X platform, already in use in the Chinese market, was assessed in order to verify whether the design requirements defined within the organization were suitable for this market. Thus, potential opportunities for product improvement could be identified for avoiding failures that could occur under specific operating conditions in China, and ultimately the changes could be implemented in the new Injector Y platform, currently under development.
2.3.2.
Block 2 - Analyze field performance of product through warranty data
In this example, occurrences of complaints from the field in 2011 were searched in the warranty database. The year 2011 was chosen to account for a time span of one and a half years, so that the supplied parts had been assembled and the queries provided a representative number of incidents. It is common sense within the organization that timeframes of less than one year show no significant occurrences of faults that could be relevant for the study. The resulting data were stratified and analyzed using charts from different perspectives.
2.3.3. Block 3 - Identify opportunities for product improvement
Block 3.1 (P1) - Identify and fix deviations related to company process
The first aspect to be considered refers to the rate of warranty decisions, called Perspective 1 (P1). Activity 1 of Perspective 1 evaluated the warranty decision rate through a pie chart. The failure rate recognized in warranty is virtually zero, totaling nine occurrences. Meanwhile, failures caused by customers due to misuse of the product represent 78% of the total occurrences, while claims where the product did not show any failure represent 22%. In Activity 2, the analysis of the top failures recognized in warranty was performed, as shown in Figure 2. Only occurrences of failures of components in the magnet group
and valve set were observed. These components are mounted in fuel injectors of platform X. The eight failures attributed to the magnet group were related to a faulty welding connection within this component, and the single failure attributed to the valve set was an obstruction caused by metallic particles found inside a pressure bore. In Activity 3, represented by Figure 3, failure rates are verified with respect to the manufacturing date of the product, by evaluating concentration spots of failure or possible trends of increasing field problems over time. It was observed that the failure occurrences on both the valve set and the magnet group were concentrated in a single month, indicating a specific manufacturing or assembly problem.
Figure 2 – P1, Activity 2: Top failed components (magnet group: defective weld at the welding joint connection; valve set: particle contamination of the pressure bore).
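The warranty-decision shares evaluated in P1, Activity 1 amount to a simple percentage computation over decision categories. The counts below are illustrative assumptions chosen only to reproduce the shares reported in the text (78% customer misuse, 22% no failure found, and nine recognized failures that round to 0%).

```python
def decision_rates(counts):
    """Percentage share of each warranty decision, rounded to whole percent."""
    total = sum(counts.values())
    return {k: round(100 * v / total) for k, v in counts.items()}

# Illustrative counts matching the shares reported in the text; failures
# recognized in warranty are virtually zero (nine occurrences).
counts = {
    "caused by customer": 7800,
    "no failure found": 2200,
    "recognized in warranty": 9,
}
print(decision_rates(counts))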
Figure 3 – P1, Activity 3: most failed components per manufacturing date (Sep 2009 to Nov 2010; defective weld at welding joint; particle contamination of pressure bore).
The last activity in this analytical perspective, represented by Figure 4, tried to determine the moment in the useful life of the fuel injectors when failures were occurring. For the prediction of failures, it is assumed that a commercial vehicle runs about 100,000 kilometers per year. Considering that the failures occur within 30,000 kilometers, it was assumed that new occurrences of these failures were no longer expected after four months from the vehicle’s date of sale.
Based on the analysis results regarding warranty decisions, it was concluded that only two failure types were recorded: (i) magnet group with defective weld; and (ii) valve pressure bore blocked by a particle. Both failure types have been corrected, and new occurrences of field failures related to these cases are not expected.
Block 3.2 (P2) - Identify main failures caused by customer process
The first activity of this perspective, represented by the chart in Figure 5, lists the five most frequently failed components and the corresponding failure types. The valve set and the nozzle had most of the identified failure occurrences, and the failure type on these two components was mostly wear. The analyses performed in the following activities and perspectives used, as a filter, the platform X fuel injectors that showed wear-related failures on the valve set and nozzle. After determining which components and related faults were to be inspected, the analysis of failure rates by manufacturing date was performed, represented by Figure 6.
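The Pareto ranking used in P2, Activity 1 amounts to summing claimed quantities per (component, failure type) pair and keeping the top entries. The quantities below are illustrative, not the company's real figures.

```python
# Sketch of the P2 Activity 1 Pareto: rank (component, failure type)
# pairs by claimed quantity and keep the top N. Data are hypothetical.
def pareto(claims, top_n=5):
    """Return the top-N (component, failure_type) pairs by claimed quantity."""
    totals = {}
    for component, failure_type, qty in claims:
        key = (component, failure_type)
        totals[key] = totals.get(key, 0) + qty
    return sorted(totals.items(), key=lambda kv: -kv[1])[:top_n]

claims = [
    ("valve set", "worn out", 4100),
    ("injector nozzle", "worn out", 3300),
    ("injector body", "contamination - particle", 600),
    ("magnet group", "worn out", 250),
    ("valve set", "function - stuck", 900),
]
for (component, failure), qty in pareto(claims, top_n=2):
    print(component, failure, qty)
```

The top entries of this ranking define the filter (here, wear on the valve set and nozzle) applied in all subsequent activities.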
No specific concentration of failures was observed, but rather frequent occurrences of wear on both analyzed components.
Figure 4 – P1, Activity 4: histogram of most failed component mileage (failure mileage in km; defective weld at welding joint; particle contamination of pressure bore).
Figure 5 – P2, Activity 1: Pareto of top 5 components (top 4 failure types: valve set - stuck; injector nozzle - broken; injector body - particle contamination; magnet group - worn out).
The outcome of the last activity of this analytical perspective is represented in Figure 7. The task tried to evaluate the moment in the useful life of the fuel injectors when the failures caused by customers were occurring. Problems related to wear in the valve set and nozzle components were not expected at low to medium mileages, under 70,000 kilometers, as presented in Figure 7. The wear problem on those components could be interpreted as critical and may ultimately have compromised the image of both the product and the company, since customers more easily notice failures at low mileages.
Figure 6 – P2, Activity 2: amount of failures per manufacturing date (2008-2011; valve set and injector nozzle, worn out/friction/stress).
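The concentration tests run repeatedly in these activities (P1 Activity 3, P2 Activity 2, and the per-plant, per-part-number and per-country checks later) share one idea: group claims by manufacturing month and flag any month holding a disproportionate share, since that would point at a specific manufacturing or assembly problem rather than a systemic weakness. The threshold and data below are assumptions for illustration.

```python
# Sketch of the concentration test: flag manufacturing months whose
# claim share exceeds a threshold. Threshold and data are hypothetical.
from collections import Counter

def concentrated_months(month_of_claim, share_threshold=0.5):
    """Return months whose claim share exceeds the threshold."""
    counts = Counter(month_of_claim)
    total = sum(counts.values())
    return [m for m, n in counts.items() if n / total > share_threshold]

# P1-style case: failures concentrated in a single production month.
p1 = ["2010-03"] * 8 + ["2010-05"]
# P2-style case: wear failures spread evenly across many months.
p2 = ["2009-%02d" % m for m in range(1, 13)]

print(concentrated_months(p1))  # one suspicious month -> process problem
print(concentrated_months(p2))  # no concentration -> systemic weakness
```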
An analysis relating the amount of failures (y-axis) to the failure date (x-axis) was carried out, considering the top 2 failures. No specific concentration of failures was observed, but rather frequent occurrences of wear on both analyzed components. This meant that the product had no significant failure concentration at certain times of the year.
Block 3.3 (P3) – Identify concentration of failures on customers
In the first activity of this analysis perspective, the amount of failures per application or vehicle type is evaluated. With this chart, customer “FGS” was identified as the one with the most occurrences of complaints related to wear on the valve set and nozzle components. In the next activity of this perspective, possible concentrations of failure occurrences by manufacturing date of the fuel injector are investigated: the most failed applications from customer “FGS” related to wear on both the valve set and the nozzle are evaluated. No relevant concentration of failures was observed, which could have indicated that the problem was being caused at the customer’s assembly line over a given period.
Figure 7 – P2, Activity 4: histogram of top 2 most failed component mileage (failure mileage in km, bins from 1-10000 up to 550001-560000; valve set and injector nozzle, worn out/friction/stress).
Block 3.4 (P4) - Identify concentration of failure on manufacturing plants
A Pareto chart was produced for the first activity in this perspective. A careful analysis of this chart associates the manufacturing plant in China with most of the complaints about wear on the valve set and nozzle components. This is endorsed by the fact that most of the claimed part numbers are produced in the China plant. In the next activity of this perspective, possible concentrations of failure occurrences by manufacturing date were investigated for wear on the valve set and nozzle, considering the top 2 manufacturing plants. No significant concentration of failure occurrences was observed, which would have indicated a problem caused by a single plant or by a period of production in that plant.
Block 3.5 (P5) - Identify concentration of failure on part number
The first activity in this perspective is a Pareto chart showing the amount of failures per injector part number. Injectors 078 and 170 were identified as the part numbers associated with the most complaints about wear on the valve set and nozzle components. The high number of complaints related to these two part numbers is explained by their proportionally high delivery volume: these part numbers are delivered in large volumes to the Chinese market, and consequently a greater number of claims against them are registered. The next activity in this perspective investigated possible concentrations of failure occurrences by manufacturing date of the fuel injectors that showed wear on the valve set and nozzle, considering the top 2 part numbers. No significant concentration was observed, so it was concluded that the high concentration of complaints in these two part numbers derived from their proportional sales volume.
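The reasoning in P5 - that raw claim counts track delivery volume - can be made explicit by normalizing claims per delivered unit before comparing part numbers. The volumes and counts below are hypothetical, chosen so that the normalized rates come out similar, as the text concludes.

```python
# Sketch of the P5 normalization: claims per 1,000 delivered units, by
# part number. All figures are hypothetical.
def claim_rate_per_thousand(claims, deliveries):
    """Claims per 1,000 delivered units, by part number."""
    return {pn: round(1000 * claims[pn] / deliveries[pn], 2) for pn in claims}

claims = {"078": 2600, "170": 2200, "305": 90}
deliveries = {"078": 650_000, "170": 550_000, "305": 22_000}
rates = claim_rate_per_thousand(claims, deliveries)
print(rates)
```

When the normalized rates are comparable across part numbers, a high absolute claim count is a volume effect rather than a part-specific defect, which is exactly the conclusion drawn for injectors 078 and 170.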
Block 3.6 (P6) - Identify concentration of failure on geographic region
In the outcome of the first activity in this perspective, China was identified as the market with the most claims related to wear on the valve set and nozzle components. This is explained by the fact that the local plant manufactures the part numbers mostly used in China. Then, an analysis plotting the amount of failures by manufacturing date was performed, considering the top 2 countries where the failures occurred. No
significant concentration was found, which could have indicated that the problem was being caused in one of these countries. The largest amount of claimed parts is explained by the fact that the local plant produces most of the part numbers used in China.
2.3.4. Block 4 - Define/implement necessary engineering changes (and discussion)
As a result of applying the proposed method, it was observed that more than 14,000 platform X injectors were claimed in China in 2011. Within these occurrences, there were no recurring faults recognized under warranty that needed to be corrected. Thus, engineering resources could be used to identify opportunities for product improvement, preventing the failures that were caused by misuse of the product and that represent 78% of the total amount of failures. When analyzing the failures caused by customers (P2), the product proved to be vulnerable to wear on the injector nozzle and valve set components. The other analyses (i.e. perspectives) did not indicate concentrations of failures caused by customers (P3), manufacturing plants (P4) or specific part numbers (P5), which means that the product has a general weakness in the Chinese market or needs a further review of design requirements. In this hypothetical example, each warranty analysis costs on average US$400.00, and 5,717 fuel injectors were returned due to warranty problems with wear on the valve set and nozzle in 2011; that would represent yearly expenses of over US$2.2 million. Thus, through the application of the proposed method, an opportunity to improve the injector nozzle and valve set components with respect to their wear resistance in the Chinese market was identified.
3. Concluding Remarks
It is vital for a company to innovate and improve its products, but the wise use of engineering resources and the other costs associated with engineering changes, especially poorly planned ones, must be considered.
The method for identifying opportunities for product improvement described in this paper can be used to refine the Engineering Change process by proposing changes actually perceived and desired by customers, assisting in planning, prioritizing critical issues and reducing associated costs. The present research demonstrates the value of gathering data for the evaluation of technical aspects related to the reliability and quality of the product in the field, in order to assist decision making within the Engineering Change Management process. The availability of a warranty database and knowledge of its structure provide the means for performing a wide variety of analyses and stratifications of these data, which are important sources of information for decision making within an organization. The end result was the development of the proposed method: a logical sequence to identify opportunities for product improvement by evaluating and detecting patterns of failures under six different perspectives, based on the stratification of data from product warranty records in the field, as highlighted in Figure 1: (i) origin of complaints; (ii) main problems caused by customers; (iii) effects of processes and applications to customers; (iv) effects of product manufacturing; (v) part number investigation; and (vi) geographical location of fault and application environment. To demonstrate the applicability of the proposed method during the product development process, an application example within a pre-established scenario
was shown. Through the example, it was concluded that the proposed method can help identify opportunities for product improvement.

From the scientific point of view, this work contributes to improving decision-making methods for Engineering Changes, providing a tool to support the evaluation of technical and economic aspects and stimulating a new understanding of product redesign, enhancing the reliability and product quality perceived by the customer. The method of identifying opportunities for product improvement through warranty record data offered in this work should be applied to other real cases of analysis, in order to refine its use and validate the results.

References
[1] C. Terwiesch and C.H. Loch, Managing the process of engineering change orders: the case of the climate control system in automobile development, Journal of Product Innovation Management 16 (1999), 160-172.
[2] A. Wasmer, G. Staub, and R.W. Vroom, An industry approach to shared, cross-organisational engineering change handling - The road towards standards for product data processing, Computer-Aided Design 43 (2011), 533-545.
[3] J. Lawless, Statistical Analysis of Product Warranty Data, International Statistical Review 66 (1998), 41-60.
[4] H. Wu and W.Q. Meeker, Early detection of reliability problems using information from warranty databases, Technometrics 44 (2002), 120-133.
[5] A. Hussain and D. Murthy, Warranty and optimal reliability improvement through product development, Mathematical and Computer Modelling 38 (2003), 1211-1217.
[6] J. Lawless, J. Kalbfleisch, and S. Blumenthal, Some issues in the collection and analysis of field reliability data, in: Survival Analysis: State of the Art, Springer, 1992, pp. 141-152.
[7] I. Wright, A review of research into engineering change management: implications for product design, Design Studies 18 (1997), 33-42.
[8] V. Kocar and A. Akgunduz, ADVICE: a virtual environment for engineering change management, Computers in Industry 61 (2010), 15-28.
[9] K. Rouibah and K.R. Caskey, Change management in concurrent engineering from a parameter perspective, Computers in Industry 50 (2003), 15-34.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-132
A Closed-loop PLM Model for Lifecycle Management of Complex Product

GUO Wei a,b, ZHENG Qing a, ZUO Bin b,1 and SHAO Hong-yu a
a Tianjin Key Laboratory of Equipment Design and Manufacturing Technology, Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, Tianjin 30072
b Management and Economic School, Tianjin University, Tianjin 30072
Abstract. Complex products are characterized by variety, a long life cycle, designs that are difficult to verify, strong target orientation and dynamics, and complex operating environments and statuses. Existing PLM (Product Lifecycle Management) systems can hardly meet the demands of product design, manufacturing services and MRO (Maintenance, Repair and Overhaul/Operation) effectively. A complex product lifecycle management model is therefore put forward. In this model, the maintenance closed-loop, operation closed-loop and design closed-loop combine to form an intelligent triple closed-loop PLM. Based on the data collected through a variety of sensors and information systems from different stages of the product life cycle, the model can be used to study the flow of data and the feedback methods across stages and organizations. By combing through the data flow, the data feedback points can be found, enabling product lifecycle data to be used cyclically. The result is of important guiding significance to the lifecycle management of complex products.

Keywords. Closed-loop PLM; Management Mode; Data Flow
Introduction

The development of complex products requires lifecycle data feedback to support optimization and innovative design and to provide data support for product MRO. The data related to a complex product change constantly at all stages of the life cycle, especially during the running process and in the operating environment [1]. In order to manage the lifecycle data of complex products effectively, data from the late lifecycle stages must be fed back to the early lifecycle stages, guiding complex product design, processing services and intelligent MRO services; a closed-loop PLM system model for complex products therefore needs to be established to implement this feedback use of product data. In this closed loop, all actors across the whole lifecycle can use, manage and control product-related data, including information from after a product's delivery to the customer until its final destiny. With this closed loop, the data goes back to designers and production engineers, and the data flow can be closed over the whole product lifecycle. Closed-loop PLM was first proposed in the EU's Product lifecycle Management and Information tracking using Smart Embedded Systems (PROMISE) project in 2004
Corresponding Author.
[2]. It can achieve effective management of PLM activities by using product data to complement PLM and by dynamically optimizing each stage of the product life cycle [3]. Jun studied the principles of data exchange between the different stages of the product life cycle, proposed a framework described by resources, and created process-flow and logistics closed-loop models [2]. Kiritsis put forward a closed-loop management framework based on communications technology and smart products [4]. Matsokis established an ontology-based semantic object model that supports closed-loop PLM product data and knowledge management [5]. Combining business operations, Rostad studied the impact and benefits that a closed-loop PLM system brings to the business model [6]. Georgiadis and Jun studied the areas in which closed-loop PLM is applied, such as predictive maintenance and re-manufacturing [7]. In China the research has focused on phased models and applications, such as the design change closed-loop [8], quality control closed-loop [9], recycling manufacturing closed-loop [10] and production management closed-loop [11]. Professor Wang Xu proposed that closed-loop PLM is an effective strategy for managing product lifecycle activity information: by obtaining the data associated with the product lifecycle and integrating, transforming and sharing these data, it manages the information flow across organizations throughout the lifetime [12]. Until now, the study of closed-loop PLM systems has focused on reviews and phased closed-loop systems, and a large closed-loop model from product MRO back to product design has not been established yet. Combining the demands of complex products with PLM development status and trends, this article proposes a complex product triple closed-loop PLM system model to manage the whole product lifecycle in a detailed and intelligent way.
1. Complex Product Triple Closed-loop PLM Model

The complex product closed-loop PLM system model has three subcycles, shown in Figure 1. It integrates product lifecycle data, especially the actively generated and automatically sensed data from the use phase, and manages these data by category and uses them in a targeted way. With an effective data feedback mechanism, the data can flow among every stage and organization of the product lifecycle conveniently and correctly. In the operation intelligent closed-loop, product data acts back on operation: according to the processing environment and processing status, the product completes self-adjustment and adaptation. In the maintenance intelligent closed-loop, product data acts back on product maintenance, to analyze product health, diagnose faults and form a knowledge-based MRO service system. In the design closed-loop, product data is fed back to product design, supporting product innovation and optimization and better serving product development.
Figure 1. Complex Product Triple Closed-loop PLM Model
• Complex product operation intelligent closed-loop model.
In the operation intelligent closed-loop, the product status data reacts on the running process. Depending on the operating environment and state, the product can realize self-adjustment and adaptation. This addresses the problems of complex operating conditions, the environment's large impact on product performance, and the product's lack of self-adaptation and self-tuning, ultimately improving the product's operational stability.
• Complex product maintenance intelligent closed-loop model.
In the maintenance intelligent closed-loop, a knowledge-based MRO system is formed from troubleshooting knowledge, experience and operating data, to analyze product health, diagnose faults and provide product maintenance and repair plans. This addresses the problems of difficult debugging, high reliability requirements, difficult fault diagnosis, poor scope for manual intervention and low failure-warning capability, ultimately improving operational reliability and extending the product's life.
• Complex product design closed-loop.
Traditional design pays little attention to operation, maintenance, repair and failure analysis; lacks validation and optimization of the original design's structure; can hardly meet the requirements of single-unit custom design, high precision and high reliability; and lacks process design and guidance during the product use phase. We therefore build the design closed-loop based on MRO data feedback. In this cycle, product data acts back on product design, supporting product optimization and innovation.
W. Guo et al. / A Closed-Loop PLM Model for Lifecycle Management of Complex Product
135
2. Data flow of closed-loop feedback

2.1. Life Cycle Data

In order to manage and store the data conveniently, we divide product lifetime data into four types: design and manufacture data, operation status data, environment status data and failure data.

Design and manufacture data includes product design and manufacturing process data. The design data constitutes the product's original theoretical model, and the manufacturing data constitutes the actual model. Design and manufacturing data is provided directly by the company's design and manufacturing departments.

Operation status data contains all data generated during operation, such as operating strength, running time, frequency, parts wear and so on. It is collected by sensors embedded in the product and stored and analyzed in the product's electronic control unit (ECU).

Environment status data is composed of the environmental status of the product and the condition data of the processed object: operating temperature, humidity, voltage, current and other properties. It is likewise collected by the embedded sensors and stored and analyzed in the ECU.

Failure data is generated during product repair and maintenance, including the failure mode, maintenance program, maintenance procedures, repair results, etc., recording every repair and maintenance operation. Failure data accumulates across the same or similar products and expands as the number of products increases and the product life cycle extends.

2.2. Data Flow Model
• Operation intelligent closed-loop data flow
An ECU system module with detection functions is embedded in the product. It acquires operational status data and extracts feature data in real time, including environment information, operating information and life information of key components. After integrating and analyzing these data, it identifies the machine's current working status and gives early warnings for non-secure states. At the same time, it deals with unexpected exceptions by intelligently controlling the machine's start-stop state and switching its operating mode. The data flow runs between environment status data and operation status data, as shown in Figure 2.
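As a purely illustrative sketch of this monitoring-and-adjustment logic (the sensor names, thresholds and control actions below are invented for the example, not taken from the model):

```python
# Hypothetical safe operating bands for three sensed quantities.
OPERATING_LIMITS = {"temperature_c": (0.0, 85.0),
                    "humidity_pct": (10.0, 90.0),
                    "vibration_hz": (0.0, 120.0)}

def classify_state(sample: dict) -> list:
    """Return the readings that fall outside their safe operating band."""
    return [name for name, (lo, hi) in OPERATING_LIMITS.items()
            if not lo <= sample[name] <= hi]

def control_action(sample: dict) -> str:
    """Map the current state to a coarse self-adjustment decision."""
    violations = classify_state(sample)
    if not violations:
        return "continue"
    if "vibration_hz" in violations:
        return "stop"            # safety warning plus automatic stop
    return "adjust"              # e.g. switch operating mode, servo compensation

print(control_action({"temperature_c": 92.0, "humidity_pct": 40.0,
                      "vibration_hz": 30.0}))   # -> adjust
```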
Figure 2. Data Flow of Operation Intelligent Closed-loop
The ECU module monitors the operating environment of the product and obtains environmental condition data through a variety of sensors, including temperature, humidity and vibration sensors. Meanwhile, it gets the operating state data, including operating strength, running time, vibration frequency and so on. The ECU module comprehensively analyzes the data obtained. With the result, it can determine the current status of the product, test the product's operating condition, make adjustments, issue instructions accordingly, and provide safety warnings, automatic control, environmental improvement and servo compensation.
• Maintenance intelligent closed-loop data flow
The complex product maintenance closed-loop consists of health analysis, fault diagnosis and an MRO management system. The data flow forms a closed loop among environment status data, operation status data and failure data, as shown in Figure 3.
[Figure 3 depicts the maintenance intelligent closed-loop data flow: product run data and environmental data feed condition monitoring and anomaly detection; status analysis data, fault model data and history maintenance information support health analysis, failure prediction, self-diagnosis and remote assistance; maintenance plans (preventive, breakdown, corrective, overhaul) are implemented with standby redundancy and maintenance personnel; and the maintenance resulting data (costs, time, result, customer satisfaction) is fed back for continuous optimization and improvement.]

Figure 3. Data Flow of Maintenance Intelligent Closed-loop
Combined with the historical status, operating states and environmental factors of complex products, product status characteristics are extracted to analyze the state, evaluate the current health state, and diagnose faults. After the fault sources and fault locations are identified, these data are used to match a failure maintenance strategy, and the maintenance mode is determined. The fault is then repaired in accordance with a maintenance program. In the repair process, data on spare parts, maintenance personnel and other resources is provided in support. After the repair, the data generated during maintenance is analyzed to evaluate the process and the results, including maintenance costs, time, customer satisfaction, etc. Meanwhile, the data is fed back to continuously optimize and improve repair and maintenance. Finally, product failure data is fed back to the running products, continually enriching the failure data and providing data support for future maintenance.
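A minimal sketch of the strategy-matching step in this loop might look as follows; the fault names and strategy table are hypothetical, not part of the model:

```python
# Hypothetical fault-to-strategy table; real strategies would come from the
# accumulated failure data and maintenance knowledge described above.
MAINTENANCE_STRATEGIES = {
    "bearing_wear":  "preventive maintenance",
    "seal_leak":     "corrective maintenance",
    "motor_failure": "breakdown maintenance",
}
failure_history: list = []   # grows with every repair, supporting future diagnoses

def plan_maintenance(fault: str) -> str:
    """Match a diagnosed fault to a maintenance mode and record the repair."""
    strategy = MAINTENANCE_STRATEGIES.get(fault, "overhaul")  # default: overhaul
    failure_history.append({"fault": fault, "strategy": strategy})
    return strategy

print(plan_maintenance("bearing_wear"))   # -> preventive maintenance
```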
• Design closed-loop data flow
Because a complex product has a complex structure and high reliability requirements, potential failure mode and effects analysis (FMEA) should be performed in the design phase. Traditional d-FMEA only considers the feed-forward process from the design stage to the use stage [13]; it is difficult to verify the correctness of the original design, and comprehensive optimization can hardly be achieved. Through the design closed-loop based on MRO data feedback, it is possible to offer engineering design services for key processes of complex parts. In this cycle, the design is optimized from both the product design and engineering aspects. The data flow runs among the design and manufacture data, operation status data and failure data, as shown in Figure 4.
[Figure 4 depicts the design closed-loop data flow: failure data fed back from maintenance (failure mode, fault location, failure frequency, failure cause, maintenance mode) drives a d-FMEA that improves design flaws, raises reliability, makes up for life defects, prolongs product life and improves the user experience in product design; operation data on manufacturing process elements (material, humidity, temperature, cutter, cutting parameters, quality) drives engineering design that sets up a process repository, optimizes the processing object model and provides technology solutions.]

Figure 4. Data Flow of Design Closed-loop
Through failure data analysis, the failing parts and the reasons for failure are found. The next time the product is designed, new methods and techniques are used to correct design flaws, make up for life defects and enhance the user experience. Product performance largely depends on the operating environment and the processed objects, and the running mode also differs with different environments and objects. Operation data is analyzed, a product operation optimization model is built and a product processing library is formed. This makes it possible to offer different technology solutions for different running processes and objects.
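The failure-frequency analysis described here can be sketched in a few lines; the failure records below are invented for illustration:

```python
from collections import Counter

# Invented failure records: (failed part, failure mode) pairs fed back
# from the maintenance closed-loop.
failures = [("nozzle", "wear"), ("valve_seat", "wear"),
            ("nozzle", "wear"), ("housing", "crack")]

# Rank failure locations by frequency so redesign effort targets the
# most common flaws first.
by_part = Counter(part for part, _mode in failures)
print(by_part.most_common())   # "nozzle" ranks first, with 2 occurrences
```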
3. Conclusion

A closed-loop PLM system for complex products can manage product data and make full use of lifecycle data to reflect the operation status, provide repair and maintenance services, and feed information back to product design. It will be applied by firms increasingly. In this paper, we present a closed-loop PLM system for complex products. It includes three cycles, which cover the total lifecycle: the operation intelligent closed-loop, the maintenance intelligent closed-loop and the design closed-loop. Finally, we elaborate the circulation modes and pathways of each closed loop's data across stages and organizations. In essence, it is a new model for managing and using product lifecycle data efficiently.
Acknowledgments This work was undertaken as part of a sponsored research program under the National High Technology Research and Development Program of China (Grant No. 2013AA040605), and supported by National Science and Technology Supporting Program (Grant No. 2012BAF12B05).
References
[1] Research on Major Technical Equipment Manufacturing Industry Development, China's Productivity Development Research Report on 2007-2008, 2009.
[2] Jun H B, Kiritsis D, Xirouchakis P. Research issues on closed-loop PLM. Computers in Industry, 2007, 58(8): 855-868.
[3] Jun H B, Shin J H, Kiritsis D, et al. System architecture for closed-loop PLM. International Journal of Computer Integrated Manufacturing, 2007, 20(7): 684-698.
[4] Kiritsis D, Bufardi A, Xirouchakis P. Research issues on product lifecycle management and information tracking using smart embedded systems. Advanced Engineering Informatics, 2003, 17(3): 189-202.
[5] Matsokis A, Kiritsis D. An ontology-based approach for Product Lifecycle Management. Computers in Industry, 2010, 61(8): 787-797.
[6] Røstad C C, Myklebust O, Moseng B. Closing the product lifecycle information loops. 18th International Conference on Production Research, Fisciano, Italy, 2005.
[7] Georgiadis P, Athanasiou E. The impact of two-product joint lifecycles on capacity planning of remanufacturing networks. European Journal of Operational Research, 2010, 202(2): 420-433.
[8] Wang Xiaocui. Engineering Change Process Control Based on PLM. Jinan: Shandong University, 2009.
[9] Liu Jingjun, Sun Quan, Zhou Jinglun. Research of Product Quality Close Loop Management System. Microcomputer Information, 2006, 22(18): 140-140.
[10] Ma Demin. Research on Compatibility Mechanism of Manufacturing/Remanufacturing Closed-loop Supply Chain System. Information Technology & Standardization, 2008, 3: 42-46.
[11] Wang Ming. Design and Implementation of Production Management System Based on Universal Software Development Platform. Xi'an: Xidian University, 2007.
[12] Wang Xu, Li Wenchuan. New Concept for Manufacturing Industry——Closed-loop Product Lifecycle Management. China Mechanical Engineering, 2010 (14): 1687-1693.
[13] Chang Kuei-Hu, Wen Ta-Chun. A novel efficient approach for DFMEA combining 2-tuple and the OWA operator. Expert Systems with Applications, 2010, 37(3): 2362-2370.
Part III Knowledge-Based Engineering
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-143
A Knowledge-Based Approach for Facilitating Design of Curved Shell Plates' Manufacturing Plans

Jingyu SUN a,1, Kazuo HIEKATA a, Hiroyuki YAMATO a, Norito NAKAGAKI b and Akiyoshi SUGAWARA b
a Graduate School of Frontier Sciences, The University of Tokyo, Japan
b Sumitomo Heavy Industries Marine & Engineering Co., Ltd, Japan
Abstract. The approach described in this paper provides a framework for capturing knowledge during the curved shell plate's manufacturing, in which the knowledge is articulated in Nested Ripple Down Rules tree format. An interactive computer system has been designed that incorporates a four-faceted knowledge-based framework including: (1) the Virtual Template System provided in the prior work, (2) a Raw Knowledge Interrelated Database, (3) a Knowledge Base, and (4) Knowledge Elicitation and Dissemination. The framework provides beginning workers with the capability to quickly design a manufacturing plan covering all aspects of the design process, as expert workers do. The system, which integrates the proposed approaches, is evaluated by conducting a series of experiments in the shipyard. Knowledge such as how to manufacture the curved shell plate considering multiple parameters of the plate at one time is elicited by evaluating combinations of parameters.

Keywords. tacit knowledge, curved shell plate, manufacturing plan, virtual template
Introduction

A major problem in the manufacturing of curved shell plates is that the design of the manufacturing plan depends heavily on the implicit knowledge, habits and experience of the workers. Manufacturing plans vary according to the knowledge and skills of the craftsman in charge of the plate, even for the same design shape. In other words, current practice depends highly on individuals, and there is a risk of losing the knowledge and skills when they retire. Also, to avoid inaccurate distorted output shapes and trouble in the subsequent heat sealing process, the tacit knowledge and skills for bending plates must be elicited, shared and reused in daily operation. This paper proposes a knowledge-based approach to capture the knowledge and skills used in the curved shell plate's manufacturing process. The knowledge is articulated in NRDR (Nested Ripple Down Rules) [1] tree format, and a computer system is developed to facilitate reuse of the captured knowledge. The whole system is on the 1
Student, Graduate School of Frontier Sciences, the University of Tokyo, Building of Environmental Studies, Room #274, 5-1-5, Kashiwanoha, Kashiwa-city, Chiba 277-8563, Japan; Tel: +81 (4) 7136 4626; Fax: +81 (4) 7136 4626; Email:
[email protected] ; http://www.nakl.t.u-tokyo.ac.jp/
basis of the software system VTS (Virtual Template System) introduced in the prior work, which can virtualize the manufacturing process and automatically suggest manufacturing plans. Because workers often design different manufacturing plans under specific situations, rather than just considering the curvature differences between the design data and the measured shape that the software mentioned above uses, an interactive system has been designed that incorporates a four-faceted knowledge-based framework to provide beginning workers with the capability to quickly design a manufacturing plan covering all aspects of the design process, as expert workers do. These facets are as follows: (1) The VTS (Virtual Template System) introduced in the prior work generates and records information from the virtualized manufacturing process, including the curved shell plates' curvature information, torsion information, and the system's suggested manufacturing plan. (2) The Raw Knowledge Interrelated Database executes the process of capturing information. In this process, experts analyze the information and screen out the data that can lead to better insights into the expert workers' opinions. (3) The Knowledge Base software system implements Nested Ripple-Down Rules to deal with the issues that lead to special manufacturing plan designs (not the system's suggested ones). The knowledge is stored and virtualized as rules in this framework. (4) Knowledge Elicitation analyzes the correlations between the parameters describing the curved shell plate's situations and the relevant manufacturing plans designed by the expert workers, and then translates them into rules; Knowledge Dissemination provides the relevant instructional rule from the Knowledge Base based on the automated measurement of the curved shell plate's situation. The system which integrates the proposed approaches above is evaluated by conducting a series of experiments in the shipyard.
Knowledge such as how to manufacture the curved shell plate considering multiple parameters of the plate at one time is elicited by evaluating combinations of parameters. The tacit knowledge present in the curved shell plate's different situations during the manufacturing process is shown to be effectively elicited, represented and disseminated.
1. Related Work

Hamade et al. proposed a knowledge acquisition (KA) approach based on Nested Ripple Down Rules (NRDR) to assist in mechanical design, focusing on dimensional tolerancing. A knowledge approach to incrementally model expert design processes is implemented. The knowledge is acquired in the context of its use, which substantially supports the KA process [1]. As the prior work of this study, an approach for automatically generating the manufacturing plan using virtual templates was discussed by Sun et al. [2]. The workflow of the system VTS (Virtual Template System) is shown in Figure 1. Firstly, the curved shell plates are extracted from 3D measured data divided into many regions by obstacles. Then, the virtual templates are generated from the ship's design data. Finally, the manufacturing plans, consisting of the heating areas for bending curved shell plates, are automatically generated. This work intended to extract the existing knowledge as an RDR (Ripple-Down Rules) rule set [3] during manufacturing by analyzing the differences between the system's generated manufacturing plans and the workers'
actually used plans. However, without an efficient knowledge-based system, only a single parameter can be analyzed at a time, which cannot satisfy the practical situation in manufacturing, because multiple parameters describing the curved shell plate's situation are usually used to design the next manufacturing plan.
Figure 1. System Overview of VTS (Virtual Template System) from Prior Work [2].
2. Proposed Framework for Capturing Knowledge

2.1. Overview

The overview of the framework which implements the proposed approach is illustrated in Figure 2. The Raw Knowledge Interrelated Database captures information from the virtualized manufacturing process, including the curved shell plates' curvature information, torsion information, and the suggested manufacturing plan from the VTS system of the prior study. The knowledge base system stores and virtualizes the knowledge. To condense the size of the RDRs, Nested Ripple Down Rule Trees (NRDR) [4], a concept hierarchy using RDR, are implemented. A rule-based knowledge base which contains a set of validated rules classified by the curved shell plate's situation is constructed. Besides, the rules are visualized to help the workers understand the knowledge. During the knowledge elicitation process, certain correlations between the parameters describing the curved shell plate's situations and the relevant manufacturing plans designed by the expert workers are elicited and captured into a set of rules. Firstly, workers compare two manufacturing plans: the system's suggested plan and the plan they actually used. If these two plans are not completely the same, interviews are conducted to explain the differences by considering the plate's parameters. Based on the results of the interviews, new rules are added as the captured knowledge. During the knowledge dissemination process, depending on the curved shell plate's situation, the framework can give manufacturing samples and diagnoses dealt with before. Firstly, after the plate is measured at each manufacturing step, whether the VTS system's suggested manufacturing plan can be used or not is evaluated by the worker.
If not, the following manufacturing decision should follow the rule tree from the constructed knowledge base until the most proper diagnosis is found.
Figure 2. System Overview.
2.2. Raw Knowledge Interrelated Database

As shown in Figure 2, the Raw Knowledge Interrelated Database consists of two main kinds of data screened out from the VTS (Virtual Template System): (1) Manufacturing Parameters, such as the angles between the virtual templates' sticks and the distance between the virtual templates' bottom lines and the curved shell plate; (2) the Manufacturing Scene, which is a screenshot of the virtualized template and curved shell plate. These data and manufacturing-scene screenshots ensure that the subsequent knowledge base construction and knowledge elicitation can be carried out efficiently, whether during or after manufacturing. The information about the basic manufacturing manual, without considering the curved shell plate's situation, is stored as the ontology database. The manufacturing parameters and screenshots are given metadata attributes in each rule's XML file and can be searched from the knowledge base interface.

2.3. Knowledge Base Software System

As mentioned in the prior work, there are usually multiple parameters describing the curved shell plate's situation used to design the next manufacturing plan at a time. With plain RDRs, the parameters can only be considered one by one, which clearly causes data redundancy and makes the knowledge base intricate to use. Therefore, the knowledge base software system in this work uses NRDR (Nested Ripple Down Rule Trees), which can efficiently represent the knowledge existing in manufacturing while having a relatively condensed, reasonable tree size. The system
uses NRDR to define a conceptual hierarchy. With a proper combination of the parameters' values, the system ensures that the expert can introduce his/her own vocabulary and express him/herself more naturally. As the example in Figure 3 shows, NRDR can condense the size of the plain RDRs mentioned in the prior work, since the same concept defined by a lower-order RDR tree (C3) may be used multiple times in higher-order trees (where Rule 3.33 is). The diagnosis, which represents the expert's decision, is executed after each rule is reached.
Figure 3. Nested Ripple Down Rule Trees
As shown in Figure 3, the data structure is similar to a decision tree. Each node has a rule in the format "IF cond1 AND cond2 AND ... AND condN THEN conclusion". Each condition is a Boolean evaluation, for example "the max distance D between the virtual templates' bottom line and the curved shell plate > 5 mm", which can also be written as "isGreater(D, 5)". Each node has exactly two output edges, labeled "TRUE" and "FALSE", each connected to another node or to another decision tree such as C3 in Figure 3. As shown in Figure 4, the Knowledge Base in this system is physically stored as a set of XML-based rule sets; the overview of the rule relations is represented in the Rules Relation View shown in Figure 5 (upper), while each rule's details are read from its metadata file and displayed in the Rule's Detail View shown in Figure 5 (lower). Each rule's details and the relations between rules can be displayed, adjusted, modified, or deleted in these views. In addition, by clicking the items in the Rules' Tree View, the corresponding rule's details are displayed to facilitate knowledge retrieval during the curved shell plate's manufacturing process.
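The node structure described above can be sketched as follows. This is a hypothetical illustration: the class name, the condition helper, and the sample rules are invented here; only the IF/THEN rule format, the TRUE/FALSE branches, and the isGreater(D, 5)-style conditions come from the text.

```python
# Hypothetical sketch of an NRDR node; names are illustrative, not from the system.

def is_greater(value, threshold):
    """Boolean condition, e.g. isGreater(D, 5) for 'max distance D > 5mm'."""
    return value > threshold

class NRDRNode:
    """One node of a Nested Ripple-Down Rule tree.

    `conditions` is a list of predicates over the plate's parameters
    (IF cond1 AND ... AND condN); `conclusion` is the diagnosis taken when
    the node fires. `on_true` / `on_false` may be another NRDRNode or a
    whole nested tree (a lower-order concept such as C3), which is what
    lets one concept be reused in several higher-order trees.
    """
    def __init__(self, conditions, conclusion, on_true=None, on_false=None):
        self.conditions = conditions
        self.conclusion = conclusion
        self.on_true = on_true
        self.on_false = on_false

    def evaluate(self, params):
        """Return the diagnosis of the deepest satisfied rule, or None."""
        if all(cond(params) for cond in self.conditions):
            # Descend the TRUE branch; keep this conclusion as the fallback.
            if self.on_true is not None:
                refined = self.on_true.evaluate(params)
                return refined if refined is not None else self.conclusion
            return self.conclusion
        # Condition failed: try the FALSE branch (the exception chain).
        if self.on_false is not None:
            return self.on_false.evaluate(params)
        return None

# Usage: a two-node tree where the FALSE branch supplies a default diagnosis.
root = NRDRNode(
    conditions=[lambda p: is_greater(p["D"], 5)],
    conclusion="line heating along the frame",
    on_false=NRDRNode(conditions=[lambda p: True], conclusion="no heating needed"),
)
print(root.evaluate({"D": 7.2}))   # -> line heating along the frame
print(root.evaluate({"D": 3.0}))   # -> no heating needed
```

The nesting comes for free: assigning a shared subtree object to the `on_true`/`on_false` slots of several nodes reproduces the reuse of a concept like C3 in multiple higher-order trees.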
Figure 4. The XML-based Rule Sets
Figure 5. Rules Relation View and Rule's Detail View
2.4. Knowledge Elicitation and Dissemination

Differences exist between the manufacturing plan suggested by the system and the plan actually used by the workers. To extract the tacit knowledge of the heating process, interviews about these differences were conducted for different cases. The knowledge arising during the manufacturing process is elicited and disseminated based on the database and system introduced in Sections 2.2 and 2.3. During knowledge elicitation, as shown in Figure 6, the human expert's knowledge is acquired in the current context and added incrementally into a binary decision tree as a set of independent rules (if .. elsif rules). In the NRDR framework, an added rule can be a plain rule or can contain a reference to another rule. In other words, NRDR adds new rules, with or without references to other rules, when the existing rules cannot cover the situation of the current context. The knowledge elicitation process is as follows.
i. Figure 6 (upper) shows the original description of the knowledge known before this elicitation step. The plate is measured and analyzed using the VTS system, and the knowledge-related data (the parameters and manufacturing scenes) are stored in the database introduced in Section 2.2.
ii. Interviews are conducted when the VTS's suggested plan cannot be used directly and the existing rule set cannot give a reasonable diagnosis for the current plate's situation. A new rule (Rule 10.101) is added to the tree.
iii. As shown in the figure, several rules (Rules 7.77, 9.91, and 8.88) form the same pattern that needs to be considered and processed before the new Rule 10.101 is found to be necessary.
iv. A new lower-level rules' tree C3 is created and reused in both Rule 3.33 and Rule 10.101. The same process is repeated in the following elicitation steps.
Figure 6. Knowledge Elicitation using Nested Ripple-Down Rules
During knowledge dissemination, depending on the curved shell plate's situation, the framework provides manufacturing samples (the right diagnosis) handled before. The knowledge dissemination process is as follows.
i. After the plate is measured at each manufacturing step, the worker evaluates whether the manufacturing plan suggested by the VTS system can be used.
ii. If not, the worker searches for the right diagnosis by following the rules' tree constructed during knowledge elicitation, based on the plate's parameters stored in the raw knowledge interrelated database. As shown in the figure above, diagnosis 101 is reached when the judgments of Rule 1.11, Rule 3.33, Rule 4.44, and Rule 6.61 are TRUE, FALSE, FALSE, and FALSE, respectively.
iii. Before diagnosis 101 is carried out, the child tree C3, which contains the pre-checking rules (Rules 7.77, 9.91, and 8.88), should be checked and executed first.
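The dissemination walk in steps i-iii can be sketched as a table-driven search. The encoding below is an assumption made for illustration; only the rule identifiers, the TRUE/FALSE trace, and the C3 pre-checking rules come from the text.

```python
# Illustrative rule table: rule id -> (diagnosis if the rule fires,
# next rule on TRUE, next rule on FALSE). None means the walk stops.
RULES = {
    "1.11":   ("diag 11",  "3.33", None),
    "3.33":   ("diag 33",  None,   "4.44"),
    "4.44":   ("diag 44",  None,   "6.61"),
    "6.61":   ("diag 61",  None,   "10.101"),
    "10.101": ("diag 101", None,   None),
}
# Child tree C3: pre-checking rules to execute before carrying out the diagnosis.
PRE_CHECK = {"diag 101": ["7.77", "9.91", "8.88"]}

def search_diagnosis(judgments, start="1.11"):
    """Follow TRUE/FALSE judgments down the tree; return (diagnosis, pre-checks)."""
    rule, diagnosis = start, None
    while rule is not None:
        diag, on_true, on_false = RULES[rule]
        if judgments.get(rule, False):
            diagnosis, rule = diag, on_true   # rule fires: keep its diagnosis
        else:
            rule = on_false                   # rule rejected: follow the exception
    return diagnosis, PRE_CHECK.get(diagnosis, [])

# The trace from step ii: 1.11 TRUE, 3.33 / 4.44 / 6.61 FALSE, so 10.101 is reached.
judg = {"1.11": True, "3.33": False, "4.44": False, "6.61": False, "10.101": True}
print(search_diagnosis(judg))   # -> ('diag 101', ['7.77', '9.91', '8.88'])
```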
3. Experiment in Shipyard

3.1. Overview

This experiment is the follow-up to the experiment from the prior work. After manufacturing with the RDR rule set generated in the prior work, 85% of the points of the curved shell plate's measured point cloud were within 5 mm of the design shape, and the distance between each virtual template's bottom line (the endpoint) and the curved shell plate was within 15 mm. However, when the workers continued trying to minimize the error, new knowledge emerged, which meant that new rules had to be added; among them, some duplicated and difficult rules made both the construction of the knowledge base and the knowledge dissemination inefficient. The main objective of the experiment in this paper was to evaluate whether the proposed NRDR-based knowledge elicitation approach can better capture the manufacturing knowledge during knowledge elicitation and make the knowledge base easier to understand during knowledge dissemination.

3.2. Experiment Using Real Curved Shell Plate in Factory

(A) Knowledge Elicitation Result

The bending situation on both the horizontal and the vertical frames was measured, evaluated, and stored as shown in Figure 7 (left and middle). As shown on the right of Figure 7, the two straight lines are the manufacturing areas suggested by the system; however, neither the errors on the horizontal frames in the upper area nor those on the vertical frames in the middle of the plate were reduced. After checking the plate's vertical and horizontal frames, the expert suggested performing the manufacturing along the rounded areas shown in the middle of the plate. The result confirmed this suggestion. The idea that the errors on the horizontal frames and those on the vertical frames should be considered together was incorporated into the knowledge base as rules C6 and C101 in Figure 8 (right).
Figure 7. The Situation and Manufacturing Area of the Curved Shell Plate
Also, before and after this manufacturing step, the distribution of the error regions and the errors' radius should be taken into consideration, which means that the rules in the dashed lines in Figure 8 (left) should be executed. Therefore, these rules, which are expected to be reused in the following knowledge elicitation, are arranged into a new lower-level RDR tree C3 and referred to from Rule 3.33 and Rule 10.101.
Figure 8. Manufacturing processing flow with virtual template
The rules in Figure 8 (right) are illustrated below.
C3: Same as the rule set in the dashed lines of Figure 8 (left), reused twice (Rules 3.33 and 11.111).
C1: Can the curved shell plate be processed by this system?
C33: Is the error region symmetrical relative to the view?
C5: Can the heating line be arranged symmetrically to the torsion (physically)?
C4: Are there no errors of multiple types (insufficient/over relative to the design) in both the horizontal and vertical directions?
C2: Can the adjacent frames' points which have relatively large errors be connected physically?
C6: Would point/line heating arranged along a single frame's points with relatively large curvature error cause extra errors on other frames?
C101: Would the point/line heating mentioned in C6 overcorrect the existing errors?
C11: Is the point/line heating physically executable?

(B) Knowledge Dissemination Result
In this experiment, another curved shell plate's manufacturing step is analyzed using the Knowledge Base constructed in experiment A.
Figure 9. Manufacturing based on constructed Knowledge Base
As shown in Figure 9 (upper), the situation of the plate is similar to the one in experiment A: the bending on the horizontal frame is insufficient, while that on the vertical frame is excessive. The rule set constructed in experiment A is searched as follows:
C1: TRUE (plate type can be processed)
C3: TRUE (the distribution of the error regions and the errors' radius are checked)
C33: FALSE (the error region is not symmetrical)
C4: FALSE (errors of multiple types exist)
C6: FALSE (point heating arranged along a single frame's points causes no extra error)
C3: TRUE (the distribution of the error regions and the errors' radius are checked again)
C101: FALSE (the point heating mentioned in C6 would not overcorrect the existing errors)
C11: TRUE (point heating is physically executable)
Diagnosis: perform point heating between the single frame's points which have relatively large errors
After this manufacturing step, as shown in Figure 9 (lower), the error areas (the red and blue areas) were reduced. With the virtualized environment, the details of the plate's information are clearer and more specific, helping the worker make decisions; and with the knowledge base constructed in experiment A, even a beginner can make the correct manufacturing decision by following the rule set's guidance.
4. Conclusion and Future Work

This paper proposed a knowledge-based approach to capturing the knowledge and skills of the curved shell plate manufacturing process. The knowledge was articulated in the NRDR (Nested Ripple Down Rules) tree format, and a computer system was developed to facilitate reuse of the captured knowledge. The virtualized environment of the VTS system provided information about the plate efficiently. The framework proposed in this paper was evaluated through a series of experiments in the shipyard. Knowledge such as how to manufacture the curved shell plate while considering multiple parameters of the plate at one time is elicited efficiently with the level-separated Nested Ripple Down Rules by evaluating combinations of parameter values. The experiments also showed that the knowledge base can help workers make decisions. As future work, experiments on other types of curved shell plates will be carried out, and a complete knowledge database will be constructed. Based on a complete knowledge database, the automatic manufacturing plan generation flow proposed in this paper can be expected to be optimized. More effort may also be necessary to maintain the consistency of the rule base.
5. Acknowledgement

This manuscript is an output of the Joint Study supported by NIPPON KAIJI KYOKAI (Class NK). The authors would like to thank UNICUS Co., Ltd. and FARO Japan, Inc. for the use of their large point cloud processing system Pupulpit.
References
[1] R.F. Hamade, V.C. Moulianitis, D. D'Addona, G. Beydoun, A dimensional tolerancing knowledge management system using Nested Ripple Down Rules (NRDR), Engineering Applications of Artificial Intelligence 23(7) (2010), 1140-1148.
[2] J. Sun, K. Hiekata, H. Yamato, N. Nakagaki, A. Sugawara, Virtualization and automation of curved shell plates manufacturing plan design process for knowledge elicitation, Int. J. Agile Systems and Management 7(3) (2014), in press.
[3] B.R. Gaines, P. Compton, Induction of Ripple-Down Rules Applied to Modeling Large Databases, Journal of Intelligent Information Systems 5 (1995), 211-228.
[4] G. Beydoun, A. Hoffmann, NRDR for the acquisition of search knowledge, Advanced Topics in Artificial Intelligence, Lecture Notes in Computer Science 1342 (1997), 177-186.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-153
Using Patent Co-Citation Approach to Explore Blu-ray Technology Classifications

Yu-Hui WANG a,1, Pin-Chen KUO b and Tzu-Han CHOW b
a Department of Information and Finance Management, National Taipei University of Technology, Taiwan, R.O.C.
b Institute of Services and Technology Management, National Taipei University of Technology, Taiwan, R.O.C.
Abstract. Blu-ray Disc (BD), the next generation of optical discs, offers a clearer, sharper image and better sound quality than DVD. In 2013, more than 72 million U.S. households had BD compatible devices. This paper aims to objectively explore the key technology fields and intellectual intelligence of the emerging BD technology for its participants. The patent co-citation approach (PCA), a patent classification system adaptive to the characteristics of a specific industry, is applied to map the key technology fields of BD technology. The results show that BD patents can be classified into eight technological categories. Most patents fall into two factors: factor 1.1, recording medium, recording and reproducing process; and factor 1.2, information recording medium, defect management. This intelligence can benefit patent management in technological forecasting, research planning, and technological positioning for BD technology. Keywords. Blu-ray Disc, patent co-citation approach (PCA), technology classification
Introduction

With fast-advancing technology, patents play the role of strengthening the competitive advantage of enterprises [1]. When dealing with a large number of patents, an efficient patent classification can further benefit patent analysis. Current studies on patent analysis use the International Patent Classification (IPC), developed by the World Intellectual Property Organization (WIPO), or the United States Patent Classification (UPC), developed by the United States Patent and Trademark Office (USPTO), to identify patent classifications. However, both the IPC and the UPC systems are too general to satisfy the needs of a specific industry [2]. Thus, the patent-similarity-based patent classification system, the patent co-citation approach (PCA), was proposed to support the understanding of the essential patents for a specific industry and the relationships among clusters of technology. The result of the PCA can offer explicit intelligence for patent management, technological forecasting, research planning, technological positioning, and strategy making [3]. Optical storage technology has developed through three generations: Compact Disc (CD), Digital Versatile Disc (DVD), and Blu-ray Disc (BD). The technology has been applied
1 Corresponding Author.
Y.-H. Wang et al. / Using Patent Co-Citation Approach
to the market for information storage, automotive electronics, audio appliances, and so on. With the rising need for audio entertainment quality, the BD format offers an immense storage capacity (up to 50 GB) that is well suited for High Definition video recording and distribution, as well as for storing large amounts of data. Almost 100 founding members, including Dell Inc. and Hewlett-Packard Company, initiated the Blu-ray Disc Association (BDA) to promote the BD format in 2005, and the BD market has rapidly expanded. According to numbers compiled by the Digital Entertainment Group (DEG) with input from retail tracking sources, the number of Blu-ray homes continued to grow. Overall, consumer spending on digital content rose 17 percent in 2013. Blu-ray Disc consumer spending remained consistent, up about five percent for the year. The total household penetration of all Blu-ray compatible devices was more than 72 million U.S. homes in 2013 [4]. However, when users use the keywords "CD, DVD, or Blu-ray" to search the patent classifications of the IPC or the UPC, only general and rough classifications of optical storage technology are found. To sufficiently differentiate the techniques among CD, DVD, and BD and to keep up with BD technology classifications, this paper adopts the PCA to construct a BD-related patent classification system.
1. Patent Co-citation Approach (PCA)

Information about patent citations gives patent bibliography analysts an initial basis for understanding the context of technology development and for evaluating the importance of patents [5]. Patent citations document the course of the accumulation of technical knowledge and make connections among related patents. These connections demonstrate the correlations between relevant patents [3]. Lai and Wu (2005) proposed the PCA, based on the co-citation analysis of bibliometrics, to create a patent classification system. The conception and the application of the PCA are shown in Figure 1. For instance, Q1-Q6 are target patents, and P1-P4 are basic patents selected from the target patents. According to the similarity measured by co-cited frequency, the basic patents are classified into two groups, representing different technology categories: P1 and P2 are covered by category F1, while P3 and P4 are assigned to category F2. The target patent Q1 cites the basic patent P1, so Q1 belongs to the F1 category [3].
Figure 1. The conception and the application of the PCA [3] (target patents Q1-Q6, citing; basic patents P1-P4, cited; technology categories F1 and F2)
The analysis of this approach is divided into three phases. Phase I selects appropriate databases to conduct patent searches. Phase II uses the co-cited frequency of the basic patent pairs to assess their similarity. Phase III uses factor analysis to establish a classification system and assess the efficiency of the proposed approach [3].
1.1. Phase 1: Searching and Identifying Basic Patents

According to the purpose of the research, the researcher chooses a proper database and searches for target patents and candidates for basic patents. Target patents are the citing patents to be classified; candidates for basic patents are the patents cited by the target patents [3]. We denote $Q_i$ as target patent $i$ and $CP_j$ as the candidate for basic patent $j$. The referential relationship between the target patents and the candidates for basic patents is expressed as the matrix $[\alpha_{ij}]_{M \times N}$, where $M$ is the number of target patents and $N$ is the number of candidates for basic patents:

$$\alpha_{ij} = \begin{cases} 1 & \text{if } Q_i \text{ cites } CP_j \\ 0 & \text{otherwise} \end{cases} \qquad (1)$$

An older patent has more opportunity to be cited. Hence, besides the cited frequency, the time of citation should be considered as well. To eliminate the bias from patent age when selecting basic patents, the weighted cited frequency is

$$ST_j = \sum_{i=1}^{M} \alpha_{ij} \times W_i, \quad 1 \leq j \leq N \qquad (2)$$

where $W_i$ is the weight of target patent $i$, obtained by subtracting the standard year from the year of patent $i$. A candidate $CP_j$ with $ST_j \geq c$ is defined as a basic patent [3]. The value of $c$ is the threshold for selecting basic patents; it influences the comprehensiveness of the classification system and the complexity of the analysis. The citation relationship between the target patents $Q_i$ and the basic patents $P_j$ is expressed as the new matrix $[\varepsilon_{ij}]_{m \times n}$, where

$$\varepsilon_{ij} = \begin{cases} 1 & \text{if } Q_i \text{ cites } P_j \\ 0 & \text{otherwise} \end{cases} \qquad (3)$$
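Phase 1 (Eqs. 1 and 2) can be sketched as follows. The citation data, years, and threshold below are invented toy values for illustration only; the real study uses USPTO data with standard year 1977.

```python
# Toy inputs (illustrative only): which candidates each target patent cites,
# and each target patent's issue year.
citations = {                      # target patent -> cited candidates (alpha_ij = 1)
    "Q1": {"CP1", "CP2"},
    "Q2": {"CP1"},
    "Q3": {"CP2", "CP3"},
}
years = {"Q1": 2005, "Q2": 2010, "Q3": 2012}
standard_year = 2000               # chosen so every weight W_i stays positive

def select_basic_patents(citations, years, standard_year, c):
    """Eq. (2): ST_j = sum_i alpha_ij * W_i; keep candidates with ST_j >= c."""
    weights = {q: y - standard_year for q, y in years.items()}   # W_i
    st = {}
    for q, cited in citations.items():
        for cp in cited:
            st[cp] = st.get(cp, 0) + weights[q]
    return {cp for cp, s in st.items() if s >= c}, st

basic, st = select_basic_patents(citations, years, standard_year, c=15)
print(sorted(st.items()))   # -> [('CP1', 15), ('CP2', 17), ('CP3', 12)]
print(sorted(basic))        # -> ['CP1', 'CP2']
```

In the paper's setting the same computation is applied to 403 target patents and 4,212 candidates with c = 137, yielding the 192 basic patents of Table 1.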
where $m$ is the number of target patents that can be classified by basic patents and $n$ is the number of basic patents [3].

1.2. Phase 2: Evaluating the Similarities of Basic Patent Pairs

The PCA employs the Pearson correlation coefficient to assess the similarity of each basic patent pair. Three steps are used to calculate the similarity of each basic patent pair.

Step 1: Calculate the co-cited frequency of each basic patent pair. Given patents $j$ and $j'$, the co-cited frequency of this patent pair is

$$\omega_{jj'} = \begin{cases} \sum_{i=1}^{m} \varepsilon_{ij}\,\varepsilon_{ij'} & \text{if } j \neq j' \\ 0 & \text{if } j = j' \end{cases}, \quad 1 \leq j \leq n,\ 1 \leq j' \leq n \qquad (4)$$
A symmetrical matrix $[\omega_{jj'}]_{n \times n}$ can be obtained after calculating the co-cited frequencies of all basic patent pairs.

Step 2: Calculate the linkage strength of each basic patent pair:

$$\pi_{jj'} = \begin{cases} \dfrac{\omega_{jj'}}{S_j + S_{j'} - \omega_{jj'}} & \text{if } j \neq j' \\ 0 & \text{if } j = j' \end{cases}, \quad 1 \leq j \leq n,\ 1 \leq j' \leq n \qquad (5)$$

where $S_j = \sum_{i=1}^{m} \varepsilon_{ij}$ represents the cited frequency of the basic patent $P_j$ [3]. A symmetrical matrix $[\pi_{jj'}]_{n \times n}$ can be obtained after calculating the linkage strengths of all basic patent pairs.

Step 3: Calculate the Pearson correlation coefficient of each basic patent pair from the symmetrical matrix $[\pi_{jj'}]_{n \times n}$ to obtain the matrix of Pearson correlation coefficients $[\gamma_{jj'}]_{n \times n}$ [3].
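Steps 1-3 can be sketched with numpy as follows. The epsilon matrix below is a made-up 4-target x 3-basic example; variable names follow the text.

```python
import numpy as np

eps = np.array([[1, 1, 0],        # epsilon: rows = target patents (m=4),
                [1, 0, 1],        #          columns = basic patents (n=3)
                [1, 1, 1],
                [0, 1, 0]])

omega = eps.T @ eps               # pairwise co-citation counts; diagonal = S_j
S = np.diag(omega).copy()         # S_j: cited frequency of each basic patent
np.fill_diagonal(omega, 0)        # Eq. (4): omega_jj = 0 on the diagonal

# Eq. (5): linkage strength pi = omega / (S_j + S_j' - omega), 0 on the diagonal
denom = S[:, None] + S[None, :] - omega
pi = np.where(np.eye(len(S), dtype=bool), 0.0, omega / denom)

# Step 3: Pearson correlation between rows of the linkage-strength matrix
gamma = np.corrcoef(pi)
print(np.round(pi, 2))
```

For this toy data, pi[0, 1] = 2 / (3 + 3 - 2) = 0.5, i.e. basic patents 1 and 2 were co-cited twice out of four possible co-citing target patents.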
1.3. Phase 3: Creation of the Patent Classification System

In this phase, the Pearson correlation coefficients of the basic patents are used in a factor analysis to classify the basic patents. After the factor analysis, the loading of each variable (patent) on a factor (technical category) indicates the degree of importance of the basic patent to the technical category, and it also helps in naming the technical categories [3]. Besides, the correlation coefficient between factors indicates the degree of correlation of the technical categories. The performance of the classification system can be evaluated by three indicators: the cover index, the weight cover index, and the consistency index [6]. The three indicators are described as follows.

Indicator 1: cover index, defined as

$$\text{cover index} = \frac{m}{M} \qquad (6)$$

where $M$ is the number of target patents and $m$ is the number of target patents that can be classified [6].

Indicator 2: weight cover index, defined as

$$\text{weight cover index} = \frac{\sum_{i=1}^{m} W_i}{\sum_{i=1}^{M} W_i} \qquad (7)$$

where $W_i$ is the frequency with which patent $i$ is cited by other target patents [6]. This indicator is a revised version of the cover index, weighted by the importance of each patent as measured by its cited frequency.

Indicator 3: consistency index, defined as
$$\text{consistency index} = \frac{m - x}{m} \qquad (8)$$

where $x$ is the number of target patents that are classified into multiple factors [6]. High values of all three indicators indicate good performance of the patent classification system [6].
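The three indicators (Eqs. 6-8) can be computed as follows; the patent lists and citation frequencies below are invented for illustration.

```python
def classification_indicators(classified, cited_freq, all_targets, multiply_classified):
    """Return (cover, weight cover, consistency) per Eqs. (6)-(8)."""
    M = len(all_targets)
    m = len(classified)
    cover = m / M                                               # Eq. (6)
    weight_cover = (sum(cited_freq[p] for p in classified)
                    / sum(cited_freq[p] for p in all_targets))  # Eq. (7)
    consistency = (m - multiply_classified) / m                 # Eq. (8)
    return cover, weight_cover, consistency

# Toy data: 3 of 5 target patents classified, one of them in multiple factors.
all_targets = ["Q1", "Q2", "Q3", "Q4", "Q5"]
classified = ["Q1", "Q2", "Q4"]
cited_freq = {"Q1": 4, "Q2": 3, "Q3": 1, "Q4": 2, "Q5": 0}
cover, wcover, cons = classification_indicators(
    classified, cited_freq, all_targets, multiply_classified=1)
print(cover, wcover)   # -> 0.6 0.9
```

Note how a low cover index (0.6) can still coexist with a high weight cover index (0.9) when the classified patents are the heavily cited ones, exactly the pattern reported in Section 2.4.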
2. Empirical Case: Blu-ray Disc Technology

The PCA is a dynamic, self-constructing methodology for creating a patent classification system that can reflect the existing status of a technology [6]. Compared to static classification systems such as the IPC or the UPC, the system created by the PCA better reflects the characteristics and existing status of any specific technology [6]. In order to offer explicit intelligence for BD patent management, this paper applies the PCA to illustrate the emerging BD technology classifications. The process is described below.

2.1. Data Collection

Utility patents whose abstracts, claims, and titles contain the keywords and phrases "blue-ray"², "blu-ray", or "blu ray" were collected from the USPTO database. 403 patents were collected and identified as target patents. These patents cite 4,212 patents in total, which were the candidates for basic patents. We calculated $ST_j$ using Eq. (2) with standard year 1977, which was set so that each weight remains positive. The results are shown in Table 1. We used $ST_j \geq 137$ as the criterion to select basic patents, and 192 basic patents were identified from the 4,212 candidates, approximately the top 5%.

2.2. Evaluating the Similarities of Basic Patent Pairs

The matrix of co-cited frequencies of the basic patent pairs $[\omega_{jj'}]_{n \times n}$ and the matrix of linkage strengths $[\pi_{jj'}]_{n \times n}$ were constructed with Eqs. (4) and (5), respectively. We then calculated the Pearson correlation coefficient of each basic patent pair to obtain the matrix of Pearson correlation coefficients $[\gamma_{jj'}]_{n \times n}$ for the next step.

2.3. Factor Analysis

The Pearson correlation coefficients of the basic patents are input to factor analysis. This study uses principal component analysis with promax rotation to extract factors. Based on the eigenvalue-greater-than-1 criterion, 7 factors are obtained, which account for 99.88% of the variance. The eigenvalues and variances explained by the factors are shown in Table 2.
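The eigenvalue-greater-than-1 criterion can be sketched as follows. The 4 x 4 correlation matrix below is made up; a real run would use the 192 x 192 Pearson matrix from Phase 2, and the promax rotation itself (not shown) requires a dedicated factor-analysis routine.

```python
import numpy as np

# Toy correlation matrix with two correlated pairs of variables.
gamma = np.array([[1.0, 0.8, 0.0, 0.0],
                  [0.8, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.6],
                  [0.0, 0.0, 0.6, 1.0]])

eigenvalues = np.linalg.eigvalsh(gamma)       # symmetric matrix -> real eigenvalues
n_factors = int(np.sum(eigenvalues > 1.0))    # retain factors with eigenvalue > 1
explained = eigenvalues[eigenvalues > 1.0].sum() / eigenvalues.sum()
print(n_factors, round(explained, 2))         # -> 2 0.85
```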
2 In February 2002, the introduction of the Blu-ray Disc (BD) format was announced. However, the phrase "blue-ray" was still commonly used in patent applications before and after its official naming.
Table 1. Weighted frequency of the candidates for basic patents

STj  Freq. Accum. | STj  Freq. Accum. | STj   Freq. Accum.
376    1     1    | 187    3    44    | 132     7    319
333    1     2    | 185   13    57    | 131     1    320
327    1     3    | 177    1    58    | 127     1    321
318    1     4    | 175   33    91    | 125     1    322
314    1     5    | 174    1    92    | 123     1    323
270    1     6    | 166   32   124    | 122     3    326
252    1     7    | 165    2   126    | 121     7    333
229    1     8    | 164    1   127    | 120     1    334
222    7    15    | 163    3   130    | 119     1    335
220    5    20    | 162    3   133    | 118    12    347
215    1    21    | 161   11   144    | 117     1    348
214    2    23    | 159   12   156    | 116     1    349
213    1    24    | 155    2   158    | 115     1    350
208    1    25    | 152    2   160    | 114     5    355
204    1    26    | 150    1   161    | 112     1    356
202    2    28    | 149    1   162    | 111     1    357
201    3    31    | 147    1   163    | 110     3    360
200    1    32    | 146    1   164    | 109     1    361
195    1    33    | 145   17   181    | 105     7    368
193    2    35    | 142    1   182    | 101    84    452
192    1    36    | 141    1   183    | 100     1    453
190    1    37    | 140    2   185    | 1~99  3759   4212
189    1    38    | 137    7   192    |
188    3    41    | 133  120   312    |
Factor 1 includes 104 patents, of which 48 have negative loadings and 56 have positive loadings. This great difference between positive and negative loadings impairs the naming of factors, so a second-round factor analysis was conducted to separate the 104 patents of factor 1; two sub-factors were then obtained, which account for 99% of the variance. The eigenvalues and variances explained by these two sub-factors are shown in Table 3. After the basic patents are classified, each target patent is assigned to the factor to which most of its citations belong. 90 target patents can be classified, and 4 patents are classified into multiple factors with the same frequency. The result of the basic and target patent classification is shown in Table 4. There are 56 basic patents and 19 target patents in factor 1.1, and 48 basic patents and 27 target patents in factor 1.2, respectively. Each factor is named after the titles of its basic patents, as shown in Table 4.

Table 2. Eigenvalues and variances explained by factors

Factor | Initial Eigenvalues (Total / % of Variance / Cumulative %) | Extraction Sums of Squared Loadings (Total / % of Variance / Cumulative %) | Rotation Sums of Squared Loadings (Total)
1 | 82.906 / 43.180 / 43.180 | 82.906 / 43.180 / 43.180 | 80.758
2 | 56.528 / 29.441 / 72.621 | 56.528 / 29.441 / 72.621 | 63.928
3 | 23.209 / 12.088 / 84.709 | 23.209 / 12.088 / 84.709 | 39.978
4 | 16.117 /  8.394 / 93.104 | 16.117 /  8.394 / 93.104 | 38.7
5 |  6.104 /  3.179 / 96.283 |  6.104 /  3.179 / 96.283 | 44.858
6 |  4.207 /  2.191 / 98.474 |  4.207 /  2.191 / 98.474 | 16.889
7 |  2.906 /  1.514 / 99.988 |  2.906 /  1.514 / 99.988 | 30.302
Table 3. Eigenvalues and variances explained by the two sub-factors of factor 1

Factor | Initial Eigenvalues (Total / % of Variance / Cumulative %) | Extraction Sums of Squared Loadings (Total / % of Variance / Cumulative %) | Rotation Sums of Squared Loadings (Total)
1 | 79.299 / 76.249 / 76.249 | 79.299 / 76.249 / 76.249 | 68.770
2 | 23.738 / 22.825 / 99.074 | 23.738 / 22.825 / 99.074 | 63.225
2.4. Evaluating the Performance of the Classification

We use Eqs. (6), (7), and (8) from Section 1.3 to calculate the values of the three quantitative indicators; the results are shown in Table 5. The cover index is 22.3%, indicating that 90 of the 403 target patents are classified by the PCA. However, all target patents with higher cited frequency (more than 3) are classified in this study; the remaining target patents, which are cited less by other target patents, are less important and can be ignored. The weight cover index weights each target patent by the frequency with which it is cited by other target patents. Its value is 67.3%, which is better than the prior studies conducted by Chen (2005) [7], Lai and Wu (2005) [3], and Yeh (2005) [8]. It can thus be concluded that this classification performs better at classifying critical patents, whose importance is measured by the frequency of citation by target patents. The consistency index is 95.6%, as shown in Table 5, which shows that few patents were classified into multiple factors. Therefore, this classification system has high consistency.
Table 4. Names of factors

Factor 1 (104 basic patents, 45 target patents): Information recording medium, recording and reproducing process, method and manufacture, information management and processing.
  Sub-factor 1.1 (56 basic, 19 target): Recording medium, recording and reproducing process, method and manufacture, rewritable compact disk, erase content, recovering information, protecting copyright, identifying code, optical disc and apparatus, discriminating system.
  Sub-factor 1.2 (48 basic, 27 target): Information recording medium, defect management, maintaining data, defective area processing, information management, recording, reproducing processing, replacement process, spare area management.
Factor 2 (40 basic, 7 target): Image processing, graphics display, video processing apparatus, image information combine, encoding/decoding, subtitle processing, reproducing data.
Factor 3 (18 basic, 12 target): Electrochromic printing medium, piracy-protected recording, theft deterrent coating, RFID security for optical disc.
Factor 4 (16 basic, 8 target): Controlling interactive media, plurality of data streams, multiple sources, controlling timing signal, text subtitle data synchronized, organizing data, configuration functions.
Factor 5 (7 basic, 12 target): Optical information medium, optical recording medium production method, production apparatus, program, and recordable optical disc.
Factor 6 (4 basic, 3 target): Decoding information, reproduction method and apparatus, recording apparatus and playing apparatus.
Factor 7 (3 basic, 6 target): Bitmap data encoding, display format, video compression method and system.

* 4 patents are classified into multiple factors with the same frequency: one patent in factors 1.1 and 1.2; one in factors 5 and 6; and two in factors 1.2 and 5.
Table 5. Quantitative indicators

Indicator            Value
Cover index          22.3%
Weight cover index   67.3%
Consistency index    95.6%
3. Conclusion

BD, the next generation of optical discs, offers three major improvements over DVD: a clearer, sharper image; better sound quality; and more special features. The total household penetration of all Blu-ray compatible devices exceeded 72 million U.S. homes in 2013. To sufficiently differentiate the techniques among Compact Disc (CD), Digital Versatile Disc (DVD), and BD and to keep up with BD technology classifications, this paper adopts the PCA classification system and extracts eight BD patent categories. The consistency index is 95.6% and the weight cover index is 67.3%, indicating good classification performance. The results also indicate that most basic and target patents are classified into factor 1.1 (recording medium, recording and reproducing process) and factor 1.2 (information recording medium, defect management). The results can offer explicit intelligence for patent management, technological forecasting, research planning, technological positioning, and strategy making for BD technology.
References
[1] Y. Wang and B. Liu, Innovation effect on patent pool formation: empirical case of Philips' patents in Digital Versatile Disc (DVD) 3C, International Journal of Automation and Smart Technology 3 (2013), 155-167.
[2] D. Archibugi and M. Pianta, Measuring technological change through patents and innovation surveys, Technovation 16(9) (1996), 451-468.
[3] K. Lai and S. Wu, Using the patent co-citation approach to establish a new patent classification system, Information Processing and Management 41 (2005), 313-330.
[4] The Digital Entertainment Group (DEG), DEG Year-End 2013 Home Entertainment Report, January 7, 2014.
[5] J.O. Lanjouw and M. Schankerman, Patent quality and research productivity: measuring innovation with multiple indicators, Economic Journal 114(495) (2004), 441-465.
[6] S. Wu, Using Patent Co-citation Analysis to Establish Patent Classification Support System, Illustrated with Foundry Industry, National Yunlin University of Science and Technology, Yunlin, Taiwan, 2003.
[7] K. Chen, Using Highly Cited and Patent Co-citation to Foresight PDA Technology Trajectory, National Yunlin University of Science and Technology, Yunlin, Taiwan, 2005.
[8] S. Yeh, A Study of Patent Portfolio Based on PCA: Case of TFT-LCD Panel Manufacturer, National Yunlin University of Science and Technology, Yunlin, Taiwan, 2005.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-162
A Multi-agent Approach to the Maximum Weight Matching Problem
Gang SHEN 1 and Yun ZHANG
School of Software Engineering, Huazhong University of Science and Technology, Wuhan, China 430074
Abstract. The maximum weight matching (MWM) problem for bipartite graphs is a fundamental combinatorial optimization problem found extensively in engineering applications such as job assignment and resource allocation. The traditional solution to MWM requires all weights to be collected on a centralized entity. In this paper, we present a multi-agent approach to solving MWM by message passing among all involved agents in a concurrent way. The proposed distributed algorithm is motivated by two related factors: first, in certain circumstances no server is available to provide centralized services, and second, the agents may not be willing to share their secret weight information with others. In the proposed approach, an MWM problem is first converted to the primal-dual linear programming setting, and a nonlinear dynamics is introduced that evolves toward the solution. Each agent updates its estimates of the primal-dual variables following the dynamics, using a step parameter determined only by the size of the problem, before sending messages to other agents. We prove the optimality and convergence of the proposed algorithm and study experimental results obtained by simulating some typical problem setups. The simulation experiments demonstrate the effectiveness of the proposed approach.
Keywords. Multi-agent, neural networks, message passing
1. Introduction
The maximum weight matching (MWM) problem is one of the well-studied problems in the field of combinatorial optimization and has been extensively used in resource allocation, task scheduling and other applications. In general, a maximum weight matching problem is depicted by a weighted bipartite graph G = (U, V, E), where U is a finite set of agents, V is a finite set of tasks, and E is a set of weighted edges connecting U and V. The objective of a matching is to find a subset of E such that each edge in this subset assigns every agent to exactly one task, and every task to exactly one agent. The maximum matching is one in which the total weight on all assignment edges is maximized. If the numbers of agents and tasks are equal and the total cost of the assignment for all tasks is equal to the sum of the costs for each agent (or the sum of the costs for each task, which is the same thing in this case), then the problem is also called the linear assignment problem. The linear assignment problem can be cast into an integer programming problem and further relaxed as a linear
Corresponding Author.
G. Shen and Y. Zhang / A Multi-Agent Approach to the Maximum Weight Matching Problem
163
programming problem. Among many well known methods, the benchmark Hungarian algorithm solves the maximum weight matching problem in O(n³) polynomial time, where n is the number of agents/tasks. Recently, message passing algorithms have been investigated to solve the maximum weight matching problem. Belief propagation and its variation, loopy belief propagation, can be used to reach the maximum a posteriori (MAP) assignment of a discrete probability distribution specified by a graphical model. In [1, 2, 3], the authors proposed iterative message passing algorithms for finding the maximum weight matching in bipartite graphs based on max-product belief propagation and proved the correctness and convergence of the algorithms when the solution to MWM is unique. It turns out that the message passing algorithms possess the same time complexity as the Hungarian method. In this paper, we propose a multi-agent scheme to solve the MWM problem in a distributed way. In this multi-agent setting, each agent plays an identical role by communicating with other agents while updating its behavior locally, without direct knowledge of the key parameters (weights) that should be kept secret by their owners. As we know, the Hungarian method needs to have the weight matrix in one piece, and the computation is performed in a centralized way. The message passing algorithms need to store historic messages and find the maximum locally. At least two types of nodes (factor nodes, task nodes and agent nodes) are needed in the message passing algorithms in [2, 3]. We introduce a simplified neural network that solves the relaxed LP problem using a primal-dual framework within which the computation is carried out only on agents, and no task nodes are involved.
This is motivated by the facts that intelligent devices are widely available and that, in many applications, in particular when many agents participate, a centralized mechanism is either absent or its computation cost is prohibitively high. In the proposed method, we have only one type of node, agents, and each homogeneous agent plays the same role in the process. In this paper, we prove the correctness and the global convergence of the proposed approach, and also present a parameter selection method that guarantees the convergence of the Euler implementation of the multi-agent dynamics. The rest of the paper is organized as follows. In Section 2, we review the work related to the investigation in this paper, namely neural networks solving nonlinear problems and message passing algorithms. In Section 3, we formulate the MWM problem and present the dynamic model and algorithm for finding the solution to MWM. Experiments are provided in Section 4, showing the effectiveness of the proposed multi-agent approach. Finally, in Section 5, we conclude the paper by discussing the performance of the proposal and future research.
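As a concrete baseline for the centralized solution that the paper contrasts with, the sketch below solves a small MWM instance with SciPy's Hungarian-style solver. The 3 x 3 weight matrix is invented for illustration, and the snippet assumes SciPy 1.4 or later, where `linear_sum_assignment` accepts `maximize=True`.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Invented weight matrix: w[i, j] is the benefit of assigning agent i to task j.
w = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])

# Centralized MWM: the whole matrix must be visible to a single solver,
# exactly the requirement the multi-agent approach removes.
rows, cols = linear_sum_assignment(w, maximize=True)
total = w[rows, cols].sum()
print(cols.tolist(), float(total))   # → [0, 2, 1] 11.0
```

Agent 0 takes task 0, agent 1 takes task 2 and agent 2 takes task 1, for a total weight of 11; the distributed algorithm developed below should reach the same assignment without gathering the matrix in one place.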
2. Related Work
2.1 Neural networks for linear programming problems
Traditionally, linear programming problems are solved by Dantzig's simplex algorithm, which has a polynomial time complexity for the average case. However, if the
problem size goes up, the computational cost is still very expensive. Using analog neural network circuits to solve linear programming problems of large size has become an alternative and has attracted a lot of attention since Tank and Hopfield invented a neural network converting linear programming problems to a closed-loop circuit. In recent years, many researchers have studied various neural networks for finding solutions to linear or nonlinear optimization problems in real time [6,7,8,10], taking advantage of the parallel processing capacity and fast convergence properties of neural networks. In general, in order to solve a constrained optimization problem (e.g. LP) with a neural network, one needs to convert the optimization problem into a dynamic system such that the equilibrium point of the system constitutes an optimal solution to the original problem. The performance of a neural network constructed in this way depends on the complexity of the network circuits and the convergence speed. Specifically, a neural network based approach to LP can be modified from the method proposed in [6] for solving quadratic programming. Consider the LP problem in the form: minimize cᵀx subject to Ax ≥ b and x ≥ 0, and its dual: maximize bᵀy subject to Aᵀy ≤ c and y ≥ 0. The steady-state solution (x*, y*) of an associated primal-dual dynamic system then provides a solution to this pair of problems. In [7], a simplified neural network is adopted to reduce the complexity by removing part of the circuit. Similarly, a nonlinear neural network to solve LP is given in [9].
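The flavor of such a dynamic system can be illustrated with a rough numerical sketch. The code below is not the specific circuit of [6] or [7]: it integrates a generic projected primal-dual gradient flow for a toy equality-constrained LP with the Euler method, and the damping term rho, the step alpha and the problem data are all simplified assumptions made for demonstration.

```python
import numpy as np

# Toy LP (invented data): minimize c^T x  subject to  A x = b, x >= 0.
# Optimum is x* = (1, 0) with shadow price y* = 1.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x = np.array([0.5, 0.5])   # primal state of the "network"
y = np.zeros(1)            # dual state (shadow price)
alpha, rho = 0.05, 1.0     # Euler step and damping term (assumed values)

for _ in range(3000):
    r = b - A @ x                             # primal residual
    grad_x = c - A.T @ y - rho * (A.T @ r)    # gradient of the augmented Lagrangian
    x = np.maximum(0.0, x - alpha * grad_x)   # projected primal descent (mode switch at x = 0)
    y = y + alpha * r                         # dual ascent

print(np.round(x, 3), np.round(y, 3))
```

The projection max(0, .) plays the role of the mode-switching nonlinearity discussed later in the paper: once a component hits zero, its dynamics change from one linear mode to another.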
2.2 Message passing algorithms for MWM
In a graphical model representing the joint probability of a number of random variables, belief propagation (BP) is a powerful tool to infer the marginal and the maximum a posteriori probabilities via iterative message passing. For acyclic graphs, it is theoretically guaranteed that belief propagation based algorithms converge to the marginal and MAP probabilities within finitely many steps [4]. As a result, BP algorithms achieve successful results in many applications. The extension of classic BP to graphs with cycles, loopy BP (LBP), though not guaranteed to converge, or to converge to the correct probabilities, has also found empirical success in a wide range of applications [5]. In order to apply the LBP message passing algorithms to MWM, a bipartite graph is first converted to a graphical model such that the maximum joint probability of the nodes (random variables) coincides with the assignments corresponding to the MWM. By assigning proper distributions to the nodes and edges of the graph, a max-product or min-sum algorithm can then be used to reach the MAP solution. The maximum a posteriori probability of each node provides an assignment. If the LP relaxation of MWM is tight, this assignment is the solution to MWM. In [1], the authors investigated an LBP-fashion algorithm that uses message passing to find the optimal solution to MWM. Although a bipartite graph is cyclic, the LBP converges to the MWM solution in finitely many steps, as proved in [3]. In [2], the authors derived iterative message passing update rules for solving the bipartite maximum weighted matching problem. It is shown that if the optimal matching solution is unique, the algorithm converges to this optimal solution at a rate comparable to the algorithm of Bayati et al. [3]. It is shown that the two algorithms are both standard message passing, but on dual graphs of each other. Also, the algorithm presented there requires less storage.
The authors also provide a method to use the proposed algorithm to solve the integer maximal weighted matching problem, i.e., the case where the optimal solution is generally not unique. On a bipartite graph, considered as a complete weighted undirected graph, let U be the set of agent nodes and V be the set of task nodes. Each node may randomly pick a number from the task or agent indices, and the joint probability of the random variables in U and V taking values on the bipartite graph is given by the product of pair-wise potential functions on the edges and unary potential functions on the nodes. If the pair-wise and unary potential functions are set properly, then the MAP solution is the same as the solution to MWM as long as the MWM solution is unique.
Then we may apply LBP to pass messages between the agent nodes and task nodes on the bipartite graph until the beliefs converge. A message sent from an agent node to a task node at time instant t, and, similarly, a message sent from a task node to an agent node at time instant t, are defined by the max-product update rules; the beliefs of the agent and task nodes are then respectively given by combining the incoming messages with the unary potential functions. In [4] and [5], it is proved that the beliefs will converge, using the representation of the computation tree.
3. Model and Algorithm
Consider a bipartite graph G = (X, Y, E), where X is the agent node set and Y is the task node set; an edge (i, j) in E is associated with a nonnegative real weight w_ij. A matching is a subset M of E such that no node of the graph is associated with more than one edge of M. The objective of MWM is to find a matching for which the sum of the weights of its edges is maximized. If we denote by P the collection of all permutation matrices of size n x n, the solution to MWM is the permutation matrix in P that maximizes the total weight of the selected edges. Or, equivalently, denote by w and x the vectors obtained by stacking the weights w_ij and the assignment variables x_ij, and by A the constraint matrix encoding the row and column sums of the assignment. Finding the optimal matching can then be achieved by solving an integer programming problem: maximize wᵀx subject to Ax = 1 and x_ij in {0, 1}.
If we replace the condition x_ij in {0, 1} with x_ij ≥ 0, the problem becomes a linear programming problem. When the integer program has a unique solution, the LP relaxation solution is identical to that of the integer program. The primal of this LP relaxation is: (LP) maximize wᵀx subject to Ax = 1 and x ≥ 0, and its dual is: minimize 1ᵀy subject to Aᵀy ≥ w, where y is the vector of shadow prices associated with the nodes in X and Y: some components of y are the shadow prices on the agent nodes and the others are the shadow prices on the task nodes.
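The tightness of this LP relaxation can be checked numerically. The sketch below, with an invented 3 x 3 weight matrix, builds the doubly-stochastic constraints, solves the relaxation with SciPy's `linprog`, and observes that the optimizer returned is integral, i.e., a permutation matrix.

```python
import numpy as np
from scipy.optimize import linprog

# Invented weights; the unique optimal matching is (0->0, 1->2, 2->1), value 11.
w = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])
n = w.shape[0]

# Constraint matrix A: every row and every column of x sums to 1.
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0   # row-sum constraints
    A_eq[n + i, i::n] = 1.0            # column-sum constraints
b_eq = np.ones(2 * n)

# linprog minimizes, so negate the weights to maximize w^T x.
res = linprog(-w.flatten(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
x = res.x.reshape(n, n)
print(np.round(x), -res.fun)   # the relaxation returns a permutation matrix here
```

Because the feasible set is the Birkhoff polytope, whose vertices are exactly the permutation matrices, a simplex-type solver lands on an integral vertex whenever the optimum is unique.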
We introduce a neural network (simplified from [9,10]) following a nonlinear dynamics (NLD) to solve the primal and dual linear programming problems. We note that the dynamics of the primal and dual variables involve nonlinearity: each variable switches between two linear modes (Mode 1 and Mode 2), depending on whether its value is greater than 0. In the statement of the dynamics, the i-th row of the matrix A governs the i-th constraint. If we always start from a nonnegative initial point, it is obvious to notice that the state remains nonnegative. And when a variable falls into Mode 2, it is equivalent to saying that the variable equals 0 and its derivative vanishes.
Theorem 1 (Optimality of the equilibrium point). The equilibrium point of the system (NLD) is an optimal solution to the problem LP.
Lemma. The time derivative of the Lyapunov function of (NLD) vanishes if and only if all variables are at an equilibrium point.
Then we are able to derive the convergence property of the NLD system.
Theorem 2 (Global convergence). Starting from any nonnegative initial point, the NLD system will converge to its equilibrium.
In the multi-agent framework, we let each agent update its selection based on the information it receives from other agents. There is no need to have task nodes in the communication process, while the updating of the task-side variables is allocated to the agents (see Figure 1; the shaded Y nodes are actually delegated to the shaded X nodes). In other words, agents are X nodes: after receiving messages from the Y nodes (delegated to agents), each agent calculates its update and subsequently sends it to the agent nodes representing the Y nodes; on the other hand, each Y node (actually located on an agent node), after receiving its messages, sends the computed values to the X nodes. The message passing algorithm is listed in Table 1.
Figure 1. The message passing scheme between agents; the shadowed nodes are identical in implementation
Table 1. Message passing algorithm of an agent
Step 1: Select initial values of the variables.
Step 2: Update the primal variables, and send them to the other agents.
Step 3: Update the dual variables, and send them to the other agents.
Step 4: If the variables of all agents converge, stop; otherwise go to Step 2.
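The agent loop of Table 1 can be mocked up as follows. This is a structural sketch under stated assumptions, not the paper's exact NLD update rules: each agent privately holds one row of the weight matrix and hosts the shadow price of one delegated task column, and the local update used here is a generic projected primal-dual step with an assumed damping term rho.

```python
import numpy as np

# Invented weights; agent i privately knows only row w[i].
w = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])
n = len(w)
alpha, rho = 0.05, 1.0   # assumed step and damping parameters

class Agent:
    def __init__(self, i):
        self.i = i
        self.x = np.full(n, 1.0 / n)  # private assignment estimates (its row)
        self.u = 0.0                  # shadow price of its own row constraint
        self.v = 0.0                  # hosted shadow price of column i (delegated Y node)
        self.col_resid = 0.0          # residual of the hosted column constraint

agents = [Agent(i) for i in range(n)]
for _ in range(5000):
    # Message round 1: agents send their x entries; hosts compute column residuals.
    col_sums = sum(a.x for a in agents)
    for a in agents:
        a.col_resid = col_sums[a.i] - 1.0
    # Message round 2: hosts broadcast prices and residuals; agents update locally.
    v = np.array([a.v for a in agents])
    s = np.array([a.col_resid for a in agents])
    for a in agents:
        r = a.x.sum() - 1.0                        # local row residual
        grad = -w[a.i] + a.u + v + rho * (r + s)   # damped primal-dual gradient
        a.x = np.maximum(0.0, a.x - alpha * grad)  # projected Euler step
        a.u += alpha * r
    for a in agents:
        a.v += alpha * a.col_resid

match = [int(a.x.argmax()) for a in agents]
print(match)
```

The point of the exercise is the communication pattern: no entity ever sees the full weight matrix, only each agent's current estimates and prices cross the network.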
In the implementation of the NLD differential equations, we apply the basic Euler form, advancing the state by a step dt along the linear dynamics of the currently active mode. Let M(t) denote the system matrix of the linear mode that is active at time instant t; then we have the following result concerning the simple Euler implementation of the differential equations.
Corollary (Euler method convergence). If we select the parameters K and dt such that the Euler iteration matrix I + dt·M(t) has all eigenvalue norms less than or equal to 1 at all time points t, then the difference equations will converge to the equilibrium globally.
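The corollary's condition can be checked numerically for a given mode. The sketch below forms the Euler iteration matrix I + dt·M for one linear mode and verifies that its eigenvalue norms do not exceed 1; the matrix M is an invented example with the damped-rotation structure typical of primal-dual dynamics (skew-symmetric coupling plus damping), not the paper's exact system matrix.

```python
import numpy as np

dt = 0.7071  # about 1/sqrt(2), the step used in the 2 x 2 experiments below

# Invented mode matrix: skew-symmetric primal-dual coupling with damping.
M = np.array([[-1.0,  1.0],
              [-1.0,  0.0]])

T = np.eye(2) + dt * M              # Euler iteration matrix for this mode
norms = np.abs(np.linalg.eigvals(T))
print(np.all(norms <= 1.0 + 1e-9))  # → True
```

For this M, the eigenvalues of T are 0.646 ± 0.612i with norm about 0.89, so repeated application of T contracts toward the equilibrium, which is exactly what the corollary requires.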
A pair of parameters K = sqrt(2/n) and dt = 1/sqrt(n) (n is the problem size, i.e., the number of agents) satisfies the above condition, and in the experiments we will use these parameters to demonstrate the behavior of the proposed approach.
4. Experiments
In this section, we discuss the performance of the proposed method using experiments along several distinct dimensions: the selection of the parameters K and dt; the initial values; the dimensionality of the problem; and different weight matrices. To demonstrate the behavior and properties of the proposed method, we also conduct experiments on randomly generated weight matrices for different problem settings.
First look at a 2 x 2 weight matrix. We start from infeasible initial points. In Figure 2 (left), the parameters are chosen as K = 1 and dt = 0.1; in Figure 2 (right), we use the suggested parameters K = 1 and dt = 0.7071. We notice that the curves of x on the left of Figure 2 are much smoother than those on the right. Though the curves in the two settings have very different shapes, they all converge to the same equilibrium, i.e., the optimal solution of the corresponding MWM problem, while the parameter selection proposed in this paper yields a faster convergence speed. Now let us examine the influence of different initial values, using another weight matrix.
Figure 2. Curves of x with K=1, dt=0.1 (left) and K=1, dt=0.7071 (right), weights as specified above
Figure 3 shows the iteration process for a feasible starting point, and the iterations for an infeasible starting point. As the proposed NLD is independent of the initial values, the starting point only makes a difference in the beginning stage of the iterations. In both cases, K = 1 and dt = 0.7071. Since the difference between the optimal MWM solution and the second-best match is smaller for one of the weight matrices (0.01 versus 0.06), it takes more iterations to converge in that case.
Figure 3. Feasible and infeasible starting points for the weights above
Figure 4. Iterations of x (left) and the Lyapunov function (right) for a random weight matrix
We also test randomly generated weight matrices with each weight uniformly distributed in (0,1). Figure 4 shows the iteration processes of x and of the Lyapunov function, respectively, with the parameters K = 0.8165 and dt = 0.5774 for a random weight matrix. It is worth mentioning that the Lyapunov function decreases very slowly after the first few steps, once the ordering of the x values is determined.
Figure 5. Iterations of x for a random weight matrix
Figure 5 shows the converging process of x with respect to a randomly generated weight matrix. As stated in [3], the convergence is mainly decided by the difference between the optimal and second-best matches. Since we use a fixed step size, the convergence rate is slower than that of the LBP based algorithm, in which the nonlinear property of the message and belief computation plays a key role.
5. Conclusions
In this paper, we presented a simplified nonlinear dynamics consisting of two switching linear modes to solve the primal-dual form of a linear programming relaxation of the maximum weight matching problem. We proved that the proposed system converges to its equilibrium and that this equilibrium is a solution to the LP and its dual. A multi-agent approach to solving MWM is also presented, in which each participating agent is responsible for updating the assignment variables and the shadow price related to itself, as well as representing another shadow price associated with a task. By removing the need for task nodes in the interaction process, the MWM problem can be attacked in a distributed way, and the computation on the agents is easy to implement either in simple software or with low cost hardware. This result may be applied to problems involving the collaboration of a group of intelligent devices, such as wireless sensor networks. Future work will include improving the performance of the algorithm by reducing running time through time-varying steps, as well as extending the result to the case of noisy communication channels.
Acknowledgement
The work in this paper was partially supported by NSFC grant 61073095.
References
[1] S. Sanghavi, D.M. Malioutov and A.S. Willsky, Linear programming analysis of loopy belief propagation for weighted matching, NIPS, 2007.
[2] Y. Cheng, Iterative message passing algorithm for bipartite maximum weighted matching, ISIT 2006, Seattle, USA, July 9-14, 2006, pp. 1934-1938.
[3] M. Bayati, D. Shah and M. Sharma, Max-product for maximum weight matching: convergence, correctness, and LP duality, IEEE Transactions on Information Theory 54(3) (2008).
[4] S. Lauritzen, Graphical Models, Oxford Univ. Press, Oxford, U.K., 1996.
[5] Y. Weiss, Correctness of local probability propagation in graphical models with loops, Neural Computation 12 (2000), 1-42.
[6] X. Wu, Y. Xia, J. Li and W. Chen, A high performance neural network for solving linear and quadratic programming problems, IEEE Transactions on Neural Networks 7(3) (1996), 643-651.
[7] H. Ghasabi-Oskoei and N. Mahdavi-Amiri, An efficient simplified neural network for solving linear and quadratic programming problems, Applied Mathematics and Computation 175 (2006), 452-464.
[8] A. Malek and A. Yari, Primal-dual solution for the linear programming problems using neural networks, Applied Mathematics and Computation 167 (2005), 198-211.
[9] K.V. Nguyen, A nonlinear neural network for solving linear programming problems.
[10] Y. Xia et al., Novel recurrent neural network for solving nonlinear optimization problems, IEEE Transactions on Neural Networks 19(8) (2008).
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-172
Model of Product Definition for Meeting the RoHS Directive
José Altair RIBEIRO DOS SANTOS 1 and Milton BORSATO
Federal University of Technology - Parana, Av. Sete de Setembro 3165, Curitiba, PR 80230-901, Brazil
Abstract. The development of sustainable products is a fast-growing area due to current legislation, mainly in Europe and the United States. Electronic products have increasingly short life cycles, which imposes the need to exchange information in a more dynamic, accurate and timely fashion, in order to prevent failures in the survey, interpretation and application of information during product development and to ensure compliance with existing regulatory frameworks. Products need to be designed based on structured information. Capturing the desires and needs of customers and turning them into requirements and design details for the purpose of compliance with the RoHS regulatory framework is a complex task due to the multitude of information, not always correlated. Hypothetically, the use of a single definition model that captures a spatial, behavioral and procedural description of the product can help reduce the current waste of resources and improve quality assurance in the development process. The present work aims to develop a product definition model for meeting the RoHS directive. This is done by proposing a formal ontology that potentially promotes interoperability of the information systems used throughout the production chain in the electronics industry. The model is under development and is to be checked for suitability in the context of the Brazilian electronics industry.
Keywords. Ontology, RoHS, Knowledge Model
Introduction
The Integrated Manufacturing Technology Initiative (IMTI) defined environmental sustainability as one of the "Great Challenges" for the success of manufacturing in the 21st century [13]. Environmentally friendly products are more popular than ever, since consumers are more aware of future scenarios of scarce resources shared by an increasing world population [18]. In this scenario, companies create strategies not only to seize market opportunities but also to reduce production costs [12; 17]. In addition, regulatory agencies have established guidelines for hazardous emissions, product disposal and energy consumption [5]. Recent changes in environmental requirements and increasing legislative pressure have forced this industry segment to consider the integration of ecological concepts into design and product development, which certainly involves different actors in the supply chain [23]. On the other hand, multiple tools, distinct in their design, purpose, structure, application and properties, assist in the capture, processing and application of information throughout the life cycle of electronic products [19; 26]. Thus, the existing
Corresponding author. Tel.: +55-41-3268-3207; mobile: +55-41-9203-6202; e-mail:
[email protected].
J.A. Ribeiro dos Santos and M. Borsato / Model of Product Definition
173
challenge is to provide the information necessary to meet regulatory frameworks, such as RoHS, at the appropriate time, to the appropriate stakeholders, and in the most convenient way (i.e. following lean principles in product development). This can be achieved by developing information models that enable semantic interoperability [15; 21]. The present work is part of a framework project called the Intelligent Manufacturing Program. This program is based on ten so-called "Imperatives of Intelligent and Integrated Manufacturing", or demands, proposed by IMTI for the Model-based Manufacturing Enterprise. Demand number 3, which refers to the definition of intelligent product models, sets the challenges to create a complete, intelligent product definition model capable of driving all downstream applications [20]: "Imagine proceeding from wants and needs directly to requirements, to concepts, to designs and to an unambiguous, timeless, computer-sensible set of models that drive and support all stages of the product's lifecycle. The models capture the full spatial, behavioral, and process description of the product and represent the singular, authoritative source of truth for the product definition."
According to [20], the challenges of this demand are:
- developing new approaches and ideas for modeling, far beyond what has already been developed;
- reaching the large scope of work, in which all areas of knowledge and product types are included;
- thoroughly understanding the information needs in order to direct subsequent applications;
- disseminating abstractions and introducing intelligence in product models; and
- managing the relationships between models through the concept of integrated models.
Amongst the various areas of knowledge and information needs indicated by the challenges reported in [20] is the subject of sustainability and compliance with regulatory frameworks such as RoHS.
Companies have been compelled to assess relevant information on material use at the right moment and depth at certain stages of a product’s lifecycle. In other words, there is a growing need to create models that could provide information for preventing the misuse of hazardous materials in product specifications, avoiding the need for changes in the final stages of product design. Chandrasegaran et al. [6] propose a model that is suited for simulating both manufacturing and use of the product. It assumes the form of an ontology, built with semantic modeling techniques. Kim et al. [14] also present an ontology-based model, but for capturing and providing information regarding product assembly. The model allows information sharing, which can foster collaboration between design and manufacturing, focused on propagating constraints and specific requirements of assembly lines for the product design realm. Chen et al. [7] present a model for defining multi-level assemblies, which enables the transfer of information between different stages of design for manufacturing, using a top-down approach for capturing information at different levels of abstraction. The works cited above exemplify definition models, but do not reflect the needs and requirements of regulatory frameworks such as RoHS, which envision the creation of sustainable products. The purpose of the present work is, therefore, to propose a semantically rich model for product definition that could embed RoHS-related requirements, which may subsequently be a part of a larger model, capable of achieving any desired amplitude in the scope of the Intelligent Manufacturing Program.
This article is organized as follows. Section 1 presents the theoretical background. Section 2 explains how the current research is being conducted. Section 3 presents further details on the tool and language used in the project. Section 4 presents the expected results. Section 5 presents the preliminary results. Finally, Section 6 presents the final remarks.
1. Theoretical Background
Three topics are considered essential in the present investigation: product lifecycle metamodels, product definition models and RoHS, all of which are described in the present section.
1.1. Product Lifecycle Metamodels
According to Van Gigch [25], models are representations resulting from a process of converting our view of reality. Examples of models range from the floor plan of a residence to a flowchart that represents an algorithm, or a foam mock-up of a new type of vehicle. On the other hand, if each model followed distinct specifications, one would never be able to compare different models. Therefore, it is necessary to define the requirements that must be followed in the modeling process. Metamodels specify how specific models can be constructed. In other words, metamodels are models for modeling processes. Many authors have proposed product lifecycle metamodels, although not always under the concept cited above [1; 9; 24]. Most of them keep similar features, such as a phase-gate approach and deliverables at each gate. The present work has been based on the metamodel presented by Back et al. [1], not only because it provides the necessary framework for accommodating activities related to the elicitation of RoHS requirements and their application, but also because it has been largely used in other scientific works regarding the Brazilian industry. According to Back et al. [1], designing a sustainable product that meets the requirements of a regulatory framework, such as RoHS [4], requires a tool for capturing the needs of society (as a stakeholder), raised in the Informational Design phase, which will then be used in the subsequent phases, such as Conceptual, Preliminary and Detailed Design.
1.2. Product Definition Models
Conceiving an intelligent model for product definition means deploying a tool in a computational medium that supports all stages of the product lifecycle.
It links diverse perspectives of the product, such as those relating to manufacturing, functional descriptions and requirements for meeting regulatory demands. It is a specific model for representing a product, one which is able to bridge specifications raised in the Informational Design phase to geometric details that are to be used in the Detailed Design phase.
Figure 1 illustrates the interconnection between phases of a product lifecycle and the corresponding forms of knowledge representation, as proposed by Chandrasegaran et al. [6].
Figure 1. Knowledge representations in product design. Source: Chandrasegaran et al. [6]
According to Borsato [2], "product lifecycle management strategies require that all phases be integrated by means of seamless, reliable and relevant information exchange. However, islands of information still persist, for information systems that are used throughout the entire cycle have not been developed to allow semantic interpretation of data, which invariably leads to great losses due to data replication and ambiguity issues". Semantic interoperability is the ability of two or more heterogeneous and distributed systems to work together, sharing information between them with a common understanding of its meaning. A semantic model of product definition based on ontologies is one of the most promising possibilities for ensuring semantic interoperability of software applications. This is because it is able to express explicit and implicit information in a structured way, in addition to providing a common vocabulary with well-defined semantics [10]. Chungoora and Young [8] argue that an effective way to model intelligent product definition is to use a common logic based ontology, because it provides knowledge sharing through the use of a semantic core structure along with a set of syntactic forms, seeking to capture concepts related to a product's lifecycle.
1.3. RoHS Directive
The RoHS directive restricts the use of hazardous substances in electronic devices. Amongst these substances are cadmium, mercury, lead, hexavalent chromium and flame retardants such as polybrominated biphenyls and polybrominated diphenyl ethers (PBB and PBDE). Manufacturers of electronic products and their supply chains should have these substances monitored in order to receive a certificate of conformity, which enables them to enter the markets of 25 EU countries and several U.S. states [4]. Many works in the scientific literature highlight the importance of projects that aim at sustainability.
J.A. Ribeiro dos Santos and M. Borsato / Model of Product Definition

In the broad picture, they confirm the need for developing models that enable the incorporation of regulatory requirements into the main form of product representation, for creating more sustainable products. Therefore, a product definition model, in the form of an ontology, for developing electronic devices that meet the RoHS Directive can assist businesses, ultimately aiming to provide the developer with the necessary information at the right time and in the most convenient way, thus being in agreement with the principles of Lean Thinking [16].
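To make the kind of support described above concrete, the following minimal sketch (not part of the authors' model; substance names and the bill-of-materials dict are illustrative) checks a homogeneous material's composition against the RoHS 2 (2011/65/EU) maximum concentration values, which are 0.1% by weight (1000 ppm) for all restricted substances except cadmium, which is limited to 0.01% (100 ppm):

```python
# RoHS 2 maximum concentration values, in parts per million by weight,
# per homogeneous material (1000 ppm = 0.1%; cadmium is 100 ppm = 0.01%).
ROHS_LIMITS_PPM = {
    "lead": 1000,
    "mercury": 1000,
    "cadmium": 100,
    "hexavalent chromium": 1000,
    "PBB": 1000,
    "PBDE": 1000,
}

def rohs_violations(composition_ppm):
    """Return the substances whose measured concentration exceeds the limit."""
    return {
        substance: ppm
        for substance, ppm in composition_ppm.items()
        if ppm > ROHS_LIMITS_PPM.get(substance, float("inf"))
    }

# Hypothetical leaded solder: lead far above the limit, cadmium below it.
solder = {"lead": 35000, "cadmium": 20}
print(rohs_violations(solder))  # {'lead': 35000}
```

In a full ontology-based model, such a check would be expressed as class restrictions rather than a lookup table, but the compliance logic is the same.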
2. Methodological Aspects

Model construction was based on the synthesis of two methods for creating ontologies, KACTUS and Uschold & Gruninger, demonstrated in Santos [22]. The present work has been planned to be carried out in the form of work packages (WPs), each with a number of activities and their corresponding deliverables. They are described in the following sections.

2.1. WP1.0 – Survey of State-of-the-Art

WP1.0 deals with the status of up-to-date and relevant scientific research. It is divided into three sub-packages: WP1.1 – Product Representations; WP1.2 – RoHS Directive; and WP1.3 – Semantic Interoperability.

2.1.1. WP1.1 – Product Representations

This sub-package is further divided into 4 steps:
- WP1.1.1 – Lifecycle stage criterion: search for suitable forms of representation according to the lifecycle of electronic products; diagrams that contain the raw materials and processes used throughout a product’s lifecycle, per stage, will be delivered;
- WP1.1.2 – Knowledge area criterion: search for suitable forms of representation according to the knowledge areas involved in the design of electronic products; a block diagram relating each knowledge area to the various product representation forms will be delivered;
- WP1.1.3 – Purpose criterion: search for suitable forms of representation according to the functions encountered in electronic products, and how functional models fit together;
- WP1.1.4 – Technology criterion: search for suitable forms of representation according to the technology employed in the design of electronic devices; diagrams that relate technologies, tools and machinery will be delivered; fabrication technologies will also be researched.

2.1.2. WP1.2 – RoHS Directive

This sub-package is further divided into 3 steps:
- WP1.2.1 – Features: the 2011/65/EU directive will be examined in the Official Journal of the European Union, and a structured list of RoHS features will be presented as a deliverable;
- WP1.2.2 – Methods and assessment tools: currently used procedures will be surveyed, seeking to elicit the tools used for measuring the presence of hazardous substances;
- WP1.2.3 – Certification processes: currently used certification procedures will be sought in reference laboratories.
2.1.3. WP1.3 – Semantic Interoperability

This sub-package is further divided into 2 steps:
- WP1.3.1 – Existing solutions: a search for solutions to the problem of semantic interoperability will be carried out; a structured list will be delivered;
- WP1.3.2 – Implementation strategies: a search for strategies will be carried out; findings will be delivered.

2.2. WP2.0 – Model Construction

This work package deals with the construction of the definition model in the form of an ontology. It is divided into two sub-packages: WP2.1 – Preliminary Model and WP2.2 – Detailed Model.

2.2.1. WP2.1 – Preliminary Model

This sub-package is further divided into 2 steps:
- WP2.1.1 – Determination of requirements: requirements will be derived from the results of WP1.0;
- WP2.1.2 – Definition of architecture and modularization: a high-level ontology will be defined and reusable components (modules) will be determined.

2.2.2. WP2.2 – Detailed Model

This sub-package is further divided into 2 steps:
- WP2.2.1 – Detailing of modules: the modules of the ontology will be built by detailing classes, subclasses, attributes, properties and instances;
- WP2.2.2 – Integration of modules: ontology modules will be assembled and possible ambiguities will be resolved.

2.3. WP3.0 – Model Validation

This work package deals with the procedures to be used to validate the ontology and its components. It is divided into two sub-packages: WP3.1 – Definition of Proof of Concept; and WP3.2 – Application of the Model.

2.3.1. WP3.1 – Definition of Proof of Concept

This sub-package is further divided into 3 steps:
- WP3.1.1 – Determination of selection criteria: the criteria for choosing a case study will be delivered;
- WP3.1.2 – Choice of test case: the product for testing the proposed ontology will be selected based on the criteria raised in the previous package;
- WP3.1.3 – Formulation of proof of concept: a number of scenarios and potential questions that the ontology should answer will be raised.
2.3.2. WP3.2 – Application of the Model

This sub-package is further divided into 4 steps:
- WP3.2.1 – Preparation for model implementation: the data for populating the model (i.e. instances) will be prepared by means of queries;
- WP3.2.2 – Test execution: queries will be processed and results registered;
- WP3.2.3 – Gathering of results: the answers to the set of created queries will generate a final report;
- WP3.2.4 – Analysis of results: test results will be compiled to lead to potential findings, such as the efficiency of the product definition model.
3. Tool and Language

The chosen tool for editing ontologies is Protégé version 4.3 [11]. Protégé is a software tool used by an active community, spanning academic research and industrial projects in over 100 countries. Protégé is based on Java, with a development environment that provides flexibility and consistency for rapid prototyping and application development [3]. Built-in reasoners (e.g. FaCT++) will be used as well. The chosen language is OWL, which is based on the Extensible Markup Language and RDF. OWL is a widely accepted way to represent domain knowledge, and employs the concepts of classes, objects and properties to create restrictions and axioms [3].
4. Expected Results

How can RoHS requirements be incorporated into a product definition model? How can such a model be used throughout a product’s lifecycle? The answers to these questions are expected to be delivered. For that purpose, a set of classes and properties that can semantically describe concepts and their relationships will be produced. Object properties will be developed hierarchically, to establish hierarchical, topological or taxonomic relationships. The arrangement of properties and restrictions for each class and the construction of the necessary axioms are under development. Once defined, the ontology is to be validated by means of an application example.
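The kind of competency question the ontology is expected to answer can be illustrated with a tiny triple-style sketch. All class, property and individual names below (SolderPaste, contains, isRestrictedBy, etc.) are hypothetical stand-ins, not the authors' final vocabulary:

```python
# A minimal in-memory set of (subject, predicate, object) assertions,
# mimicking how OWL classes and object properties link concepts.
triples = {
    ("SolderPaste", "isA", "Material"),
    ("Lead", "isA", "HazardousSubstance"),
    ("SolderPaste", "contains", "Lead"),
    ("Lead", "isRestrictedBy", "RohsDirective"),
}

def objects(subject, predicate):
    """All objects o such that (subject, predicate, o) is asserted."""
    return {o for s, p, o in triples if s == subject and p == predicate}

# Competency question: which RoHS-restricted substances does SolderPaste contain?
restricted = {
    o for o in objects("SolderPaste", "contains")
    if "RohsDirective" in objects(o, "isRestrictedBy")
}
print(restricted)  # {'Lead'}
```

In OWL such a question would be posed as a query over classes and object properties (e.g. in Protégé); the sketch only shows the shape of the traversal.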
5. Preliminary Results

After completing the state-of-the-art survey phase, scenarios that simulate the application were built. Through these scenarios, terms relevant to the application were identified, and from them a list of classes that will compose the reference ontology was created. Two superclasses, DomainConcept and ValuePartition, were created, and components already used in the work of Borsato [2] were evaluated.
Table 1 shows the two partitions created to subdivide the categories of components, along with the classes created and reused.

Table 1. Partitions, new classes and reused classes.

Partition – DomainConcept: this partition contains the classes which correspond to the domain concepts that the ontology is intended to represent.
- New classes: ElectroEletronicProduct, ElectroEletronicComponents, Tecnology, MecanicalComponents, RohsDirective, Customers, Designer, LifeCycle, KnowledgeAreas, Industry
- Reused classes: Artifact, AssemblyUnitProcess, UnitOfMeasure, Energy, Feature, ManufacturingUnitProcess, Material (HazardousSubstances), Property, Resource, SupplyChainOperationProcess

Partition – ValuePartition: this partition contains classes related to DomainConcept classes through object properties. ValuePartition classes are associated with a restricted set of individuals.

The classes are divided into subclasses with key terms found in the theoretical framework. Figure 2 shows an example of two new subclasses created with the Protégé software: the RohsDirective class and the HazardousSubstance class (a subclass of the reused class Materials).
Figure 2. Class RohsDirective and class HazardousSubstance Source: Author.
The RohsDirective and HazardousSubstance classes are related through the following axiom: an individual of the class HazardousSubstance is related to an individual of the class RohsDirective by the property isHazardousSubstanceAccordingForRohs. Work is underway, with completion scheduled for August 2014. The objective is, by that date, to have defined all data and object properties and axioms, and to have performed tests on a real product by creating queries in the Protégé software.
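The axiom above can be mirrored in plain Python to show its shape (the real version is an OWL object property edited in Protégé; the individuals below are invented for illustration):

```python
# Plain-Python mirror of the axiom: every individual of HazardousSubstance
# is linked to an individual of RohsDirective through the object property
# isHazardousSubstanceAccordingForRohs.
class RohsDirective:
    def __init__(self, name):
        self.name = name

class HazardousSubstance:
    def __init__(self, name, directive):
        self.name = name
        # the object property, enforced by construction
        self.isHazardousSubstanceAccordingForRohs = directive

rohs2 = RohsDirective("Directive 2011/65/EU")
lead = HazardousSubstance("Lead", rohs2)
assert isinstance(lead.isHazardousSubstanceAccordingForRohs, RohsDirective)
```

In OWL the same constraint would be stated declaratively, with the property's domain set to HazardousSubstance and its range to RohsDirective, so a reasoner can flag violations.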
6. Final Remarks

The area of electronics is of extreme importance for reducing the environmental impact of human activities. An environmentally friendly industry can set new standards for the conscious use of raw materials and natural resources. The imperative need to treat solid waste and minimize the application of hazardous substances in products justifies the creation of a product definition model that integrates RoHS-targeted information. The proposed model, in the form of an OWL ontology, will add scientific knowledge for current researchers in the area. Future generations of computer-aided tools may be developed with an integration mindset that incorporates semantic modeling. The concepts are of immediate value to industry, which needs to be well supported to meet European directives. The present work is under development. Once completed, it will provide useful tools to be tested against real products in the electronics industry.
References
[1] N. Back, A. Ogliari, A. Dias, and J.C.d. Silva, Projeto integrado de produtos: Planejamento, concepção e modelagem (2008).
[2] M. Borsato, Bridging the gap between product lifecycle management and sustainability in manufacturing through ontology building, Computers in Industry 65 (2014), 258-269.
[3] M. Borsato, C.C.A. Estorilio, C. Cziulik, and C.M.L. Ugaya, An ontology building approach for knowledge sharing in product lifecycle management, International Journal of Business and Systems Research 4 (2010), 278-292.
[4] L.A. Cairns, Ensuring RoHS 2 success with agility, Solid State Technol. 56 (2013), 33-33.
[5] A. Chaker, K. El-Fadl, L. Chamas, and B. Hatjian, A review of strategic environmental assessment in 12 selected countries, Environmental Impact Assessment Review 26 (2006), 15-56.
[6] S.K. Chandrasegaran, K. Ramani, R.D. Sriram, I. Horváth, A. Bernard, R.F. Harik, and W. Gao, The evolution, challenges, and future of knowledge representation in product design systems, Computer-Aided Design 45 (2013), 204-228.
[7] X. Chen, S. Gao, Y. Yang, and S. Zhang, Multi-level assembly model for top-down design of mechanical products, Computer-Aided Design 44 (2012), 1033-1048.
[8] N. Chungoora and R.I.M. Young, The configuration of design and manufacture knowledge models from a heavyweight ontological foundation, International Journal of Production Research 49 (2011), 4701-4725.
[9] R.G. Cooper, Stage-gate systems: a new tool for managing new products, Business Horizons 33 (1990), 44-54.
[10] F. Fonseca, M. Egenhofer, and K.A. Borges, Ontologias e interoperabilidade semântica entre SIGs, in: II Workshop Brasileiro em Geoinformática – GeoInfo2000, Proceedings, São Paulo, 2000.
[11] J.H. Gennari, M.A. Musen, R.W. Fergerson, W.E. Grosso, M. Crubézy, H. Eriksson, N.F. Noy, and S.W. Tu, The evolution of Protégé: an environment for knowledge-based systems development, International Journal of Human-Computer Studies 58 (2003), 89-123.
[12] T. Gutowski, C. Murphy, D. Allen, D. Bauer, B. Bras, T. Piwonka, P. Sheng, J. Sutherland, D. Thurston, and E. Wolff, Environmentally benign manufacturing: Observations from Japan, Europe and the United States, Journal of Cleaner Production 13 (2005), 1-17.
[13] IMTI, Manufacturing Success in the 21st Century: A Strategic View, Oak Ridge, Tennessee: IMTI, Inc., 2000.
[14] K.-Y. Kim, D.G. Manley, and H. Yang, Ontology-based assembly design and information sharing for collaborative product development, Computer-Aided Design 38 (2006), 1233-1250.
[15] A.F.A. Lopes, F.J.M. Couto, and M.J.G.d. Silva, A tool for ontology instance matching, Mestrado em Informática, Universidade de Lisboa, Lisboa, PT, 2013, p. 90.
[16] J.M. Morgan and J.K. Liker, The Toyota Product Development System: Integrating People, Process, and Technology, Charlotte, NC: B&T (2006).
[17] R. Nidumolu, C.K. Prahalad, and M. Rangaswami, Why sustainability is now the key driver of innovation, Harvard Business Review 87 (2009), 56-64.
[18] K. Peattie, Green consumption: behavior and norms, in: Annual Review of Environment and Resources, Vol. 35, A. Gadgil and D.M. Liverman, eds., Annual Reviews, Palo Alto, 2010, pp. 195-228.
[19] M.C. Pedroso and R. Zwicker, Product information management: basis for relationships in the supply chain / Gestão da informação de produtos: base para os relacionamentos na cadeia de suprimentos, Journal of Information Systems & Technology Management 5 (2008), 109+.
[20] V. Quintana, L. Rivest, R. Pellerin, F. Venne, and F. Kheddouci, Will Model-based Definition replace engineering drawings throughout the product lifecycle? A global perspective from aerospace industry, Computers in Industry 61 (2010), 497-508.
[21] M. Rio, T. Reyes, and L. Roucoules, Toward proactive (eco)design process: modeling information transformations among designers activities, Journal of Cleaner Production 39 (2012), 105-116.
[22] K.C.P.d. Santos, Utilização de ontologias de referência como abordagem para interoperabilidade entre sistemas de informação utilizados ao longo do ciclo de vida de produtos, Mestrado em Engenharia Mecânica Dissertação, Universidade Tecnológica Federal do Paraná, 2011.
[23] V.D.R.N. Scandelari and J.C. da Cunha, Ambidextrality and the socioenvironmental performance of companies in the electro-electronic sector, RAE 53 (2013), 183.
[24] G. Schuh, H. Rozenfeld, D. Assmus, and E. Zancul, Process oriented framework to support PLM implementation, Computers in Industry 59 (2008), 210-218.
[25] J.P. van Gigch, System Design Modeling and Metamodeling, Springer, 1991.
[26] H. Wang, A.L. Johnson, and R.H. Bracewell, The retrieval of structured design rationale for the re-use of design knowledge with an integrated representation, Advanced Engineering Informatics 26 (2012), 251-266.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-182
Application of Knowledge-based Engineering in the Automobile Panel Die Design

Jiafu Wen a, Wei Guo a and Zhenhai Wang b,1
a School of Mechanical Engineering, Tianjin Key Laboratory of Advanced Manufacturing Technology and Equipment, Tianjin University
b College of Management and Economics, Tianjin University, Tianjin, China
Abstract. Automobile panel dies are typically customized parts in the car industry, requiring many designers and much engineering knowledge to meet changing requirements. Meanwhile, industrial companies are constantly being exhorted to become more competitive by reducing the lead time and costs of their products in order to survive. KBE is a relatively young technology offering a competitive advantage for design applications by drastically reducing lead time. This paper addresses the structure of an automobile panel die design system based on KBE technology in detail. The system operates by creating a unified knowledge base that stores, through coding, the knowledge scattered across the design process, drawn from experts and from engineering knowledge resources such as technical literature. It helps designers make appropriate decisions by supplying necessary information at the right time through a query and inference engine that represents the knowledge within the KBE application framework. We mainly use RBR and CBR retrieval mechanisms to reach the best solutions. An example is given to illustrate the operation process and prove its validity.

Keywords. Artificial intelligence, Knowledge-based engineering, Knowledge acquisition, Reasoning mechanism
Introduction

Sheet-metal parts with freeform geometries (e.g. hood panels and fender panels) have many advantages, such as little material waste and high productivity, and are used to replace expensive forged products in mass production in the automobile industry. Meanwhile, increasing competition and customization demands are forcing automobile companies to search for means to decrease time and costs while keeping quality high. People have also gradually come to realize that the design process is the most significant factor determining the cost, quality, and life cycle of a product, since decisions made continuously in the design stage influence the downstream processes throughout the product life cycle. This compels us to pay much more attention to the design stage, to make correct decisions and avoid conflicts and errors. In this situation, it is necessary to constitute a system which can utilize requirements, experience, expertise and rules to provide appropriate suggestions.

1 Corresponding Author.

J. Wen et al. / Application of Knowledge-Based Engineering

In the design domain, KBE is the most common way to support customization and automotive design, shortening the lead time while also improving quality and profit. The design of automobile panel dies is a task normally carried out by structure and process designers, and it is time-consuming and tedious. KBE has strong roots in the field of artificial intelligence (AI), particularly in the knowledge-based systems (KBSs) technology of the 1970s. KBE is a technology based on the use of dedicated software tools called KBE systems, which are able to capture and systematically reuse product and process engineering knowledge, with the final goal of reducing the time and costs of product development by means of the following: (a) automation of repetitive and non-creative design tasks; (b) support of multidisciplinary design optimization in all phases of the design process [1]. Adopting KBE can be very beneficial for increasing the ability to innovate and getting products to market faster, with fewer errors and lower final cost. As the definition of KBE states, one of the hallmarks of the KBE approach is to automate repetitive, non-creative design tasks [2]. Baxter et al. note that around 20% of a designer’s time is spent searching for and absorbing information; furthermore, about 40% of all design information requirements are currently met by personal stores, even though more suitable information may be available from other sources [3]. This implies that it is important for the knowledge base to be easily accessible and sharable among designers. Nowadays, many researchers have applied KBE technology to develop products more effectively. There have been many examples of KBE being successfully used to help design products that require knowledge from various activities during the whole design process.
For example, Chapman and Pinfold describe a knowledge-based engineering system to extend the capabilities of BIW (body-in-white) engineers. The system allows the output to respond dynamically to changes within a rapid timeframe and to assess the effects of change with respect to the imposed constraints, by creating a unified model description that queries rules [4]. H.Z. Yang has presented a KBE methodology for ship structural member design; it achieves knowledge reuse and accumulation, and provides reliable technical support for ship design quality [5]. Textron Aerostructures announced the deployment of a tooling design application that delivered a 73% reduction in design time. Large automotive and aerospace companies, such as Boeing, British Aerospace, Jaguar and General Motors, have brought in knowledge-based engineering technology and achieved great results; for example, Jaguar reduced the time to design an inner bonnet from eight weeks to 20 minutes by developing a knowledge-based engineering system, whilst Boeing has published that approximately 20,000 parts for the 777 aircraft have been designed using this technology [6]. Considering the KBE applications mentioned above, it is meaningful to apply the KBE methodology to automobile die design.

1. Framework of knowledge-based engineering

Knowledge-based engineering supplies us with a method to integrate mature design experience, design parameters based on experimental data, material testing, user feedback, and relevant design standards and norms into the system through logical judgment and deduction, achieving intelligent product design. In this paper, the knowledge relevant to the design process is stored, especially knowledge about structure design. The reasoning mechanism is then applied to obtain the most appropriate solutions or calculation results for the designers. The framework of the KBE system is shown in Fig.1.

Fig.1. The KBE design framework of the automobile panel die

2. Knowledge acquisition of the automobile die design

The first step in applying the KBE methodology is capturing the field knowledge from diverse resources. Knowledge acquisition is the activity of capturing expertise from people (and other sources of knowledge) and creating a computerized store of this knowledge to be used to help an organization in some specified way [7]. The resources of design knowledge include documents, books, websites, experienced designers, experts, databases, experiments, successful precedents, design drawings and even feedback information from manufacturers and sellers. Because humans are unable to directly extract knowledge from data, the process must use a data mining technique to abstract information from data. In other words, we compile and analyze the design tracing data, and form relations between the modification statistics and their causes. Many development methodologies have been suggested for the KBS domain, such as KADS [8-9]. These methodologies are aimed at assisting the developer in defining and modeling the problem in question. The basis of knowledge representation in a computer is the organization and storage of knowledge, which an expert system then uses to solve a problem [10]. It is the process of managing knowledge by machine-interpretable methods, utilizing facts and the relationships between them. Generally, which method should be chosen for knowledge representation depends on the type of knowledge and its operation mechanism.
The knowledge used in design can be simply divided into two kinds: explicit knowledge and tacit knowledge. The first category is visible, written, transferable, sharable, and reusable; it is usually documented, stored and transmitted externally to the human brain. The second category includes procedural knowledge, or best practices, which are usually implicit and context sensitive. This kind of knowledge is related to processes, methods and practices in groups and professions. It needs to be identified, captured, and made explicit to allow it to be shared; however, it is not always well documented [10]. Considerable benefits can be gained if this type of knowledge is captured and made explicit. For different types of knowledge, we apply diverse methods to take advantage of them in addressing complicated and difficult problems. In this paper, we mainly use an object-oriented hierarchical database to define the system, which is utilized for panel die components and production rules [11]. Tacit knowledge is compiled and stored in the case bases in a unified format, so that the knowledge contained in successful precedents can be retrieved, shared and reused through the CBR mechanism. In this paper, the knowledge comes mainly from two sources: the first is interviews with experts; the second is established engineering resources, such as engineering texts, handbooks, literature, engineering databases, etc. Compared with the first method, the second ensures that the knowledge is more consistent, more objective, and of better quality, because all knowledge within these resources has been validated by practice. We collect data from the vast sources, distill the information embedded in it, and then raise it to the knowledge level, which is a complex and time-consuming process. Modeling and elicitation software tools like PCPACK can make the KA process more effective [12].
The steps of knowledge acquisition can be described as in Fig. 2.
Fig.2. The steps of knowledge acquisition
3. Knowledge representation and knowledge coding

Knowledge coding makes the saving, retrieval and re-application of enterprise knowledge more convenient and effective. The general practice is to transform knowledge into explicit, portable, understandable and organized codes. The generally accepted principles of knowledge coding include: (1) determination of the target; (2) identification of different forms of knowledge to reach specific goals; (3) selection of suitable codes and useful knowledge after coding; and (4) use of suitable media to carry out coding and transmit knowledge, to facilitate sharing and application. Since there is so much related engineering knowledge, not all of it needs to be coded. Hence, in the strategy of code selection, relevance is generally more important than completeness. General codes are normally divided into large categories, medium categories, small categories, sub-categories, sub-items, etc. In dealing with knowledge frameworks, this study considers the interrelationship between shape and different structures, and establishes codes of different attribute knowledge for components. In order to classify the mass of knowledge scattered across different resources and ensure it can be easily retrieved, we use a hierarchy tree to code the knowledge. The first layer classifies the knowledge according to the components of the car series. The second layer matches the knowledge with the different design stages. The third layer covers the individual parts and auxiliary devices in detail. The architecture of the knowledge organization is shown in Fig.3 and the hierarchy tree is shown in Fig.4.
Fig.3. The architecture of knowledge organization
Fig.4. The hierarchy knowledge tree of the automobile panel dies
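The three-layer hierarchy tree can be mirrored as a nested mapping, with the layered attribute codes serving as retrieval keys. The following sketch is illustrative only: the keys and knowledge-item names are hypothetical, not the system's actual knowledge base.

```python
# Layer 1: car series component; layer 2: design stage; layer 3: part or
# auxiliary device. Leaf values name the stored knowledge items.
knowledge_tree = {
    "GB1E1": {                       # hypothetical layer-1 code
        "CS1": {                     # structure design, concept stage
            "punch": ["rule-017", "case-102"],
            "die holder": ["rule-031"],
        },
        "DS2": {                     # simulation analysis, detail stage
            "blank holder": ["case-077"],
        },
    },
}

def lookup(component, stage, part):
    """Retrieve the knowledge items filed under a three-layer code path."""
    return knowledge_tree.get(component, {}).get(stage, {}).get(part, [])

print(lookup("GB1E1", "CS1", "punch"))  # ['rule-017', 'case-102']
print(lookup("GB1E1", "DS2", "punch"))  # []
```

A real implementation would back this tree with the object-oriented hierarchical database mentioned earlier, but the layered lookup path is the same.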
Among them, the attribute codes of the first layer are described by three letters and two numbers, explained as follows (the numbers are used to distinguish entries coded with the same letters):

GB1E1: die design knowledge about the engine external panel of the German car brand Benz.
JT1T1: die design knowledge about the top panel of the Japanese automobile series Toyota.
UF1W1: die design knowledge about the wheel cowling of the US automobile series Ford.

The attribute codes of the second layer are described by two letters and one number, explained as follows. In this paper, we focus on structure design within automobile die design.

CS1: die design knowledge of structure design in the concept design stage.
DS2: die design knowledge of simulation analysis in the detail design stage.

4. Reasoning processes and implementation

Knowledge reasoning is the thinking process of solving unknown problems by deducing judgments from known knowledge and facts. The reasoning methods mainly include rule-based reasoning (RBR) and case-based reasoning (CBR). The RBR method is used for specific parameters and rules based on mature theory and designer experience. The CBR method is used for designing parts and components similar to precedents, based on product knowledge templates and successful practice cases. This paper uses both RBR and CBR to retrieve the needed knowledge. For explicit knowledge, RBR is the best practice; for tacit knowledge, CBR can be used to help find appropriate cases. This paper applies the CBR methodology to retrieve suitable parts in the case library, along with their process plans, operation regulations and die design rules. The basic concept of CBR is that, in the problem solving process, when decision-makers are confronted with new problems, they can comprehensively use past experience and previously proposed modes, and employ similarity for validation, to help find solutions. In other words, focusing on past cases, further revision can be made to smoothly apply existing decision-making behavior.
Case-based expression is mainly divided into two categories: description of the problem and storage of results. Data retrieval is generally carried out through the answering of questions. Hence, case-based reasoning extensively compares the similarities between existing questions and past cases, and takes this comparison as its foundation. This method can greatly reduce the bottleneck generally faced by artificial intelligence technology when retrieving knowledge. Since AI technology is rather slow in collecting related cases, induction has to be done first, and then similarity is used for inference. The combined cases need not undergo the calculation of section properties; finding the solution directly by using the regressed expression enables faster calculation and relatively high engineering practicability. The reasoning mechanism is also used to inspect the input data for the design process. We process the required design information in the system and then give supporting decisions, documents and models to the designer according to the reasoning rules. In this paper, the rule base mainly includes the relationships between parameters and If-Then rules. The rules, laws, and formulas are all basic elements that should be collected to constitute the inference engine.
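The If-Then portion of the inference engine can be sketched as a minimal forward-chaining rule base. The rule contents below are invented for illustration; the real system's rules come from handbooks and expert interviews:

```python
# Each rule pairs a condition over the design facts with a conclusion.
rules = [
    (lambda f: f["sheet_thickness_mm"] < 0.8, ("process", "add draw bead")),
    (lambda f: f["panel_area_m2"] > 1.5, ("structure", "use double guide posts")),
]

def infer(facts):
    """One forward-chaining pass: collect conclusions of all firing rules."""
    return [conclusion for condition, conclusion in rules if condition(facts)]

facts = {"sheet_thickness_mm": 0.7, "panel_area_m2": 2.0}
print(infer(facts))
# [('process', 'add draw bead'), ('structure', 'use double guide posts')]
```

A production inference engine would also chain conclusions back into the fact base and resolve rule conflicts; this sketch shows only the rule-matching core.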
Fig.5. The retrieving mechanism of CBR methodology
The knowledge framework concept is combined with agents and applied to the multi-hierarchy engineering knowledge coding system. In this way, the procedures can be simplified, and codes can be rapidly combined to establish and analyze selected cases, achieving the effects of labor division, integration and knowledge accumulation. The CBR mechanism is described in Fig.5. If two cases have many common features, their similarity factor is high; conversely, the similarity factor is low when few features are shared. The retrieval operation compares the query case with all cases in the case library and calculates a similarity factor for each pair of comparisons. Based on the comparison results, the case with the highest factor is the most similar case. As the query case and the selected case share many features, the process plans and die designs of the selected case can be reused in the new design. With CBR, the KBE system has advantages for assisting enterprises in preserving tacit knowledge of stamped parts. Fig.6 shows an example of a similarity measurement result determined through the algorithm in Equation (1), where n is the number of features attached to the parts, Ki is the similarity factor between Part-i and Part-j, and Wfi is the weighting factor. Through the weighting ratio and similarity sequence analysis, similar cases can be found.
Sim = Σ (i = 1 to n) Ki · Wfi    (1)

Fig.6. Illustration of similarities between each pair of three models
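The weighted similarity of Eq. (1) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the feature names and weights below are invented, and Ki is taken as 1.0 when a feature matches between the two parts and 0.0 otherwise.

```python
def similarity(query, case, weights):
    """Sim = sum of Ki * Wfi over the n weighted features.

    query, case: dicts mapping feature name -> feature value.
    weights:     dict mapping feature name -> weighting factor Wfi.
    """
    return sum(
        wf * (1.0 if query.get(f) == case.get(f) else 0.0)
        for f, wf in weights.items()
    )

def retrieve(query, case_library, weights):
    """Return the case with the highest similarity factor."""
    return max(case_library, key=lambda c: similarity(query, c, weights))
```

Running `retrieve` over the case library then yields the most similar past case, whose process plans and die designs can be reused.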
5. Interface of the KBE system and knowledge-base

It is very difficult to extract expertise from experts and to construct a knowledge-base from the extracted knowledge. In this paper, the design knowledge is extracted from regulation books and existing regulation-based design programs as well as from human experts. The first step in bringing KBE technology into automobile die design is to acquire and edit the knowledge needed, by communicating with the people involved in the product life cycle and editing their knowledge into a common solution through negotiation and compromise. Sometimes, knowledge has to be developed rather than elicited through interviews. Since some knowledge is beyond the reach of human experts, developers need to set up empirical steps to capture the system behaviour of the design process: for example, repeatedly running the design analysis and objective-function assessment program, changing the input parameters and observing the changes in the outputs. The rules captured in this way can form guidelines for adjusting the design modules [13]. The knowledge base is the most critical component of the whole system and is defined as a collection of experience, rules, cases, geometry parameters and other knowledge. In the design process of an automobile panel die, knowledge including expert knowledge, expertise, experience, successful cases, and product design standards is collected, distilled and summarized into rules, formulas, and strategies by the knowledge engineers. All the useful knowledge is transferred to documents or databases to constitute different types of knowledge bases, which support storage, classification, validation, retrieval and update management of the knowledge. The interface of the database and the relationships between the knowledge bases are described in Fig. 7. Other parts of the whole system will be presented in other articles.
Fig.7. The interface of the KBE system and the knowledge base
In this paper, the knowledge is classified and stored based on the structure of the automobile die and the design process to form a knowledge tree, which is shown in Fig. 4. Thus, the knowledge bases can be easily expanded and modified to keep the knowledge up to date, and designers can obtain the latest information and reuse it conveniently. Because knowledge from different sources belongs to different types, including documents, formulas, reports etc., it is necessary to choose several formats to
represent different types of knowledge. The design rules are elicited into formulas and reused by calculation modules. The successful cases and analysis reports are compiled in several formats (.txt, .doc and .xls) and stored in the bases for designers to retrieve for reference. This study proposes using the target structure dimensions to be tested, together with the corresponding parameter weights, for the calculation of similarity. By using Eq. (1), the results can be acquired quickly. After the calculated result is verified or revised, it becomes an effective case again, and is stored in the experienced-case knowledge base for users to subsequently retrieve, analyze, validate and use. In this way, codes can be rapidly combined to establish and analyze selected structure cases, and the entire categorization hierarchy can be simplified. Each hierarchy of codes need not enter the detail catalog again, so the codes do not become too long. This also achieves the effects of labor division and integration. The system directly searches, validates and analyzes, and circularly builds up the case-based knowledge and the engineering knowledge application framework. As more cases accumulate, the analytical results become more accurate. Therefore, fast interpretation can be achieved, decreasing the amount of time spent on searching and on the design analysis of similar structures.

6. Conclusions and summary

An effective decision support system is essential to provide workers with the information necessary to identify the causes of a problem and take appropriate action to solve it. This study uses an overall knowledge framework, combining agents and coding to carry out engineering knowledge framework categorization for a wheel-cowling structure. Based on engineering knowledge and important parameters, this study integrates engineering knowledge coding and case-based similarity analysis.
Along with the operation of the different knowledge hierarchies, the established engineering knowledge coding of the wheel-cowling structure and the case-based similarity inference method combine engineering knowledge coding with case-based similarity analysis to carry out searching and inference. This method can serve as the foundation for extended application to other kinds of structures. The case study presented in this paper is an attempt to solve some of these problems by transforming knowledge acquisition into a multi-level information sharing system that directly benefits all its users. One concern is to provide it with the ability to communicate efficiently and transparently during the various design stages. This system not only saves design time drastically by controlling the workflow and applying the KBE methodology, but can also provide help, analysis suggestions and explanations to the engineer on demand through case similarity, so that designers can be well trained and make proper decisions quickly. By using the KBE system, the calculation and analysis department can respond much more quickly to design scheme changes imposed by other departments. It also makes the investigation of new parameters for optimizing the design much easier, and the results quicker to judge. Whilst the system is designed specifically for the automobile die, the methodology used is generic and can be applied to other products where scattered knowledge can be reused effectively. To stay competitive and win contracts in the automobile die manufacturing market, products need to be designed in a smart way. The presented work
contributes to the methodology of how KBE can be used in the design process of automobile panel dies.

Acknowledgements

This work was undertaken as part of a sponsored research program under the National High Technology Research and Development Program of China (Grant No. 2013AA040605), and supported by the National Science and Technology Supporting Program (Grant No. 2012BAF12B05).

References

[1] Rocca, G.L., Knowledge based engineering: Between AI and CAD. Review of a language based technology to support engineering design. Advanced Engineering Informatics, 2012. 26(2): p. 159-179.
[2] Verhagen, W.J.C., et al., A critical review of Knowledge-Based Engineering: An identification of research challenges. Advanced Engineering Informatics, 2012. 26(1): p. 5-15.
[3] Baxter, D., et al., An engineering design knowledge reuse methodology using process modelling. Research in Engineering Design, 2007. 18(1): p. 37-48.
[4] Chapman, C.B. and M. Pinfold, The application of a knowledge based engineering approach to the rapid design and analysis of an automotive structure. Advances in Engineering Software, 2001. 32(12): p. 903-912.
[5] Yang, H.Z., et al., Implementation of knowledge-based engineering methodology in ship structural design. Computer-Aided Design, 2012. 44(3): p. 196-202.
[6] Heinz, A., 777 rule based design: integrated fuselage system, in: International ICAD Users Group Conference Proceedings, 1996.
[7] Milton, N.R., Knowledge Acquisition in Practice: A Step-By-Step Guide. 2007: Springer Publishing Company, Incorporated. 176.
[8] Wielinga, B.J., Reflections on 25+ years of knowledge acquisition. International Journal of Human-Computer Studies, 2013. 71(2): p. 211-215.
[9] Kingston, J.K.C., Designing knowledge based systems: the CommonKADS design model. Knowledge-Based Systems, 1998. 11(5-6): p. 311-319.
[10] Wu, Y.-H. and H.-J. Shaw, Document based knowledge base engineering method for ship basic design. Ocean Engineering, 2011. 38(13): p. 1508-1521.
[11] Helvacioglu, S. and M. Insel, An expert system approach to container ship layout design. International Shipbuilding Progress, 2003. 50(1-2): p. 19-34.
[12] Milton, N., Knowledge Technologies. 2008: Polimetrica.
[13] Singh, N., et al., A knowledge engineering framework for rapid design. Computers & Industrial Engineering, 1997. 33(1-2): p. 345-348.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-192
Knowledge Object - a Concept for Task Modelling Supporting Design Automation

Fredrik ELGH and Joel JOHANSSON
School of Engineering, Jönköping University, Sweden
Abstract. The ability to design and manufacture highly customer-adapted products brings a competitive edge to manufacturing companies acting on a business-to-business market as suppliers to OEMs. A vital means for success in quotation and order preparation is advanced system support for design, process planning and cost estimation based upon the automation of engineering tasks. A design automation system encapsulates these tasks, which are executed for specific customer specifications in a sequence specified either by a predefined order or resolved by an inference mechanism at run-time. Commonly, the development of a design automation system is an iterative process alternating between a top-down and a bottom-up approach. An overall strategy is a necessity for successful system development; however, to successfully define the tasks, retrace all the necessary knowledge and close gaps in both the task and knowledge definitions requires a complete and detailed understanding of the specific domain. In this paper, the concept of Knowledge Object is described together with examples of its use in both the development and the realization of design automation systems enabling product customization. The concept has proven useful for the modelling of design processes, tasks, and engineering knowledge, as well as in system development and realization. It also supports traceability and understanding through relations to other concepts describing associated requirements and design rationale. Keywords. Customized Products, Knowledge Object, Design Modelling
Introduction

Research in the field of design automation has mainly adopted an artefact-oriented approach supported by the evolution of CAD software, i.e. the rules have been defined and organized in accordance with a product structure. This has been further supported by the different commercial KBE tools available today for modelling design knowledge (e.g. CATIA KWA and Siemens PLM NX Knowledge Fusion). The process approach, on the other hand, has gained more success in the area of computing, where engineering tasks defined in different applications are connected for the purpose of simulation and optimization (e.g. ModeFrontier and Simulia Isight). Two specific areas that have been subject to research are the development process of systems for the customization of products and the modelling of product-related information and knowledge supporting system realization. Hvam et al [1] describe a complete and detailed methodology for constructing configuration systems in industrial and service-oriented companies. An iterative process is suggested, including analysis of the product portfolio, object-oriented modelling, object-oriented design and programming. Every activity results in a description of the problem domain at different levels of abstraction and formalization. Two strategies are proposed for system documentation, either using a product variant master and associated CRC (Class Relationship

1 Corresponding Author: Fredrik Elgh, School of Engineering, Jönköping University, P.O. Box 1026, 551 11 Jönköping, Sweden; e-mail:
[email protected].
F. Elgh and J. Johansson / Knowledge Object
Collaboration) cards or by using the class diagram of a formal model and associated CRC-cards. The original content and structure of the CRC-cards have been further developed by Haug and Hvam [2]. Haug et al [3] have developed a prototype system for the documentation of the CRC-cards, the product variant master and the class diagrams. A procedure for the development of design automation systems has been outlined by Rask [4], where issues of documentation and maintenance are addressed by emphasizing the need for and importance of routines regarding versioning, verification and traceability. A possible means to support the updating of the knowledge-base, proposed by Rask et al [5], is to strive for a design automation system implementation that allows revision and documentation to be executed at system runtime. Stokes [6] describes a methodology for the development of knowledge based engineering applications: MOKA, Methodology and software tools Oriented to Knowledge Based Engineering Applications. Two central parts of the methodology are the Informal and Formal models. The Informal model is used to document and structure knowledge elicited from experts, handbooks, protocols, literature etc. The Formal model is derived from the Informal model with the purpose of supporting system specification and programming. It can be concluded that there is a lack of a comprehensive and detailed methodology for design automation development and realization supporting a process approach. The need for such a methodology has been identified in projects executed in close collaboration with industry, and the objective of this work is to bring together, structure and further expand actions and experiences in that direction with the purpose of building a methodology. The starting-point in industrial problems follows problem-based research as described by Blessing's research methodology for the development of design support [7].
The system development method [8] has been deployed as research methodology to explore the research issue, including the introduction, evaluation, and refinement of new concepts which, in turn, are perceived as prescriptive models in accordance with the design modelling approach [9].

1. Knowledge Object

The concept of Knowledge Object is at the core of the methodology described in this work. The concept was initially introduced by Elgh and Cederfeldt [10,11], Fig. 1.

Figure 1. A system architecture based on Knowledge Objects introduced by Elgh and Cederfeldt [10,11].
They described a system for the automated design, process planning and cost estimation of a bulkhead in a submarine escape section. The knowledge enabling the automation of these activities was captured in Knowledge Objects grouped in modules. The concept was later adopted by Johansson [12] in the development of a system for the automated design of toolsets for the rotary draw bending of aluminium tubes, Fig. 2.
Figure 2. The use of the Knowledge Object concept in the work presented by Johansson [12].
In its simplest form, a Knowledge Object transforms input to output and contains a list of input parameters, a list of output parameters, and a method for processing input parameters into output parameters (Fig. 3). Other fields may be added, such as constraints, owner, categories, precision, and comments. Owner is used to trace who is responsible for the Knowledge Object and its method (the task it performs). The categories field can be used to sort Knowledge Objects into groups. Comments are used to add information usable for explanation extraction and debugging facilities. Finally, the list of constraints and the precision value are used to allow the knowledge-bases to contain alternative Knowledge Objects.
Figure 3. Knowledge Object model [13].
Knowledge Objects use external applications as methods. To pass information to the external applications, meta-data is needed (not to be confused with meta-knowledge). That meta-data is stored locally within the Knowledge Objects. The calculated parameter values are stored in a global list of parameters. When implementing Knowledge Objects, they should be defined in a way that makes them autonomous. Since the methods used to process information are preferably external software applications, the applications should be selected keeping in mind the list of requirements imposed on the design automation system: low development effort, user-readable and understandable knowledge, longevity, and ease of use [10]. The benefits of developing autonomous Knowledge Objects using common, wide-spread applications as methods are two-fold: the knowledge can be used manually without the design automation system, and it is easy to find people skilled enough to use the very same knowledge the design automation system does - it makes the knowledge more human-readable.
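The fields described above can be sketched as a small data structure. This is only an illustration under the stated assumptions: the field names follow Fig. 3, and the method is represented as a plain Python callable rather than an external application.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class KnowledgeObject:
    name: str
    inputs: List[str]                                   # input parameter names
    outputs: List[str]                                  # output parameter names
    method: Callable[[Dict[str, float]], Dict[str, float]]
    constraints: List[Callable[[Dict[str, float]], bool]] = field(default_factory=list)
    precision: float = 1.0    # how good the outputs are
    owner: str = ""           # who is responsible for the object and its method
    categories: str = ""      # used to sort Knowledge Objects into groups
    comments: str = ""        # for explanation extraction and debugging

    def is_triggered(self, params: Dict[str, float]) -> bool:
        """All inputs are known and no constraint is violated."""
        return (all(p in params for p in self.inputs)
                and all(c(params) for c in self.constraints))

    def execute(self, params: Dict[str, float]) -> Dict[str, float]:
        """Process the input parameters into output parameters."""
        return self.method({p: params[p] for p in self.inputs})
```

In a real system the `method` would launch the external application and the meta-data would describe how to pass parameters to it; here the callable stands in for both.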
2. Design Modelling with Knowledge Objects

The main activities in the development of an automated design system are: system output definition, customer parameter definition, product item (parts and assemblies) definition, definition of variables associated with the product items, company parameter definition, process modelling, acquisition of knowledge used in the design process, analysis of relationships at different levels, identification of problems and knowledge gaps, resolving problems and filling knowledge gaps, definition of the design tasks that will constitute the system's embedded knowledge, system realization, and system test and evaluation. These activities are commonly performed iteratively in the pursuit of a complete solution. The work includes the definition of design algorithms, rules, and relations that transform stakeholder parameters into product variables (e.g. properties and specifications), which results in a process structure with associated knowledge. Process formalization can be achieved by modelling the involved tasks and their relations using Dependency Structure Matrices. Some tasks are conditionally dependent or can be executed in parallel, whilst others require methods to resolve mutual dependencies. The definition of Knowledge Objects cannot be based exclusively on the identified tasks, as these in turn can contain, or conceal, dependencies that aggravate system execution. To unwind, or at least to reveal, dependencies requires a detailed analysis at the parameter level. This analysis is then followed by the elimination of all recursive dependencies, which in turn can affect the grouping of tasks and, consequently, the definition of Knowledge Objects. This alternation between domains, for revealing and, hopefully, unwinding dependencies, supporting the formation of Knowledge Objects, is depicted in Fig. 4.
Figure 4. Definition of Knowledge Objects by means of Dependency Structure Matrices [14].
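The parameter-level dependency analysis described above can be illustrated with a small sketch that derives a dependency structure from the tasks' input and output parameters and detects recursive dependencies. All task and parameter names below are invented; this is not the authors' tooling.

```python
from collections import defaultdict

def build_dsm(tasks):
    """tasks: {name: (input_params, output_params)}.
    Returns {task: set of tasks it depends on}, i.e. a Dependency
    Structure Matrix in adjacency form."""
    producer = {}
    for name, (_, outs) in tasks.items():
        for p in outs:
            producer[p] = name
    dsm = defaultdict(set)
    for name, (ins, _) in tasks.items():
        for p in ins:
            if p in producer and producer[p] != name:
                dsm[name].add(producer[p])
    return dsm

def sequence(tasks):
    """Topological order of tasks; raises ValueError when a recursive
    (mutual) dependency remains and must be unwound first."""
    dsm = build_dsm(tasks)
    order, done = [], set()
    while len(order) < len(tasks):
        ready = [t for t in tasks if t not in done and dsm[t] <= done]
        if not ready:
            raise ValueError("recursive dependency among remaining tasks")
        for t in ready:
            order.append(t)
            done.add(t)
    return order
```

Tasks that become `ready` in the same pass are exactly the ones that could be executed in parallel, while a raised error flags the mutual dependencies that the text says must be eliminated before Knowledge Objects are formed.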
2.1. Supporting top-down and bottom-up modelling

The previously described system development activities can be performed in an arbitrary order within an iterative process. Adopting a top-down approach would imply that a complete system is defined using Items, Knowledge Objects, and Parameters without considering what really happens within the different Knowledge Objects. The Knowledge Objects would be treated as black boxes, implicitly connected by Parameters for input and output that constitute a network for the dataflow. When the
overall structure is complete, the work of realizing the individual Knowledge Objects takes place. A bottom-up approach, on the other hand, would start with the creation of the individual Knowledge Objects' realizations (e.g. files for computing product dimensions and properties). When these realizations have been created, they are incorporated into the overall system and the relations for the dataflow are created by assigning Parameters as input and output. A mapping table supports the use of names in the realization of a Knowledge Object that differ from the ones used in the system. This functionality supports a bottom-up approach, allowing existing files to be used without modifying them according to a pre-defined naming convention.

2.2. Modelling overlapping knowledge

It is often possible to calculate a single variable in different ways. Sometimes a heuristic rule can be used, or rules analytically derived from the fundamental laws of physics. It is also possible to perform FEM-calculations or experiments to evaluate a design variable. In addition to these four types of knowledge, an engineer needs the capability to decide when to use which knowledge; this is called meta-knowledge, or knowledge about knowledge. When more than one type of knowledge source is available, the question of when to use which source arises. In one state, the system may be executed to make a quotation calculation with only a small set of available input parameters. In the next step, detailed design is the purpose of running the system, with high accuracy as the main focus and with a larger set of available input parameters. Different kinds of knowledge are used in these different contexts, and implementing meta-knowledge allows for flexible use.
A list of constraints and a precision value need to be added to the Knowledge Object class in order to make it possible to have multiple Knowledge Objects pointing to the same parameter in the knowledge-base. The constraints dictate when the Knowledge Object is applicable, and the precision value tells how good the outputs are. When using constraints together with precision values, the system runs in the following sequence. Do until the conflict set is empty:
1. List all triggered objects not violating any constraints, excluding solved objects. Sort the list by precision.
2. Execute the knowledge object with the highest precision ("first come, first served" if several objects with the same precision exist).
3. Clear all output parameters in knowledge objects dependent on the outputs from the fired knowledge object. (This can cause rules to fire more than once, but is done to make sure that the output with the highest precision is the final result.)
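A minimal sketch of that control loop follows. It is an illustration, not the authors' implementation: Knowledge Objects are plain records, parameter values carry the precision they were set with, and recursive dependencies are assumed to have been eliminated beforehand.

```python
def run(objects, params):
    """Fire Knowledge Objects until the conflict set is empty.

    objects: list of dicts with keys inputs, outputs, method,
             constraints, precision (list order breaks precision
             ties: "first come, first served").
    params:  {name: (value, precision)} of known parameters.
    """
    solved = set()
    while True:
        # 1. Triggered objects not violating any constraint, excluding solved ones.
        conflict = [i for i, o in enumerate(objects)
                    if i not in solved
                    and all(p in params for p in o["inputs"])
                    and all(c(params) for c in o["constraints"])]
        if not conflict:
            return params
        # 2. Fire the object with the highest precision (max keeps the
        #    earliest index on ties, i.e. first come, first served).
        best = max(conflict, key=lambda i: objects[i]["precision"])
        obj = objects[best]
        for name, value in obj["method"]({p: params[p][0] for p in obj["inputs"]}).items():
            # Keep a value already set with equal or higher precision.
            if name in params and params[name][1] >= obj["precision"]:
                continue
            params[name] = (value, obj["precision"])
            # 3. Clear outputs of objects that depend on the changed value,
            #    so they fire again with the better input.
            for j, other in enumerate(objects):
                if j != best and name in other["inputs"]:
                    for out in other["outputs"]:
                        params.pop(out, None)
                    solved.discard(j)
        solved.add(best)
```

With two objects providing the same parameter, the higher-precision value survives and its dependents are recomputed, mirroring the two situations discussed below.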
Two different situations may occur when the knowledge-base is allowed to contain multiple Knowledge Objects for a single phenomenon. Let us say that there are four triggered Knowledge Objects at one stage in the conflict set. Three of the Knowledge Objects contain knowledge about phenomenon P1, and one Knowledge Object deals with phenomenon P2. Knowledge Objects number one and two have been assigned a high precision value, Knowledge Object number three a medium precision, and number four a low precision value. The design parameters A-D and I are known, but J-N are unknown. The Knowledge Object selected to run in this situation is number one, because it has the highest precision value and was added to the system before Knowledge Object number two. (Since Knowledge Objects one and two have the same precision value, the "first come, first served" rule is applied.) Situation one: The parameter calculated by a knowledge object is assigned a precision value equal to or higher than the precision value of knowledge object number
one. Since knowledge object number one is selected, it will be executed using the predefined method. Values for parameters I-K will be calculated. However, since a knowledge object with a precision value equal to or higher than that of the current knowledge object (knowledge object number one) set the I parameter, the value of parameter I will not be overwritten. Knowledge object number one will set only the parameters J and K in this situation. When updating the conflict set, knowledge objects 1, 3 and 4 will be considered solved since parameters I to K are known. The only knowledge object left in the conflict set is number two. Situation two: The parameter calculated by a knowledge object is assigned a precision value smaller than the precision value of knowledge object number one. Since knowledge object number one is selected, it will be executed using the predefined method. Values for parameters I-K will be calculated. Since a knowledge object with a precision value smaller than that of the current knowledge object (knowledge object number one) set the I parameter, the value of parameter I will be overwritten. All the parameters I-K will be set by knowledge object number one in this situation. Since parameter I is changed, all dependent parameters must be cleared. When searching the knowledge-base, it is found that knowledge objects 5 and 6 have parameter I as input. This invalidates parameters O-T, since they were calculated using the value of the I parameter with a smaller precision. When parameters O-T are invalidated, knowledge objects 5 and 6 are triggered and put into the conflict set. Knowledge objects 1, 3 and 4 are considered solved. At this stage, there are still four knowledge objects in the conflict set because the value of the I parameter now has a higher precision. Firing knowledge objects 5 and 6 using this better value will (probably) increase the precision of the values of parameters O-T.

2.3. Managing Knowledge Objects

Knowledge Objects implement computations, actions, consequences and relations, but they do not encapsulate the argumentation for their existence or the reason behind their design. The definitions of rules are based upon insights, decisions or facts derived from prerequisites, trial and error, experience, calculations, simulations, experiments, field tests, literature etc., which constitutes another kind of knowledge that can provide a deeper understanding of the Knowledge Objects. Such an understanding can be supported by access to answers to questions such as Why, When, Scope, Valid ranges of input/output, Origin, Supporting theories, Simplifications, Assumptions etc. The answers to these questions constitute knowledge about knowledge, i.e. meta-knowledge or, as commonly referred to, design rationale, defined as the set of reasons behind the decisions made during the design of an artefact. Two different approaches to represent design rationale are argumentation-based and template-based [15]. In addition, traceability, defined as "…the ability to describe and follow the life of a conceptual or physical artifact." [16], across domains is also essential. Product Variant Master (PVM) [1], MOKA [6], Systems Modelling Language (SysML) [17] and CommonKADS [18] are methodologies for system development with some support for managing design rationale and traceability. Specific applications, with more or less functionality for managing design rationale and traceability, are PCPACK [19], Design Rationale Editor (DRed) [20] and Product Model Manager (PMM) [21]. Three different tasks have been identified as essential to support: reuse, expansion and maintenance. Reuse is the use of existing Knowledge Objects in a new context (e.g. a new product family or system foundation). Expansion implies
increasing the design space or functionality (e.g. scaling the parameters' ranges or extending the topology). Maintenance concerns modifying existing Knowledge Objects according to new circumstances (e.g. changes in manufacturing constraints, material properties, manufacturing processes, legislation, standards etc.). In addition to the domains and the tasks identified above, three general enablers for successful task execution are the structuring, the validation and the adaptation of model elements. Structuring is required for the purpose of enabling searching and finding candidate Knowledge Objects for reuse, expansion or maintenance. Validation is required to ensure the applicability of candidate Knowledge Objects. Adaptation is necessary when changes are required to make the selected Knowledge Objects applicable in a new context. The structuring of design rationale concerning Knowledge Objects is based upon the information model in Fig. 5. The main principle is to sub-divide the process into different tasks, i.e. Knowledge Objects, on a level that supports both a contextual meaning and access to detailed descriptions. The Rationale class can be used to group objects and to describe why a Knowledge Object exists, what it operates upon, what it is affected by, and its relation to other objects, or to describe in detail the set of Input, the set of Output, and the transformation associated with a specific Knowledge Object. The SupportingObject enables traceability to reports, protocols, guidelines, standards, legislation etc. for more detailed descriptions.
Figure 5. Information model for structuring design rationale of Knowledge Objects, adapted from [22].
3. Debugging and execution of Knowledge Objects

Debugging a collection of Knowledge Objects implies, on a meta-level, checking that all the necessary data and relations constituting a domain of application have been entered. An iterative process, based on the alternation between top-down and bottom-up approaches, conducted over a period of time, together with the deployment of different domain knowledge and the involvement of different people in the system development, is not easy to manage, and support is required for assessing system completeness. Means to reach a total system understanding and tools for the detailed examination of data and relations are two important functions supporting system development. These functions can be provided by DSM-views of the system-defined data. System-defined Items, KnowledgeObjects and Parameters can be retraced and their internal relations analysed by generating DSM-views for the different concepts, Fig. 6. Further, the concepts' interrelationships can be viewed and analysed by constructing DSM-views visualising the mapping between the concepts.
However, even though all the required Items, KnowledgeObjects and Parameters have been entered into the database, the system might fail to execute. To ensure system functionality, the database has to be checked for recursive dependencies, undefined parameters (variables), the existence of multiple providers for a parameter (variable), and the existence of knowledge objects not providing any output. These basic data checks can be performed, and their status communicated to the developers, continuously during system development. If any problem exists, the different DSM-views, both original and partitioned, can be used as an aid to examine the problem in detail (Fig. 7).
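Three of these four basic data checks can be sketched as below (the recursive-dependency check additionally needs a traversal of the parameter graph and is omitted here). Knowledge Objects are again simple records; all names are invented for illustration.

```python
def check_knowledge_base(objects, given_params):
    """Report undefined input parameters, parameters with multiple
    providers, and knowledge objects that provide no output.

    objects:      list of dicts with keys name, inputs, outputs.
    given_params: set of parameter names supplied by the specification.
    """
    problems = []
    provided = set(given_params)
    providers = {}
    for o in objects:
        if not o["outputs"]:
            problems.append(f"{o['name']}: provides no output")
        for p in o["outputs"]:
            providers.setdefault(p, []).append(o["name"])
            provided.add(p)
    for p, names in providers.items():
        if len(names) > 1:
            problems.append(f"{p}: multiple providers {names}")
    for o in objects:
        for p in o["inputs"]:
            if p not in provided:
                problems.append(f"{o['name']}: undefined input {p}")
    return problems
```

An empty result means these checks pass; any reported entry points the developer to the DSM-views for closer examination.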
Figure 6. Principle DSM-views of concepts and the mappings between them [23].
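A mapping view of the kind shown in Figure 6 can, in principle, be generated as a boolean matrix over the system-defined relations. The (KnowledgeObject, Parameter) pairs below are hypothetical sample data for illustration.

```python
# Hypothetical relations: which Knowledge Object uses which Parameter.
uses = [("KO1", "Para1"), ("KO2", "Para1"), ("KO2", "Para2"), ("KO3", "Para3")]

def dsm_view(pairs):
    """Render a simple DSM-style mapping matrix with 'X' marks."""
    rows = sorted({ko for ko, _ in pairs})
    cols = sorted({p for _, p in pairs})
    marks = set(pairs)
    lines = ["      " + " ".join(f"{c:>6}" for c in cols)]
    for r in rows:
        cells = " ".join(f"{'X' if (r, c) in marks else '.':>6}" for c in cols)
        lines.append(f"{r:<6}{cells}")
    return "\n".join(lines)

print(dsm_view(uses))
```

The same routine applies to Item-to-Item, KnowledgeObject-to-KnowledgeObject or Parameter-to-Parameter views by feeding it the corresponding relation pairs.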
3.1. Execution of Knowledge Objects

A collection of Knowledge Objects can be executed by a predefined static flow, a run-time generated static flow, or a dynamic flow. A predefined static flow is stored or manually entered prior to execution, which requires an understanding of how Knowledge Objects and Parameters are related; this can require extensive work and has to be redone whenever anything changes. A run-time generated static flow can be achieved by using an inference mechanism. The relations between Parameters can be modelled as directed graphs and the execution order resolved using algorithms for creating and operating on a reachability matrix. The order is then mapped to the Knowledge Objects. Since the execution sequence of the system is not fixed, the execution order can change whenever new knowledge is introduced. A dynamic flow can be achieved by forward-chaining. A global list of parameters is defined and watched by all Knowledge Objects. Whenever a Knowledge Object has enough information to solve its task, it will ask the inference engine to be allowed to calculate some of the unknowns in the global list of parameters. Based on meta-knowledge about the Knowledge Object (e.g. low precision), it might be rejected. If approved to perform the task, it will write the new information into the global list of parameters, which might cause other Knowledge Objects to be invoked. The execution will continue until no more Knowledge Objects are to be executed. The execution can be a single run based upon a specification with values for a set of initial Parameters, with the purpose of finding out which Parameters are affected
and their resulting values. Multiple runs, preferably supported by Design of Experiments (DoE), enable executions based on different combinations of Parameters and their values, which supports the pursuit of a valid or "best" solution. Multiple runs can also be used to create response surfaces or trade-off curves that can be distributed for easy use if access to the system is limited or execution includes substantial simulations. If the Knowledge Objects can be executed without interruption, the search for a best solution can be supported by introducing optimization algorithms.
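The dynamic, forward-chaining flow described above can be sketched as a loop over a watched global parameter list. The two Knowledge Objects and their calculations are illustrative assumptions, and the meta-knowledge-based rejection step is omitted for brevity.

```python
# Each Knowledge Object declares its input/output parameter names and a
# method that computes its outputs from the global parameter list.
kos = [
    {"name": "area", "inputs": {"w", "h"}, "outputs": {"A"},
     "run": lambda p: {"A": p["w"] * p["h"]}},
    {"name": "cost", "inputs": {"A", "rate"}, "outputs": {"C"},
     "run": lambda p: {"C": p["A"] * p["rate"]}},
]

def forward_chain(kos, params):
    """Fire every Knowledge Object whose inputs are known, repeating
    until no more Knowledge Objects can execute."""
    fired = set()
    progress = True
    while progress:
        progress = False
        for ko in kos:
            if ko["name"] not in fired and ko["inputs"] <= params.keys():
                params.update(ko["run"](params))  # write to the global list
                fired.add(ko["name"])
                progress = True  # new values may trigger further KOs
    return params

result = forward_chain(kos, {"w": 2.0, "h": 3.0, "rate": 10.0})
print(result["C"])  # 60.0
```

A single run like this shows which Parameters are affected by the initial specification; wrapping the call in a DoE loop over different input combinations gives the multiple-run mode.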
Figure 7. A graphical user interface for ensuring system completeness and functionality [23].
4. System realization based on Knowledge Objects

One example of an information model incorporating the concept is illustrated in Fig. 8 [24]. The main concepts used for knowledge modelling and representation are KnowledgeObject, Variable and KnowledgeObj_Parameter. The different attributes of the KnowledgeObject concept are: Id, Name, Parameter (commonly a path to a file to be executed), KnowledgeObjectTypeFK (pointing at the concept KnowledgeObjType, which identifies the software application for execution), KnowledgeBaseId (pointing at a concept defining the superior domain of application) and KnowledgeObjectType (used for classification). One special class is Specification, comprising KnowledgeObjects not requiring any input. Variable is a central concept of the proposed model. In general terms, a Variable is a property defined by a task, i.e. a KnowledgeObject. A Variable can represent different types of properties related to, for example, geometry, material, product structure, manufacturing operations and cost levels. The different attributes of the Variable concept are: Id, FriendlyName (can easily be interpreted), DefinedBy (pointing at a KnowledgeObject), NameInDefinedBy (supports the use of a different name in the realization of a KnowledgeObject) and Dimensions (for type declaration). The execution of a KnowledgeObject commonly requires input (parameters). Parameters are commonly defined by other KnowledgeObjects. The required input for a KnowledgeObject is defined by the concept KnowledgeObj_Parameter, which includes the attributes Id,
KnowledgeObjectFK (defines the KnowledgeObject), VariableFK (defines the Variable) and ForeignParamName (supports the usage of a different name in the realization of a KnowledgeObject). The above implies that no explicit concept for parameters is needed.
Figure 8. First example of information model for system realization [24].
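The three main concepts of the Fig. 8 model can be sketched as plain data classes. This is a simplified illustration of the schema described in the text, not the actual database implementation: foreign keys are reduced to plain ids and the remaining tables of the figure are omitted.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KnowledgeObject:
    id: int
    name: str
    parameter: str               # commonly a path to a file to be executed
    knowledge_obj_type_fk: int   # identifies the executing software application
    knowledge_base_id: int       # superior domain of application
    knowledge_object_type: str   # used for classification

@dataclass
class Variable:
    id: int
    friendly_name: str           # can easily be interpreted
    defined_by: int              # id of the defining KnowledgeObject
    name_in_defined_by: str      # local name inside the KO's realization
    dimensions: Optional[str] = None  # for type declaration

@dataclass
class KnowledgeObjParameter:
    id: int
    knowledge_object_fk: int     # the consuming KnowledgeObject
    variable_fk: int             # the Variable acting as input
    foreign_param_name: str      # local name inside the consuming KO
```

Note how KnowledgeObjParameter only links an existing Variable to a consuming KnowledgeObject, which is why no explicit parameter concept is needed.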
A second example of an information model incorporating the concept is illustrated in Fig. 9. The main classes in that model are KnowledgeObject, KnowledgeParameter, KnowledgeConstraint and Connection. Two further classes, CollectionOfKnowledgeConstraints and CollectionOfKnowledgeParameters, were introduced to make it possible to trap and consume events raised when KnowledgeParameter values and KnowledgeObject statuses change. A Knowledge Object in this implementation has 11 attributes, 4 methods, and 2 events. The Active attribute indicates whether the Knowledge Object is active or suppressed; Categories is used to group knowledge objects; Comments adds descriptions of the knowledge automated by the current Knowledge Object; ConstraintsSatisfied is a read-only attribute indicating whether the Knowledge Object is valid based on the given constraints; Cost indicates how much time it takes to apply the underlying method; Folder is used to group the Knowledge Objects; Name identifies the Knowledge Object; Owner indicates who is responsible for the automated knowledge and its applicability; Precision indicates the quality of the automated knowledge; Triggered is a read-only attribute indicating whether the Knowledge Object is ready to execute; and Type is used to categorize the Knowledge Object. The ClearOutputs method sets any value of the Knowledge Object's indicated output parameters to null; Clone makes an in-depth copy of the Knowledge Object; Execute executes the Knowledge Object if it is active, triggered and its constraints are satisfied; and OnObjectChanged invokes an update of the knowledge base. The Executing event flags that the Knowledge Object is currently executing its method, and ObjectChanged indicates that the knowledge base has to be updated. A KnowledgeParameter has 9 attributes, of which 4 are used in the same way as for the Knowledge Objects.
The Groups attribute is used to group the knowledge parameter; Locked locks the value of the knowledge parameter, e.g. while an optimization algorithm is running; ProductOf indicates which Knowledge Object set the value of the knowledge parameter; Unit identifies the metric unit of the current value of the knowledge parameter; and Value carries the
value object of the knowledge parameter. A KnowledgeConstraint has 6 attributes, of which 3 are used in a similar way as for the Knowledge Objects and knowledge parameters. The Expression attribute carries a string-based formula expressing the constraint; Parameters indicates which parameters are involved in the expression (these parameters will also be marked as input to any knowledge object the constraint is added to); and Satisfied is a read-only attribute indicating whether the constraint is true. A Connection has 5 attributes: ClassName identifies the class the knowledge object uses for execution, ConnectionFile indicates where the class is stored, and Method indicates which method to run within the indicated class.
Figure 9. Second example of information model for system realization.
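The KnowledgeObject of the second model, with its trigger status and event-style notification, can be sketched as below. Names follow the text, but the event mechanism (a plain list of subscriber callables) and the example method are illustrative assumptions, and most of the 11 attributes are omitted.

```python
class KnowledgeObject:
    """Simplified sketch of the Fig. 9 KnowledgeObject class."""

    def __init__(self, name, method, inputs=(), active=True):
        self.name = name
        self.active = active          # active or suppressed
        self.triggered = False        # read-only in the real model
        self._method = method         # the automated knowledge
        self._inputs = set(inputs)    # required parameter names
        self.object_changed = []      # ObjectChanged event subscribers

    def update_trigger(self, known_params):
        """Triggered when all required input parameters are known."""
        self.triggered = self._inputs <= set(known_params)

    def execute(self, params):
        """Run the method if active and triggered, then raise ObjectChanged
        so that the knowledge base is updated."""
        if self.active and self.triggered:
            result = self._method(params)
            for handler in self.object_changed:
                handler(self)
            return result

ko = KnowledgeObject("double", lambda p: 2 * p["x"], inputs=["x"])
ko.object_changed.append(lambda k: print(f"{k.name} changed"))
ko.update_trigger({"x"})
print(ko.execute({"x": 21}))  # prints "double changed" then 42
```

The CollectionOf... classes of the model would subscribe to such events to keep the knowledge base consistent whenever values or statuses change.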
5. Conclusion

The concept of Knowledge Object has been described together with examples of its use in both the development and the system realization of design automation systems. The concept is useful for the modelling and execution of design processes, tasks, and engineering knowledge. Support for traceability and understanding, by mapping to other concepts describing requirements and design rationale, is also included for management purposes. To further improve and validate the concept, the development of additional systems targeting other domains is required; this will be the subject of future work.

Acknowledgement

This work was conducted within the project Efficient Implementation and Management of Systems for Design and Manufacture of Custom Engineered Products – Impact. Financial support from the Knowledge Foundation, Sweden, is gratefully acknowledged.
References [1] L. Hvam, N.H. Mortensen, J. Riis, Product Customization, Springer Verlag, Berlin, Germany, 2008. [2] A. Haug, L. Hvam, CRC-cards for the development and maintenance of product configuration systems, in: Joint Conference IMCM06 and PETO06, GITO-Verlag, Berlin (2006) 369-381. [3] A. Haug, A. Degn, B. Poulsen, L. Hvam, Creating a documentation system to support the development and maintenance of product configuration systems, in: WSEAS International Conference on Computer Engineering and Applications, Gold Coast (2007) 122-131. [4] I. Rask, Rule-based product development - Report 1, Industrial Research and Development Corporation, Mölndal, 1998. [5] I. Rask, S. Sunnersjö, R. Amen, Knowledge based IT-systems for product realization, Industrial research and development corporation, Mölndal, 2000. [6] M. Stokes, Managing Engineering Knowledge – MOKA, Prof Eng Publications ltd, London, UK, 2001. [7] L. Blessing, A Process-based approach to computer-supported engineering design, PhD thesis, University of Twente, Twente, 1994. [8] A. Duffy, M.M. Andreasen, Enhancing the evolution of design science, Proceedings of Conference on Engineering Design 1 (1995), 29-35. [9] F. Burstein, System development in information systems research, in: K. Williamson (Ed.), Research methods for students, academics and professionals – information management and systems, Centre for Information Studies, Wagga Wagga, (2002) 147-158. [10] F. Elgh, M. Cederfeldt, A design automation system supporting design for cost – underlying method, system applicability and user experiences, in: M.W. Sobolewski, P. Ghodous (Eds.), Next Generation Concurrent Engineering - Smart and Concurrent Integration of Product Data, Services, and Control Strategies, Springer Verlag, Berlin, (2005) 619-627. [11] F. Elgh, M. 
Cederfeldt, Concurrent cost estimation as a tool for enhanced producibility – system development and applicability for producibility studies, Journal of Production Economics 109 (1-2) (2007), 12-26. [12] J. Johansson, A flexible design automation system for toolsets for the rotary draw bending of aluminium tubes, Proceedings of IDETC/CIE 2007, ASME, New York, (2007). [13] J. Johansson, Automated computer systems for manufacturability analyses and tooling design: applied to the rotary draw bending process, PhD thesis at Chalmers University of Technology, Gothenburg, 2011. [14] F. Elgh, Decision support in the quotation process of engineered-to-order products. Advanced Engineering Informatics 26(1) (2012), 66-79. [15] A. Tan, Y. Jin, J. Han, A Rational-based Architecture Model for Design Traceability and Reasoning, The Journal of Systems and Software 80 (2007), 918-934. [16] K. Moham, B. Ramesh, Traceability-based Knowledge Integration in Group Decision and Negotiation Activities, Decision Support Systems 43 (2007), 968-989. [17] S. Friedenthal, A. Moore, A. Steiner, A Practical Guide to SysML: the Systems Modeling Language, Morgan Kaufmann, San Francisco, US, 2008. [18] G. Schreiber, H. Akkermans, A. Anjewierden, R. Hoog, N. Shadbolt, W. Velde, Knowledge Engineering and Management: The CommonKADS Methodology, The MIT Press, Cambridge, US, 2000. [19] Epistemics, PCPACK, http://www.epistemics.co.uk/Notes/55-0-0.htm (Acc. 24 June 2014), 2008. [20] R. Bracewell, K. Wallace, M. Moss, D. Knott, Capturing Design Rationale, Computer Aided Design 41(3), (2009), 173-186. [21] A. Haug, L. Hvam, N.H. Mortensen, Implementation of Conceptual Product Models into Configurators: From Months to Minutes, Proceedings of MCPC 2009 (2009). [22] F. Elgh, Modeling and management of product knowledge in an engineer-to-order business model, in: S.J. Culley, B.J. Hicks, T.C. McAloone, T.J. Howard, P.
Badke-Schaub, (Eds.), Proceedings of the 18th International Conference on Engineering Design (ICED 11), The Design Society, Somerset, (2011), 86-95. [23] F. Elgh, Knowledge modelling and analysis in design automation systems for product configuration, in: A. Dagman, R. Söderberg (Eds.), Proceedings of Norddesign 2010, Chalmers University of Technology, Gothenburg, (2010) 257-266. [24] F. Elgh, A tool for automated design supporting management and analysis of quotations and product variants – information model and system principles, in: J. Pokojski, S. Fukuda, J. Salwiński (Eds.), New World Situation: New Directions in Concurrent Engineering, Springer, London, (2010), 361-368.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-204
Design Rationale Management – a Proposed Cloud Solution Joel JOHANSSONa,1, Morteza POORKIANYa and Fredrik ELGHa a Mechanical Engineering, School of Engineering, Jönköping University, Sweden
Abstract. Due to the increasing complexity of modern products, it is often impossible for a single engineer to fully grasp the product he or she is helping to develop. Valuable time during product development is therefore spent searching for knowledge about different aspects of the product. To enable engineers to find the right knowledge in different situations, the knowledge must first of all exist. Secondly, it needs to be structured, and thirdly, it needs to be accessible. In this paper all three of these aspects of design rationale (the reasons why the product is designed the way it is) are addressed, with the main focus on the last one, accessibility. An information model is presented that can be used to structure the design rationale. The paper also presents a schematic overview of how a cloud solution could be realized using the information model to make a complete system for instantly capturing, filtering and accessing design rationale in a contextual manner. To enable the instant and contextual capture, filtering and access of design rationale, the design rationale management system should be present to the engineers everywhere in the digital environment, ready for service. It should also include functions that share the design rationale with all privileged users, making sure everyone has updated versions of the stored knowledge. In this work the main ideas of a method for instant and contextual capture, filtering and access of design rationale are introduced, and a pilot system is described as a proof of concept. The pilot system can be used to capture, filter and access design rationale across and within text documents, spreadsheets and CAD models.

Keywords. Design Rationale, Product Development, Knowledge Management, Design Knowledge Reuse, Information Retrieval
Introduction

Manufacturing companies fight on many fronts. On one side, intense competition on global markets forces them to continuously cut costs and lead times and to improve their product development processes by finding shortcuts and better ways of ensuring function and quality at the right levels in their products. On the other side, products continually become more complex to meet new needs from customers. In short, manufacturing companies fight against cost, time and complexity. This has led to the development and use of computer systems intended to take care of many aspects of product design, product development and information management. However, the focus has mainly been on moving the process forward, and support for recording the reasons for decisions has gained less attention. In order to design a new artefact or redesign an existing one, access to previous design projects can be highly valuable
1 Corresponding Author.
J. Johansson et al. / Design Rationale Management – A Proposed Cloud Solution
205
for finding possible existing and proven solutions. Access to the design rationale of previous design projects would add even more value, since it would provide the engineers with the reasons for how the previous design was developed and, not least, why. It would also indicate solution strategies that were tried but failed (often referred to as lessons learned). The authors have found that recording design rationale throughout the design process is not common among engineers, often due to lack of time, lack of supporting tools or other job priorities. Another problem is that even when the knowledge is captured, the designers might not be aware of all sources of information [1]. As a consequence, support for improving the process of capturing design rationale, and then making it easy to access instantly, is required. It would then be possible for the engineers to create, update, retrieve and reuse the design rationale while developing a new product or redesigning a product variant. Different types of design knowledge are needed depending on the working context [2, 3]. During the design process, the designer uses and creates several sources and types of information such as product specifications, process specifications, bills of material, design tables, rules, calculations, geometrical models, and features. Because of this diversity, the knowledge is collected and stored in different repositories and formats, e.g. spreadsheets, text documents, images, or CAD models. As such, the engineers need to work in different contexts and use different tools to manipulate the information and its relations. For example, when there is a need to describe and explain a design alternative in a CAD model, the designer can refer to a table in a spreadsheet, or to a text passage or picture in a text document.
The research work presented in this paper targets the product development process in which product design, as well as the design and manufacture of the tools and equipment required for production, is carried out in parallel. The aim of the research is to enable instant capturing, sharing and accessing of design rationale within different design tools during the design process, in order to support the design process, the redesign of an existing solution or the development of a new artefact using existing solutions and knowledge, thereby speeding up the process and/or cutting costs. In this work the main ideas of a method for instant and contextual capture, filtering and access of design rationale are introduced, and a pilot system is described as a proof of concept. The system allows the users to exchange design rationale from three different digital contexts: SolidWorks, Microsoft Excel, and Microsoft Word.
1. Design knowledge and design rationale

1.1. Design Knowledge

Design knowledge can be categorized either as product knowledge or as process knowledge [2]. The former describes the function and behaviour of the product, whereas the latter focuses on the way solutions are created. Usually, knowledge management systems aim to provide efficient reuse of the knowledge in an organization. The selection of which formal and informal information and knowledge to record should be made with respect to the intention for reuse: the goal for reuse will affect the level, detail and type of captured knowledge. According to Chandrasegaran et al. [4], design knowledge is obtained by interpretation of information deduced from computational results and factual quantities. They
stress that the definition of design knowledge may vary for each designer depending on the context. Hicks et al. [5] state that there are vast numbers and types of information and knowledge sources utilized throughout the design of an artefact. These may include product information, process information, technology information; explicit, implicit and tacit knowledge regarding activities, methodologies, discussions and meetings; as well as catalogued information, assemblies, parts, features, rules, and bills of material. Johansson [6] classifies design knowledge in the metal forming industry, a classification that could be extended to other industries, into four types: 1) heuristic knowledge, generally found in handbooks or company standards and based on skilled engineers' experience; 2) analytical knowledge, which derives from fundamental physical laws and tends to be more complex than heuristic knowledge; 3) numerical knowledge, for which the common method is usually the finite element method (FEM); and 4) empirical data, which is based on experience. The engineer collects and stores this information in different repositories and formats, and also associates it with different processes and knowledge sources. Because of the diversity of this information, classifying and structuring it requires significant effort.

1.2. Design rationale

Design rationale is the representation of the reasons behind a design decision [7], but it can also include the justification for it, the design alternatives, and the evaluated trade-offs that led to the decision [8]. Access to the design rationale (if it exists) is crucial to support the development of new products or the modification of existing variants (design changes), and it enables insight into the reasons why a decision has been made. The design process of a product, for instance an automobile, might involve thousands of engineers and millions of decisions based on the experience of the engineers over several years [9].
The research community has pointed out design rationale as a way to know the reasons behind a decision [2, 10]. According to Tang et al. [11], up to 85% of designers agree that design rationale justifies the design, and up to 80% of the respondents say they fail to understand the reason behind a decision without design rationale support. Furthermore, almost 75% of the respondents forget the reasoning behind their own decisions from previous projects. It is interesting in this context to note that knowledge documentation in companies typically represents the final created solutions, answering the question what, rather than answering broader questions such as why, how and when [12]. The latter part of the knowledge (explaining the reasoning and the way of creation) is usually missed or neglected in organizations. This could be due to priorities, lack of time, lack of knowledge, or lack of adequate tools. The generated information (e.g. regarding aims, plans, procedures, products, and processes) ends up in different repositories (e.g. catalogues, computer memory, people's memory), and it can hence be cumbersome to manage the design rationale. Defining what design rationale should be captured, and how, is a fundamental problem and depends upon the aim and scope of the system, but information overload should also be avoided [13].

1.2.1. Capture

Since the focus of a designer should of course be on creative tasks rather than on routine and monotonous jobs such as documentation, there has to be some support. Regarding capturing design rationale, Regli et al. [13] discuss two major methods: 1)
User-intervention-based capture, a documentation method carried out by the designers with the intention to record the history of design activities as the design process evolves. This type of documentation helps people outside the project to understand the process and activities. 2) Automatic rationale capture, which records the communication among the involved people to extract design rationale and decisions as they proceed in a design project. A drawback of the latter approach is that some people might not feel comfortable having their communications recorded, including their e-mails and telephone calls.

1.2.2. Represent

Tang et al. [11] categorize design rationale representation approaches as being either template-based or argumentation-based. Template-based representations are approaches where standard templates for information input (standard forms and formats) are incorporated into the design process. Argumentation-based representation approaches, on the other hand, use nodes and links: a node represents a component, while a link represents a relationship between components. This approach captures the reasoning and decisions made during a design process. With argumentation-based methods the designer can easily trace the decisions and their relationships to the components/parts. Moreover, it is possible to relate components to additional documents (meta-knowledge) or other relevant information. In addition, Regli et al. [13] mention descriptive approaches for representing design rationale, which are usually used in dynamic design domains in which the problem is unclear and it is barely possible to foresee the final solution. In recent research carried out by the authors [12], a descriptive method for representing design rationale is presented. In that work the emphasis is on design space solutions rather than standard existing solutions.
The presented system suggests that the description should be created at an early stage of the development process and evolve progressively over three steps: 1) conceptual design, 2) design process and product family, and 3) design programming and manufacturing preparation. The description records the workflow and all design activities, as well as describing why and when decisions are made and by whom. In addition, the description includes information regarding components, parts, assemblies, features, CAD files, design rules, and material.
2. Related work and knowledge gap

Considerable effort has been put into developing design rationale systems, but the developed systems do not seem to be in widespread use in industry, and challenges still exist regarding deploying them effectively [13]. A significant task is to capture and structure design rationale while also making decisions regarding the product; such parallel working is usually difficult to perform. Another reason is that separate systems are used for capturing design rationale, which then becomes an isolated activity, performed after the product development decisions are made or even once the product is completely designed. While capturing design rationale is a significant task, structuring it, as well as simply accessing it, is just as important. Structuring will provide simple access to the collected knowledge and related information. Designers need access to knowledge about previous design activities in order to modify or redesign an existing solution or in
the development of a new product based on previous experience. As Baxter et al. [14] mention, around 20% of a designer's time is spent searching for information, and only 40% of design information requirements are met by documentation sources. This implies that design information and knowledge is often not represented in an easily accessible knowledge base. A fundamental problem in knowledge retrieval and sharing is the variety of applications and tools used to represent the information. Elgh and Cederfeldt [15] describe research focusing on the documentation and management of product-related knowledge. The purpose was to reveal problems related to the reuse of design rules at a case company. Investigations at the company showed that it is difficult for individuals to share their solutions. The reason given is that there is no system in place for such documentation covering all organizational contexts (in that case Microsoft Word, CAD, and programming software). Moreover, they discuss the difficulty of communication between design engineers and design programmers, and stress that people are reluctant to add a new application to the tools at hand in order to improve the situation. Such circumstances, and the need for information exchange, lead organizations to move towards the integration of independent tools for the sake of better information representation and contextual communication [16]. An integrated representation of knowledge provides awareness of knowledge sources, access to the knowledge, and support for communication among system users. Sandberg et al. [17] propose an approach that provides knowledge retention and sharing across the product development process in a CAD environment. They discuss a method that enables design rationale to be added within 3D annotations specified by the designers. This allows design and documentation to be done at the same time.
In that case, 3D annotations may also include general text notes or hyperlinks to other sources of information such as textual documents, figures, spreadsheets and URLs. Although the system avoids adding a new application to the organization while allowing design rationale to be captured to some extent, it limits its application to one specific CAE environment. DRed is an IBIS-based software tool that allows engineers to record their rationale as the design proceeds [10]. DRed represents design rationale in a document as a graph of nodes linked with directed arcs. The nodes are chosen from a predefined set of element types. Research on DRed [18] implemented an extension to enable the collaborative annotation of many types of non-DRed design documents, so that users can create bidirectional hyperlinks between selected locations in a range of external document types and DRed elements. The extension enables, for instance, linking a DRed element (in the DRed environment) to a specific range of cells in Excel, and provides an image of the cells in demo mode in DRed. As can be concluded from the above, it would be of high value for engineers to have a tool that keeps track of existing design rationale and presents it in a format that is contextually adapted and easily accessible within the digital environment the engineer is working in at the moment. It should also be possible for the engineers to capture design rationale within the different tools they use in their daily work.
3. Proposed method - information model and system architecture

As mentioned, there is a high level of variety and diversity in design knowledge; classifying and structuring this knowledge is therefore a necessity in order to make it easy to access. On the other hand, implementing new tools and applications might have
J. Johansson et al. / Design Rationale Management – A Proposed Cloud Solution
209
some drawbacks, such as increased complexity of the existing digital environment. This work aims to develop a method enabling instant capture and access of design rationale in different digital contexts. A case has been selected for the purpose of performing experimental work and, at the same time, testing and evaluating the proposed method in industry as a proof of concept. Emphasis is put on the design tools used at the case company, and a working pilot system is described at the end of the paper. The most commonly used tools at the case company are SolidWorks, Microsoft Word and Excel, and they are selected as examples for an information model (see Figure 1) that constitutes the core of the proposed method. The information model consists of seven object classes, of which Design Rationale, Description, Connection Group, and Design Rationale Connection are general, while Word Rationale, Excel Rationale, and SolidWorks Rationale target specific software applications. Basically, the information model states that a Design Rationale consists of a set of Connection Groups and a set of Descriptions. A Connection Group in turn consists of a set of Design Rationale Connections, each of which is a Word Rationale, an Excel Rationale, or a SolidWorks Rationale (Design Rationale Connection is an abstract class, and leaf classes have to be created for every software application). When using the information model, the Design Rationale Connections carry information about where the actual design content is stored. These connections can be viewed as HTML hyperlinks, which can point to a specific file on the hard drive or a web page at a certain URL. This is a common solution presented by others and already supported by some software, although only in one direction. In this work, however, these connections can be more specific than that, pointing to a specific range of cells in Excel, a specific feature or dimension in SolidWorks, or a certain bookmark in Word.
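The seven object classes just described can be sketched as Python dataclasses. Class and attribute names follow Figure 1 of the paper; the example instance at the bottom (file names, the dimension name, the wiki URL) is invented for illustration and does not come from the actual pilot system.

```python
# Minimal sketch of the Figure 1 information model as Python dataclasses.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Description:                 # general information about the rationale
    name: str
    uri: str                       # e.g. a wiki page or a PDF document

@dataclass
class DesignRationaleConnection:   # abstract base: one leaf class per tool
    document: str
    name: str

@dataclass
class WordRationale(DesignRationaleConnection):
    bookmark: str                  # a bookmark inside the Word document

@dataclass
class ExcelRationale(DesignRationaleConnection):
    worksheet: str
    range: str                     # e.g. "E4:E10"

@dataclass
class SolidWorksRationale(DesignRationaleConnection):
    selection: str                 # a feature or dimension in the model

@dataclass
class ConnectionGroup:             # an "advanced hyperlink" between tools
    name: str
    connections: List[DesignRationaleConnection] = field(default_factory=list)

@dataclass
class DesignRationale:
    folder: str
    name: str
    groups: List[ConnectionGroup] = field(default_factory=list)
    descriptions: List[Description] = field(default_factory=list)

# Hypothetical example in the spirit of the Bracket Angle rationale (Section 4)
angle = DesignRationale(
    folder="brackets", name="Bracket Angle",
    groups=[ConnectionGroup("angle", [
        SolidWorksRationale("bracket.sldprt", "angle", selection="D1@Sketch2"),
        ExcelRationale("brackets.xlsx", "angle", worksheet="Sheet1", range="C6"),
    ])],
    descriptions=[Description("bracket selection", "http://wiki.example/brackets")],
)
```

The abstract base class carries only the tool-independent attributes; extending the model to a new application (as the paper proposes) means adding one more leaf class.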
Further, the model supports multi-directional navigation, travelling back and forth between applications. This enables the engineers not just to find a file or chunk of information that would require extensive further navigation; it directly points out related and essential information. The information model can be extended to target other software applications by implementing new types of Design Rationale Connection classes, indicated with dashed lines in Figure 1.
[Figure 1 shows the classes and their attributes: Design Rationale (Folder, Name); Description (Name, URI); Connection Group (Name); Design Rationale Connection (Document, Name); and the system-specific classes MS Word Rationale (BookMark, Document), MS Excel Rationale (Range, WorkSheet), and SolidWorks Rationale (Selection, ModelDocument).]
Figure 1. Class diagram for the Design Rationale system.
The Connection Groups
are used to cluster the Design Rationale Connections into groups that make sense, so that it is possible to group bookmarks in different Word documents, ranges of cells in Excel spreadsheets, and dimensions or features in geometrical models in SolidWorks that are naturally related to each other. Such a group can be viewed and utilized as a highly advanced hyperlink that makes it possible to go back and forth from one connection point to another. So, for instance, when selecting a feature in SolidWorks that is pointed to by a Design Rationale Connection, all other connections in its group can be presented to the user, making it possible to jump to the connected Excel, Word, or other SolidWorks entities. To make Connection Groups meaningful, they are stored in a Design Rationale object, which can contain a number of such groups that naturally belong together. The Design Rationale object can also have Descriptions adding general information about the connected pieces of design knowledge. In the pilot system, web pages and PDF documents are pointed to as Descriptions, but other types of documents could be added as descriptions if beneficial.
3.1. System architecture
A database that keeps track of all the design rationale with descriptions and connections can be used as the backbone of the design rationale system, but in order to present available design rationale to the user when different entities are selected in the targeted software applications, a more accessible user interface has to be developed. Hence, it is suggested to develop, as far as possible, add-ins to the targeted software applications in a standardized way, so that users feel comfortable and recognize the system and the functions it stands for. The standardized add-in user interface should offer not only functionality for displaying available design rationale, but also for making new connections, and thus capture design rationale in an accessible way.
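The "advanced hyperlink" behaviour of a Connection Group can be sketched as a small lookup: given the entity the user just selected, return every other connection in the same group as a jump target. The data layout and all names below are simplifications invented for illustration, not the actual schema.

```python
# Sketch of Connection Group navigation: selecting one connected entity
# offers all of its group siblings as jump targets.
def jump_targets(groups, selected):
    """Return every connection grouped with `selected`, except itself.

    `groups` maps a group name to a list of connection descriptors;
    a descriptor is a (tool, document, location) tuple.
    """
    for members in groups.values():
        if selected in members:
            return [c for c in members if c != selected]
    return []

groups = {
    "bracket angle": [
        ("SolidWorks", "bracket.sldprt", "D1@Sketch2"),
        ("Excel", "brackets.xlsx", "Sheet1!C6"),
        ("Word", "report.docx", "bookmark:angle"),
    ],
}

# Selecting the SolidWorks dimension offers the Excel cell and the
# Word bookmark as jump targets:
print(jump_targets(groups, ("SolidWorks", "bracket.sldprt", "D1@Sketch2")))
```

In the proposed architecture this lookup would run against the central database whenever an add-in reports a new selection.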
To make the system complete, it is also beneficial to develop an editor in which all the design rationale can be overviewed and managed. The editor can include numerous functions, of which the most fundamental include structuring, editing, versioning, deleting, sharing, analyzing and accessing design rationale. An overview of the proposed system architecture is presented in Figures 2 and 3. The design rationale system could be implemented as local installations on engineers' desktop and laptop computers, but it is preferably installed as a centralized system on a server, as a cloud solution. In the former case, the design rationale serves engineers as local scrapbooks for keeping their own thoughts and connections, representing the engineers' individual knowledge about the product. In the latter case, the design rationale stored on the extended enterprise's central server (in the cloud) represents the corporate knowledge about its products. A cloud solution would also mean developing client apps (the add-ins), utilizing user identification facilities, and implementing facilities for measuring the usage of the system [19]. It is then also possible to interactively communicate selections of design rationale to other users in real time across the globe (clicking a certain cell in an Excel sheet would highlight the corresponding dimension in SolidWorks on a co-worker's screen).
Figure 2. The Design Rationale objects link to a variety of connections.
Figure 3. The Design Rationale system is distributed.
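The real-time behaviour described above (a selection by one user highlighting the corresponding entity on a co-worker's screen) can be sketched as a tiny, synchronous publish/subscribe hub. A real system would use e.g. websockets between client add-ins and the cloud server; the class and callback names here are invented for illustration.

```python
# Minimal synchronous sketch of broadcasting a design rationale
# selection from one client to every other connected client.
class RationaleHub:
    def __init__(self):
        self.clients = []           # callables invoked on each broadcast

    def subscribe(self, callback):
        self.clients.append(callback)

    def select(self, sender, connection):
        """Tell every client except the sender that `connection` was selected."""
        for client in self.clients:
            if client is not sender:
                client(connection)

hub = RationaleHub()
highlights = []

def solidworks_client(connection):
    # e.g. highlight the dimension linked to the selected Excel cell
    highlights.append(("SolidWorks highlights", connection))

def excel_client(connection):
    highlights.append(("Excel highlights", connection))

hub.subscribe(solidworks_client)
hub.subscribe(excel_client)

# A co-worker clicks cell E6 in Excel; SolidWorks highlights it remotely.
hub.select(excel_client, "Sheet1!E6")
print(highlights)   # only the SolidWorks client reacted
```

Excluding the sender keeps the originating application from re-processing its own selection event.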
4. Proof of concept - a pilot system
The company selected for the case study is Thule Sweden, a manufacturer of roof racks for about 95% of all car models on the global market, both as a subcontractor in the automotive industry and as a retailer under its own brand. A roof rack is developed based on the collected geometrical data of the car's roof. It contains two major parts, called footpad and bracket, necessary to mount the roof rack on the car's roof without the need for a rail. The roof rack product is adapted to new cars by slightly redesigning the footpads and the brackets. As the number of developed brackets (more than 1200) and footpads (more than 500) keeps increasing in the company (due to the need for redesign for adaptation to new cars), the automation of retrieving previously developed solutions has been a big leap towards delivering the product cheaper and faster. As an example, reusing existing brackets cut the overall lead-time by up to 40% annually during 2012 and 2013. There are a number of models and rules implemented in the automated retrieval system, and their rationale had to be captured to support future maintenance, expansion and reuse. The rationale of the implemented models and rules was collected and structured on a wiki page during system development. Then a pilot system, based on the method described in this paper, was introduced. Single or multiple
Figure 4. The Excel and Solid Works interfaces. Changing selection in one of the applications affects selections in the other application, which can run on the same or another computer.
SolidWorks dimensions were associated with specific selections of information on the wiki page and with single cells or ranges of cells in the Excel sheet containing information about all the brackets, see Figure 4 (where the wiki page is displayed as an add-in at the bottom of the Excel application). A snapshot of the bracket modelled in SolidWorks is presented to the right in Figure 4. When the designer models the product in the SolidWorks environment, a task pane (called Design Rationale) appears on the right side of the design window. The user can select any element of the product, such as an assembly, part or dimension annotation, and add a design rationale element. As an example, in Figure 4 a design rationale labelled Bracket Angle was created, attaching the highlighted dimension to the marked cell in the shown Excel spreadsheet. It is also possible to add files or web pages to the created design rationale element; in this specific case a wiki HTML page for bracket selection was targeted. The relationships (Design Rationale Connections and Connection Groups, see Figure 1) between the files and links attached to a design rationale are shown in the lower task pane, called Connections. In the same way, when opening MS Excel, a similar task pane including previously created design rationale elements and their attachments will be present, no matter in which tool or on what computer they were originally created. In other words, design rationale elements can be created in SolidWorks, MS Excel or Microsoft Word, and be presented in the other tools and on other computers as well. The documentation, in this case an MS Excel workbook, includes a table with crash-testing information about all the brackets, but might also contain design tables, design alternatives, design argumentations and even figures describing the part's dimensions and valid tolerances in detail. If the engineer knows what s/he is looking for, it is possible to browse for the design rationale; the Browse tab is then active.
If the Active Selection tab is active, then available selections pop up whenever new selections are made in the applications; e.g. when selecting cell E6 in Excel in Figure 4, the Bracket Selection design rationale pops up automatically.
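The Active Selection behaviour above can be sketched as a matching step: when the user selects a cell, any rationale whose connected Excel range covers that cell is popped up (selecting E6 pops up Bracket Selection). The range parsing below is a deliberate simplification (single-letter columns only), and the data is invented for the example.

```python
# Sketch of "Active Selection": pop up the rationale whose connected
# cell range covers the user's current selection.
def cell_in_range(cell, rng):
    """True if `cell` (e.g. "E6") lies inside `rng` (e.g. "E4:E10")."""
    def parse(ref):
        return ref[0], int(ref[1:])     # single-letter columns assumed
    if ":" not in rng:
        return cell == rng
    start, end = (parse(r) for r in rng.split(":"))
    col, row = parse(cell)
    return start[0] <= col <= end[0] and start[1] <= row <= end[1]

def rationale_for(selection, connections):
    """Names of rationale whose connected range covers the selection."""
    return [name for name, rng in connections if cell_in_range(selection, rng)]

connections = [("Bracket Selection", "E4:E10"), ("Bracket Angle", "C6")]
print(rationale_for("E6", connections))   # -> ['Bracket Selection']
```

A production implementation would get the selected cell from the spreadsheet's add-in event rather than as a string argument.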
5. Conclusion
The work presented in this paper focuses on a method to enable instant and contextual capture and access of design rationale. The method supports multi-directional navigation, travelling back and forth between applications. This enables the engineers not just to find a file or chunk of information that would require extensive further navigation; it directly points out related and essential information. An industrial case study was conducted in which three software applications common in engineering departments (SolidWorks, Microsoft Excel, and Microsoft Word) were integrated using a cloud architecture to share and manipulate information in an interactive and distributed way. Also, a wiki including general descriptions, aiming to cover the part of the information that is not recorded in other tools, was integrated into the cloud. The major advantages of such an integrated system are that the design rationale is always up to date, and that communication of current design knowledge is enhanced, since selections of connected design rationale can be displayed on several computers at the same time. So far, the pilot system is implemented as local installations on engineers' computers, but in order to evaluate the functionality of the system, a central server should be used by a group of engineers. In such a scenario, there is a need for a database to facilitate communication among the knowledge sources, and also an editor to more efficiently
manage and edit the created design rationale. The editor can also enable searching for and finding the required design rationale. Future work includes adding more Design Rationale Connection classes for other software applications (e.g. Microsoft PowerPoint, since it contains important discussions and decisions from meetings, as well as other CAD systems and engineering applications), investigating communication among system users, and maintenance of the system.
Acknowledgments
The authors express their gratitude to Thule Group for technical cooperation and to the Vinnova Foundation for financial support.
References
[1] S. Kim, R. Bracewell, and K. Wallace, Improving design reuse using context, in: 16th International Conference on Engineering Design (ICED'07), Design Society, 2007.
[2] H. Wang, A.L. Johnson, and R.H. Bracewell, The retrieval of structured design rationale for the re-use of design knowledge with an integrated representation, Advanced Engineering Informatics, 26(2012), 251-266.
[3] K. Mohan, et al., Improving change management in software development: Integrating traceability and software configuration management, Decision Support Systems, 45(2008), 922-936.
[4] S.K. Chandrasegaran, et al., The evolution, challenges, and future of knowledge representation in product design systems, Computer-Aided Design, (2012).
[5] B. Hicks, et al., A framework for the requirements of capturing, storing and reusing information and knowledge in engineering design, International Journal of Information Management, 22(2002), 263-280.
[6] J. Johansson, A flexible design automation system for toolsets for the rotary draw bending of aluminium tubes, in: DETC2007: ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Las Vegas, Nevada, ASME, New York, 2007.
[7] S.B. Shum and N. Hammond, Argumentation-based design rationale: what use at what cost?, International Journal of Human-Computer Studies, 40(1994), 603-652.
[8] D. Falessi, G. Cantone, and M. Becker, Documenting design decision rationale to improve individual and team design decision making: an experimental evaluation, in: Proceedings of the 2006 ACM/IEEE International Symposium on Empirical Software Engineering, ACM, 2006.
[9] S.D. Eppinger, et al., A model-based method for organizing tasks in product development, Research in Engineering Design, 6(1994), 1-13.
[10] R. Bracewell, et al., Capturing design rationale, Computer-Aided Design, 41(2009), 173-186.
[11] A. Tang, Y. Jin, and J. Han, A rationale-based architecture model for design traceability and reasoning, Journal of Systems and Software, 80(2007), 918-934.
[12] F. Elgh and M. Poorkiany, Supporting traceability of design rationale in an automated engineer-to-order business model, in: Proceedings of Design 2012, May 21-24, 2012, Dubrovnik, Croatia, 2012.
[13] W.C. Regli, et al., A survey of design rationale systems: approaches, representation, capture and retrieval, Engineering with Computers, 16(2000), 209-235.
[14] D. Baxter, et al., An engineering design knowledge reuse methodology using process modelling, Research in Engineering Design, 18(2007), 37-48.
[15] F. Elgh and M. Cederfeldt, Documentation and management of product knowledge in a system for automated variant design: a case study, in: New World Situation: New Directions in Concurrent Engineering, Springer, 2010, 237-245.
[16] M. Lundin, Knowledge Retention and Reuse.
[17] S. Sandberg, et al., Supporting engineering decisions through contextual, model-oriented communication and knowledge-based engineering in simulation-driven product development: an automotive case study, Journal of Engineering Design, 24(2013), 45-63.
[18] R. Bracewell, et al., Extending design rationale to capture an integrated design information space, in: 16th International Conference on Engineering Design (ICED'07), Design Society, 2007.
[19] D. Rountree and I. Castrillo, The Basics of Cloud Computing: Understanding the Fundamentals of Cloud Computing in Theory and Practice, Elsevier Science, Burlington, 2013.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-215
Semantic Modeling of Dynamic Extended Companies
Kellyn Crhis TEIXEIRA a,1 and Milton BORSATO b,2
a Federal University of Technology – Parana, Av. Sete de Setembro 3165, Curitiba, PR 80230-901, Brazil
b Federal University of Technology – Parana, Av. Sete de Setembro 3165, Curitiba, PR 80230-901, Brazil
Abstract. The manufacturing scenario is going through intense transformations. Mass manufacturing has converged to mass customization, and the globalized world now allows a wide repertoire of solutions for the production of goods and services. In this new environment, companies are using new technologies and new non-vertical manufacturing structures. Therefore, companies increasingly rely on resources controlled by other actors. They must be able to combine resources in new ways, either to access those that are additionally required or to make their own resources available to third parties. One of the challenges of manufacturing in the twenty-first century is the possibility of forming extended companies, able to respond with agility and adaptability, which can be created on demand from the technical and strategic alignment of various actors. The extended enterprise, formed by several partners, needs to be dynamically formed in order to be agile and adaptable. Such companies would be: (i) made possible through sharing and dissemination of information; (ii) able to design, optimize and manufacture products quickly and inexpensively; (iii) responsive to all technical and business determinants; and (iv) assessed and certified for guaranteed performance. This work intends to present a solution for the composition of the extended enterprise, dynamically formed to take advantage of market opportunities quickly and efficiently. The proposed model is constructed and verified in the context of the oil and gas industry.
The main deliverables of the present work are: (i) a report on the state of the art regarding supply-chain management, business virtualization, development and approval of suppliers, model-based enterprises, and standards for information integration; (ii) the design of a model of dynamic supply chain formation; and (iii) verification of the model through its application in a case study in the oil and gas industry.
Keywords. Model of dynamic supply chain formation, semantic modeling, supply-chain management, model-based enterprises.
Introduction
Rapid changes and complexities in business environments have stressed the importance of interactions between partners and competitors, making supply chains the most important element of contemporary business environments [1]. One of the
1 Corresponding author. Tel.: +55-41-3248-6744; mobile: +55-41-9631-2656; e-mail: [email protected].
2 Corresponding author. Tel.: +55-41-3029-0609; e-mail: [email protected].
216
K.C. Teixeira and M. Borsato / Semantic Modeling of Dynamic Extended Companies
challenges of twenty-first century manufacturing is the possibility of forming extended companies, able to respond with agility and adaptability, which can be created on demand from the technical and strategic alignment of various actors [2]. The manufacturing scenario is undergoing transformations. Mass manufacturing has converged to mass customization, and the globalized world offers a wide repertoire of solutions for the production of goods and services. Due to the need of companies to expand their production capacity, in manufacturing as well as in product development, companies are using new technologies and new non-vertical manufacturing structures. Among these, companies are seeking alternatives such as outsourcing of services, internationalization and virtualization of products. The term virtual enterprise has been used in the articulation of strategy for global manufacturing companies of the twenty-first century [3]. This work is part of a framework project called the Intelligent Manufacturing Program. This program is based on the survey named Imperatives of Intelligent and Integrated Manufacturing, proposed by IMTI, which presents 10 demands. One of them is related to suppliers: demand number 7, Model Ready Supply Network [2], which refers to the definition of a supplier model profile covering all aspects of the company [4]: "Imagine an extended enterprise with great agility and adaptability that can be formed on demand from numerous best-in-class suppliers. Enabled by pervasive information sharing, capable of rapidly and cost-effectively designing, optimizing and manufacturing products, responsive to all defined business and technical drivers, evaluated and certified for assured performance." The challenges of demand number 7 are:
• Configuring the correct solution dynamically, based on correct requirements;
• Providing security and protection across the enterprise;
• Supporting data and information needs;
• Not only being compatible with, but obeying, the established standards;
• Establishing clear communication for close collaboration;
• Making the extended enterprise integrated and interoperable.
In the knowledge-based economy, whoever owns knowledge and can create knowledge from existing knowledge will enjoy absolute advantages over the business competition. Many studies have focused on how knowledge sharing between organizations within a strategic alliance and the resulting competitive advantages are related. However, a dilemma between cooperation and competition has also been found among knowledge-sharing organizations within a strategic alliance. Organizations with cooperative relationships will be well positioned to develop sustainable competitive advantages. Therefore, knowledge must be exchanged and shared effectively and efficiently between all enterprise members collaborating on product development [5]. So that suppliers may be prepared to meet market demand, a network of suppliers that supports the sharing of information, aiming at performance guarantees, is required. Seeking competitive advantage, many companies have focused on their supply chains and consequently thought of ways to improve supply chain management [3]. The integration of key business processes from end user through original supplier
K.C. Teixeira and M. Borsato / Semantic Modeling of Dynamic Extended Companies
217
that provides products, services and information that add value for customers and other stakeholders is defined as SCM (Supply Chain Management) [6]. Another definition describes the supply chain (SC) as a network of facilities, distribution options, and the approaches used to efficiently integrate suppliers, manufacturers and distributors, and to perform the functions of procurement of materials, transformation of these materials into intermediate and finished products, and distribution of these products to customers in the right quantities, in the right places and at the right time, in order to meet the required service level with minimal cost [7]. Although there is extensive literature on supply chain interactions, the research perspectives and approaches of these proposals vary widely [8]. Some investigations have focused on the impact of information sharing on product quality, but there is still scope for studies to clarify exactly how and what information should be shared, and its beneficial effect on quality improvement. Researchers relating ontologies to the supply chain have not yet built a sufficiently wide theoretical basis relevant to SCM [6]. The concept of ontology is applied to systematically document knowledge shared on issues and problems in the field of supply chains. Ontology development for the supply chain is a collaborative process that crosses the individual organizational boundaries of its members, comprising knowledge capture, assembly, storage and dissemination [7]. In this supply chain context there is a need for the exchange of information or knowledge between firms. Meanwhile, masses of information or knowledge are scattered in various formats across different enterprise systems, leading to problems of semantic interoperability between existing business information systems [8]. The goal of this paper is to contribute to the formation of the supply chain, featuring the design of a model of dynamic supply chain formation.
The verification shall be performed on a case in the oil and gas industry. The model for the supplier network shall enable the configuration of a solution that meets the technical requirements, quality and delivery time needed for the type of market in which it will be applied. Section 1.1 presents networks of suppliers, demonstrating forms of partnership and the need for integration between suppliers; Section 1.2 provides the context of semantic interoperability.
1. Theoretical Background
Two subjects are considered important to the conception of this work: Supply Network and Semantic Interoperability. They are described in the next sections.
1.1. Supply Network
With the advent of global manufacturing, companies can no longer be seen in isolation. Enterprise collaboration is no longer only between two partners; it has evolved into what has been described as "business networks", in the form of partnerships in the supply chain, extended enterprises and virtual enterprises [8].
1.1.1. Extended Dynamic Network
The extended dynamic network is a concept that evolves from the notion of many supply chains interacting together in an environment that is supported by Information and Communication Technologies. The Information and Communication Technologies
218
K.C. Teixeira and M. Borsato / Semantic Modeling of Dynamic Extended Companies
element facilitates the interaction of the previously discrete supply chains, creating the extended network. It also enables the relationships between the entities to change, thus allowing a dynamism that would not have been possible in the past and giving rise to the concept of the extended dynamic network [9]. Representing the involvement of customers and suppliers in product development in a business model enables a single integrated view of the relationship between customers and suppliers; in other words, it means analyzing the product development process with the concept of the "extended enterprise". This is a newer vision, and a need of the present day, where inter-institutional arrangements form an increasingly important aspect of the business environment, especially in the product development process and its strong trend toward global development with activities scattered across several parts of the world [10]. As pointed out by Mattos and Laurindo [11], among the various concepts related to the extended enterprise one can mention co-makership, which means a long-term relationship with a limited number of suppliers on the basis of mutual trust. Co-makership allows partners to work together to add value to products, and to develop simplified means of ordering and billing, aiming to improve quality and reduce costs for all parties. A characteristic pointed out for the extended company is that it best suits organizations in which a dominant "main" company "extends" its boundaries to all or some of its suppliers. Creating an extended enterprise is not only a matter of integrating suppliers and customers; it involves a complex alignment of processes, technology architecture and corporate culture.
1.1.2.
Networks between companies
In accordance with Sheresheva and Kolesnik [12], companies rely increasingly on resources controlled by other actors, and are thus able to combine resources in new ways, to obtain additional resources and to dispose of superfluous resources. The era of mass manufacturing is converging to the era of mass customization, and in this new environment companies are using new technologies and new manufacturing concepts [4]. Among these strategies is the formation of networks between companies, a current practice which aims to ensure survival and competitiveness, thereby creating a new organizational architecture and innovating in the formation of relationships between companies. In a dynamic network there is an intense and variable relationship between the companies themselves. It is the most flexible and open network model, as well as the one that best fits the conformation of virtual enterprises, in which each participant contributes their core competencies so that the network as a whole has significant competitive advantages [13].
1.1.3. Outsourcing
As indicated by Bhalla and Terjesen [14], advances in information and communication technologies have enabled new companies to seek outsourcing of value creation activities, such as software development, engineering and research & development. Outsourcing is a strategic move that involves either contracting out activities that could not previously be completed in-house, or replacing internal activities by transferring them, in part or in whole, to a third-party vendor that performs the task, function or process.
K.C. Teixeira and M. Borsato / Semantic Modeling of Dynamic Extended Companies
219
New companies that outsource to highly integrated suppliers tend to gain access to a wider network of suppliers, achieve best-in-class operational knowledge, and avoid supplier opportunism, as they face lower levels of relationship-specific investments. This is crucial for new companies, as they face adverse initial resource and capacity barriers, such as a lack of talent and operational know-how, arising from the liabilities of newness and smallness. In response to these difficulties, new businesses must mobilize resources in unusual ways while economizing on the need for resources. The formation of relationships with suppliers is attractive for new businesses, as it opens the possibility of exploring the competencies of suppliers [14].
1.1.4. Internationalization
Another option to expand operations is internationalization, a strategy that has been the focus of international business and global strategy. Internationalization is the process by which firms increase their involvement in operations across borders. In this context the Uppsala theory has been identified as the most important model, because it focuses on the incremental internationalization process and suggests that firms choose markets sequentially according to their perceived proximity, i.e., with a low degree of distance [15].
1.1.5. Supply chain
Relationships with suppliers can support the overall corporate strategy. A study presented a model in which the supply chain is involved both in the development and in the sustainability of competitive strategy, as challenges that go beyond the operational level. This study was based on direct interviews with enterprise-level and supply chain managers. It was shown that the basic competitive strategy of an enterprise must be logically integrated with the strategic supplier approach in order to ensure the consistency of the social and economic elements [16]. According to Deligonul et al.
[16], the supply chain of the company should be considered a good that deserves long-term commitment and is worthy of investment that is difficult to reverse. Aligning the supply chain requires policies that nurture the innovative capacity of the system in order to eliminate substandard practices. By improving managers' understanding of suppliers' conditions through greater interaction and exchange of knowledge, a partner can make better use of its own resources invested in supplier relationships, as well as activate and utilize resources for the mutual benefit of the supplier party. In these relationships between suppliers, communication problems can occur between humans, between systems, or between humans and systems, caused by the lack of semantics that would allow humans and systems to understand each other exactly and commonly [17].
1.2. Semantic Interoperability
Specifically, ontology is used to aid communication between human agents, to achieve interoperability between computer systems, or to improve the process quality of software engineering systems. An ontology includes definitions of concepts and an indication of how concepts are inter-related, which collectively impose a structure on the domain and constrain the possible interpretations of terms. In this context a semantic information model is necessary, i.e. an information model in which the meaning of the data can be interpreted from the model itself, without the need to
K.C. Teixeira and M. Borsato / Semantic Modeling of Dynamic Extended Companies
consult a meta-model or external documentation. A semantic information model is written in a formal language, such as formal English, using a formal dictionary-taxonomy, as well as formal expression types (relations) and a formal syntax [17]. According to Gellish [18], the method defines a universal data structure which allows all databases and messages to have the same data structure and to use the same language as their defining initial content. This allows software to interpret the semantic expressions in multiple databases and messages, so that different databases can be treated as if they were one distributed database. Interoperation of databases enables verification and consistency management, as well as combination of their content.
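As a minimal illustration of such a formal model (the class and property names below are hypothetical, chosen for this supplier domain, and are not taken from the paper), a fragment could be expressed in OWL's Turtle syntax:

```turtle
@prefix :     <http://example.org/supply#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Concepts of the domain
:Supplier      a owl:Class .
:Company       a owl:Class .
:Certification a owl:Class .

# Relations constrain how the concepts may be interpreted
:suppliesTo a owl:ObjectProperty ;
    rdfs:domain :Supplier ;
    rdfs:range  :Company .

:holdsCertification a owl:ObjectProperty ;
    rdfs:domain :Supplier ;
    rdfs:range  :Certification .
```

Because the domain and range axioms are part of the model itself, software that reads this fragment can interpret the meaning of the data without consulting external documentation.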
2. Methodological Aspects
This work has been structured in three phases: (i) state-of-the-art research; (ii) model construction; (iii) model validation. In the first phase, research related to supply chain management was carried out, resulting in a mental map of the types of information shared in supply chains. Further research addressed virtualization, resulting in a list of the types of information shared in virtualization. The last research topic was the development and homologation of suppliers, resulting in a list of requirements for selection and certification. In addition to these topics, semantic interoperability was studied, research related to model-based enterprises was carried out, and a diagram was elaborated. The subject of standards for information integration was also explored, yielding a list of such standards. In the second phase, named Model Construction, a scenario was first described, set in the oil and gas industry and considering the relation with suppliers, the requirements, the needs and a desired future view. The construction of the model definition in the form of an ontology then follows two steps: Preliminary Model and Detailed Model. Preliminary Model: in this step the requirements will be determined, the architecture modularization will be defined, and the components of the ontology will be specified. The Detailed Model comprises Details of Modules and Integration of Modules. Details of Modules: with the use of the Protégé software, the modules of the ontology will be built, detailing classes, subclasses, attributes, properties and instances. Integration of Modules: with the use of Protégé, the modules will be merged and a single ontology will be represented, showing the fusion of the modules. The chosen tool is Protégé.
The Protégé system is an environment for knowledge-based systems development that has been evolving for over a decade. Protégé began as a small application designed for a medical domain (protocol-based therapy planning), but has evolved into a much more general-purpose set of tools. More recently, Protégé has developed a world-wide community of users, who themselves are adding to Protégé's capabilities and directing its further evolution. Protégé is a tool that helps users build other tools that are custom-tailored to assist with knowledge acquisition for expert systems in specific application areas [19]. The chosen language is OWL, an ontology language for the Semantic Web developed by the World Wide Web Consortium (W3C) Web Ontology Working Group. OWL was primarily designed to represent information about categories of objects and how objects are interrelated—the sort of information that is often called an
ontology. OWL can also represent information about the objects themselves—the sort of information that is often thought of as data [20]. The third phase deals with the validation procedures for the model definition in the form of an ontology and is divided into two stages: Definition of Proof of Concept and Application of the Model. Definition of Proof of Concept: in this step the selection criteria for choosing the case study will be listed, the product or service for testing will be selected based on the collected criteria, and the questions that the ontology should answer will be determined. Application of the Model: this step covers the preparation for the implementation of the model: with the use of Protégé, queries will be created, and the data that will populate the model will be prepared through the created queries. To perform the test, the queries will be processed. As results, the compiled query answers will be obtained, and a final report of the knowledge given by the query responses will be generated. Finally, in the analysis of results, it will be checked whether the objectives were achieved with the created ontology. A report confirming that the objectives of the model definition were met will be generated.
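For orientation, the queries created in Protégé could take a SPARQL form like the following sketch; the ontology IRI, class names and property names are assumptions made for illustration, not part of the authors' model:

```sparql
PREFIX : <http://example.org/supply#>

# Which suppliers hold the required certification
# and can meet the project deadline?
SELECT ?supplier
WHERE {
  ?supplier a :Supplier ;
            :holdsCertification :ISO9001 ;
            :maxLeadTimeDays    ?lead .
  FILTER (?lead <= 90)
}
```

A query of this shape corresponds to the competency questions the ontology should answer, e.g. supplier selection against deadline and certification requirements.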
3. Final Remarks
The oil and gas industry must be prepared to meet the needs of its clients, and it is necessary to be competitive in deadline, quality and price while following the applicable requirements and standards. To achieve this performance, the relation with suppliers must be a partnership and the formation of the chain must be dynamic. For suppliers to be prepared to meet market demand, a network of suppliers that supports the sharing of information to guarantee performance is required. The dynamic formation of supplier networks is one of the challenges of the 21st century. This work will use an ontology, and the validation of the model will be carried out in a company operating in the oil and gas sector. The present work is under development. The proposed model, in the form of an OWL ontology, will contribute scientific knowledge for current researchers in the area and for the oil and gas industry. The expected result is a model of dynamic supply chain formation, able to be applied during different phases of a project, addressing questions such as deadline, cost and product requirements. After this work is concluded, it will be possible to test it in the oil and gas industry on many real products and services in order to perform a dynamic formation of suppliers. For future work, it is suggested to implement ontology-based applications that can use data provided by CAD systems and other sources. In addition, other kinds of industries with particular requirements can be modelled and tested to assess the adaptability of the current approach.
References
[1] T.L. Friesz, I. Lee, C.-C. Lin, Competition and disruption in a dynamic urban supply chain, Transportation Research Part B: Methodological 45 (2011) 1212-1231.
[2] IMTI, (2014).
[3] Z. Lotfi, M. Mukhtar, S. Sahran, A.T. Zadeh, Information Sharing in Supply Chain Management, Procedia Technology 11 (2013) 298-304.
[4] P. Leitão, F. Restivo, An agile and cooperative architecture for distributed manufacturing systems, IASTED, 2001.
[5] T.-Y. Chen, Knowledge sharing in virtual enterprises via an ontology-based access control approach, Computers in Industry 59 (2008) 502-519.
[6] T. Grubic, I.-S. Fan, Supply chain ontology: Review, analysis and synthesis, Computers in Industry 61 (2010) 776-786.
[7] C. Chandra, A. Tumanyan, Organization and problem ontology for supply chain information support system, Data & Knowledge Engineering 61 (2007) 263-280.
[8] Y. Lu, H. Panetto, Y. Ni, X. Gu, Ontology alignment for networked enterprise information system interoperability in supply chain environment, International Journal of Computer Integrated Manufacturing 26 (2013) 140-151.
[9] I. Hunt, B. Wall, H. Jadgev, Applying the concepts of extended products and extended enterprises to support the activities of dynamic supply networks in the agri-food industry, Journal of Food Engineering 70 (2005) 393-402.
[10] H. Rozenfeld, D.C. Amaral, Requisitos para a Criação de Modelos de Referência para o Processo de Desenvolvimento de Produto Considerando a Participação de Fornecedores, Congresso Brasileiro de Engenharia Mecânica, Águas de Lindóia, SP, 1999.
[11] C.A. Mattos, F.J.B. Laurindo, Integração Eletrônica e a Empresa "Estendida": Estudos de Caso, XIII SIMPEP, Bauru, SP, Brasil, 2006.
[12] M.Y. Sheresheva, N.A. Kolesnik, Stochastic perspective of industrial distribution network processes, Industrial Marketing Management 40 (2011) 979-987.
[13] M.E.L. Olave, J. Amato Neto, Redes de cooperação produtiva: uma estratégia de competitividade e sobrevivência para pequenas e médias empresas, Gestão & Produção 8 (2001) 289-318.
[14] A. Bhalla, S. Terjesen, Cannot make do without you: Outsourcing by knowledge-intensive new firms in supplier networks, Industrial Marketing Management 42 (2013) 166-179.
[15] C. Prange, S. Verdier, Dynamic capabilities, internationalization processes and performance, Journal of World Business 46 (2011) 126-133.
[16] S. Deligonul, U. Elg, E. Cavusgil, P.N. Ghauri, Developing strategic supplier networks: an institutional perspective, Journal of Business Research 66 (2013) 506-515.
[17] D. Kang, J. Lee, S. Choi, K. Kim, An ontology-based enterprise architecture, Expert Systems with Applications 37 (2010) 1456-1464.
[18] Gellish, What is a Semantic Information Model, (2014).
[19] J.H. Gennari, M.A. Musen, R.W. Fergerson, W.E. Grosso, M. Crubézy, H. Eriksson, N.F. Noy, S.W. Tu, The evolution of Protégé: an environment for knowledge-based systems development, International Journal of Human-Computer Studies 58 (2003) 89-123.
[20] I. Horrocks, P.F. Patel-Schneider, F. Van Harmelen, From SHIQ and RDF to OWL: The making of a web ontology language, Web Semantics: Science, Services and Agents on the World Wide Web 1 (2003) 7-26.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-223
Human Expertise as the Critical Challenge in Participative Multidisciplinary Design Optimization - An Empirical Approach
Evelina DINEVA a,1, Arne BACHMANN b, Uwe KNODT b, Björn NAGEL a
a Air Transportation Systems, Deutsches Zentrum für Luft- und Raumfahrt e.V. (German Aerospace Center)
b EIWis, Deutsches Zentrum für Luft- und Raumfahrt e.V. (German Aerospace Center)
Abstract. Research into future air vehicles incorporating novel technologies is characterized by a high number of interacting disciplines which need to be considered. Despite advances in numeric interfacing techniques for participative Multidisciplinary Design and Optimisation (pMDO), it is not well understood how to build a team of specialists who jointly operate shared tools and gain system level insight. This contribution shifts focus to the human MDO participants and their working environment. Three aspects of collaboration are considered: (a) design of cognitive experiments to measure engineering performance in different settings; (b) integration of prior experience through a Lessons Learned process; and (c) the application of the above to the enhancement of the Integrated Design Laboratory (IDL). The emphasis on competence and working environment, rather than on software tools or data, opens opportunities for attractive use cases.
Keywords. Collaborative performance, expert interview, performance measures, empirical research, aircraft design, multidisciplinary design and optimisation (MDO), participative MDO (pMDO).
Introduction Research into future air vehicles incorporating novel technologies is characterized by a high number of interacting disciplines which need to be considered [1,2]. High levels of fidelity are often mandatory. Multidisciplinary Design and Optimization (MDO) provides techniques which interlink heterogeneous analysis tools in distributed workflows to drive the design into optimum solutions. Although numerical approaches have become powerful enough to solve many complex problems of computing, the operation of extensive analysis systems still poses a major challenge today. However, in contrast to numeric interfacing techniques, it is not well understood how to build a team of specialists who jointly operate shared tools and gain system level insight [3,4]. This contribution discusses three critical aspects of collaborative performance. Firstly, experimental investigations are presented that specify relevant psychological and cognitive aspects [5]. Specifically, a tool-box of collaborative performance measures is introduced and first results are shown. Next a Lessons Learned approach at 1
Corresponding Author: Evelina Dineva, Blohmstraße 18, 21079 Hamburg, Germany; E-Mail:
[email protected].
the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt, DLR) is presented that can support the quest to understand human factors in participative MDO. The fundamental knowledge about mechanisms of collaboration is informative for the development of DLR's Integrated Design Lab (IDL). Finally, the IDL is introduced as a working space both to conduct collaborative design in aerospace projects and to investigate collaboration methodologies. Besides visualization and communication, techniques for handling knowledge constitute a central element of the IDL. Experience shows that design in teams of heterogeneous experts requires innovative practices and methodologies in collaboration, taking into account the different stakeholders' views. The paradigm shift toward an emphasis on competence, rather than on tools or data, opens opportunities for a joint system competence with attractive business cases for all stakeholders.
1. Experimental Research
We designed an experimental paradigm to probe how different forms of visualisation can influence engineering performance. The experimental design is based on preceding numerical tests [6] and pilot experiments [5], which helped narrow down a set of input and output parameters for an aircraft design. Experiments are performed via a Graphical User Interface (GUI) task that allows us to (a) control different visualisation versions; (b) simplify the usage for the participants; and (c) track participants' behaviour.
1.1. Participants
A total of 14 engineering students (four female) who had not yet taken any courses in aircraft design were recruited to participate in the experiment. In accordance with the ethics regulations of the German Psychology Associations (DGPs and BDP) [7], each participant provided voluntary informed consent.
1.2. Material
Two high-performance laptops were set up with the necessary software tools. To keep the laboratory investigation close to real work scenarios, the experimental studies are based upon VAMPzero [8,9], the software tool used to study preliminary aircraft design configurations at DLR. Calculations are initialised with a data set comparable to the Airbus A320-type aircraft, provided as a CPACS file (CPACS is an XML-based common language for aircraft design, see [2]). To create new designs, participants can interactively modify control parameters of the A320-type data set and then iterate VAMPzero through a GUI. In a previous study [6], a set of control (i.e. input) and output parameters was narrowed down, as given in Table 1. The control parameters are Bypass Ratio (BPR), Wing Span, and Design Range; the output parameters are Direct Operating Costs (DOC) and Operating Empty Mass (OEM).

Table 1. Input and output parameters of the aircraft design task.

type               name          range         step-size       unit
control parameter  design range  350 - 7000    eight discrete  [km]
control parameter  wing span     14 - 44       continuous      [m]
control parameter  bypass ratio  3.5 - 7       continuous      [-]
output parameter   DOC           4000 - 12000  n/a             [EUR/h]
output parameter   OEM           3 - 130       n/a             [t]
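For orientation, an initialisation data set of the kind described above could look roughly like the following XML fragment. This is a simplified, hypothetical sketch only; real CPACS files follow the official CPACS schema, whose element names differ from these placeholders:

```xml
<!-- Hypothetical, simplified sketch of an A320-type input data set -->
<cpacs>
  <designParameters>
    <designRange unit="km">3500</designRange>
    <wingSpan unit="m">34</wingSpan>
    <bypassRatio>5.0</bypassRatio>
  </designParameters>
</cpacs>
```

In the experiment, participants effectively edit values like these through the GUI rather than touching the XML directly.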
Input and output parameters were displayed either as plots or as tables, depending on the respective experimental condition, Plots or Tables. Two different versions of the GUI were therefore used, both shown in Figure 1. Each GUI is divided into three subpanels:
- input parameters could be set and trials iterated via the Control panel (top);
- the history of input values was displayed in the Input Display panel (middle);
- the resulting output history was shown in the Output Display panel (bottom).
Figure 1. Plots and Tables versions of the experimental GUI.
1.3. Procedure
Participants were randomly assigned to either the Plots or the Tables condition. On each trial, they could manipulate the given control parameters (Table 1) and then run VAMPzero. When the iteration was completed, the input and output parameters were displayed in the respective GUI panels (Figure 1). Based on these results, participants could interactively optimise an aircraft design. All participants were explicitly advised to optimise their designs with respect to both output parameters, DOC and OEM. They had a maximum of 25 trials or 40 minutes, whichever came first, to complete the optimisation.
1.4. Analysis
For each participant, all input and output parameters (Table 1) were recorded per trial. For trials with non-feasible designs, the outputs were set to NaN. A time-stamp for each start of a trial and the duration of the entire experiment were also recorded. Additional data items were also collected but are not the subject of the current analysis. The focus here is on the effect of the quality of information, which is compared between the Plots and the Tables conditions. The effects of the experimental control variable Quality are analysed with respect to the following dependent variables:
- min-DOC: the minimal DOC value that a participant achieved;
- min-OEM: the minimal OEM value that a participant achieved;
- min-COMB: the combined minimum of (OEM + 10 DOC) that a participant achieved among all their values (the factor 10 was selected to offset the different orders of magnitude in which VAMPzero calculates the two parameters);
- Duration: the time a participant needed to finish the design session (out of a maximum of 40 minutes);
- Trials: the number of trials a participant needed to finish the design session (out of a maximum of 25 trials).
The dependent variables are reported in Figure 2 in dedicated subplots from top to bottom. Each subplot shows how the dependent measurements (values on the y-axis) change with respect to the experimental condition (which takes the values Plots or Tables on the x-axis; these are aligned among subplots). Note that for compactness the values of the dependent measures are scaled by a decimal power. Per subplot, the following is shown: (a) values of the experimental variable for each individual participant (narrow light grey bars);
Figure 2. Experimental results: in each subplot a dependent measure is plotted on the y-axis against the experimental condition on the x-axis.
(b) means of the experimental variable among participants in the given condition (broad dark grey bars); (c) the standard deviation per condition (black error bars, attached to the means in (b); the means are connected with a black line); (d) mean and standard deviation values, reported in the labels for each combination of control and dependent variables. No significant effects could be found. This might be due to the small number of participants, only seven per condition. Thus, we are continuing to collect data. A few interesting tendencies can nevertheless be observed. It is worth noting that these tendencies persist when the same analysis is done with an extended data set of seven additional participants who performed the same task but were provided with more information [5].
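The per-participant dependent measures described above can be computed with a short sketch like the following; the trial values are invented for illustration (VAMPzero itself is not involved), and the combined minimum uses the OEM + 10 DOC formula as stated in the text:

```python
import math

# Hypothetical per-trial records for one participant: (DOC [EUR/h], OEM [t]).
# Non-feasible designs are recorded as NaN, as in the paper's analysis.
trials = [
    (9500.0, 62.0),
    (float("nan"), float("nan")),  # non-feasible design
    (8200.0, 58.5),
    (8900.0, 55.0),
]

# Keep only feasible trials (outputs are not NaN)
feasible = [(doc, oem) for doc, oem in trials if not math.isnan(doc)]

min_doc = min(doc for doc, oem in feasible)            # min-DOC
min_oem = min(oem for doc, oem in feasible)            # min-OEM
# min-COMB: the factor 10 offsets the different orders of magnitude
# in which VAMPzero calculates the two parameters
min_comb = min(oem + 10 * doc for doc, oem in feasible)

print(min_doc, min_oem, min_comb)
```

Note that min-DOC and min-OEM may come from different trials, which is why min-COMB is computed over trials rather than from the two minima.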
On average, participants in the Tables condition tend to be faster and to achieve a better combined minimum, min-COMB, than those in the Plots condition. The latter is a surprising finding, as we had anticipated that global patterns would be easier to identify in the Plots condition. It may be that participants in the Plots condition focus more strongly on minimising the DOC values; this, in turn, might be because the DOC parameter is more sensitive to changes of the input parameters than OEM, a difference that is easier to see in plots than in tables. That the Tables condition is faster, with both shorter Duration and fewer Trials, is less surprising. Subjects tend to solve the task more efficiently when using tables because, as some of them report, they simply optimise values and, with exceptions, do not try to conceptualise what the aircraft design is about. This highlights the need to compare the numeric iteration task at hand with a task that requires conceptual thinking. Such an experimental extension will help indicate which types of task are better supported by which visualisation type.
1.5. Conclusion
The focus of this study is to identify the role of context: do different types of visualisation have an impact on how people perform an aircraft design task? To measure the impact of visualisations, the underlying task is intentionally kept simple. The critical observation from the above analysis and the preceding studies [5,6] is that we need to find a way to extend the task to assess conceptual-level thinking. Still, the task needs to be as simple as possible but as complex as necessary. One avenue to better capture the right level of complexity is to investigate how people operate in current participative MDO projects. For this, a lessons learnt process should be implemented in participative MDO projects.
Lessons learnt are based on debriefing project members and aim to capture what worked well in a project and what has failed. To gain a comprehensive empirical approach to the human factors in participative MDO, the experimental studies should be linked to a systematic Lessons Learnt process.
2. Lessons Learnt
The innovative nature of projects entails that project participants constantly gain new insights during a project [10, p. 5]. If they document such new knowledge in an appropriate manner, this knowledge becomes organisational experience value, or "Lessons Learnt", especially for the project participants. Lessons Learnt, of both positive and negative experiences, are derived from project experience and accordingly describe optimisation opportunities, chances or risks. Lessons Learnt can relate to aspects of management (e.g. organisational) and to the project object (e.g. project approach). The main feature of Lessons Learnt is that they are based on practical experience and are not derived theoretically. In the right context, the benefit of Lessons Learnt is therefore very high, but it has to be clear to all project partners that Lessons Learnt are not only a document which has to be created to close a project formally. More than that, Lessons Learnt can be of high value if they are available to other project managers in the same organisation before they start a new project [11, p. 133f.].
DLR has gained a lot of experience with Lessons Learnt. Like many organisations worldwide, DLR uses the standards of the Project Management Institute (PMI) in project management. To close a project, it is necessary to have the Lessons Learnt document finished and accepted. At the same time, Lessons Learnt are an important part of project quality management [10, p. 214]. To increase the value of Lessons Learnt, it is recommended that they be understood as a periodic process which accompanies a project during its life cycle [12, p. 288f.]. The project manager can use the Lessons Learnt methodology throughout the project. A good Lessons Learnt process starts with the project itself. It has to evaluate both the positive and the negative results and incorporate the causes and actions into a standard process for projects. The positive aspects are important to confirm and consolidate the process. The negative experiences are required to identify the relationships between causes and undesired results in order to derive meaningful measures or action plans for the future. Measures are necessary to avoid the repetition of experienced negative results. This implies that such a standard process for Lessons Learnt exists. The knowledge management team of DLR developed an improved standard process for internal Lessons Learnt, captured in Figure 3. If the process is implemented, it raises project quality. For instance, Lessons Learnt methods show that many problems can be observed at an early stage and solved before they affect the project's success.
Figure 3. The DLR Lessons Learnt three step process.
In the first step, the project knowledge is captured (Figure 3). Information about the output data (cost, time, results, etc.) can be found in the project controlling or reporting. Not only the project manager has the relevant information about the project; the project team members have it, too. Besides a debriefing of the project manager, a workshop with all relevant project members should be held periodically in a project. The relevant information must then be summarised and written down in the Lessons Learnt document (step 2). To benefit from the Lessons Learnt in further projects, they need to be disseminated. Therefore, in step 3 the knowledge gained has to be transferred into databases, networks, other projects, etc. Knowledge management techniques can help to share the Lessons Learnt with other employees of the organisation through social collaboration tools such as wikis. After the dissemination of the Lessons Learnt, a final discussion with the customer considering the Lessons Learnt should close the project. This way the Lessons Learnt remain available for other projects.
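The three-step process above (capture, document, disseminate) can be sketched as a minimal data model; the class and field names below are illustrative assumptions, not part of the DLR process definition:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Lesson:
    description: str
    positive: bool   # positive experiences confirm and consolidate the process
    cause: str = ""  # for negative results: the identified cause
    action: str = "" # derived measure to avoid repetition

@dataclass
class LessonsLearntProcess:
    captured: List[Lesson] = field(default_factory=list)
    document: List[Lesson] = field(default_factory=list)
    disseminated: bool = False

    def capture(self, lesson: Lesson) -> None:
        # Step 1: collect insights from debriefings and team workshops
        self.captured.append(lesson)

    def summarise(self) -> None:
        # Step 2: write the relevant lessons into the Lessons Learnt document
        self.document = list(self.captured)

    def disseminate(self) -> None:
        # Step 3: transfer the knowledge to databases, wikis and other projects
        self.disseminated = True

p = LessonsLearntProcess()
p.capture(Lesson("Early interface freeze avoided rework", positive=True))
p.capture(Lesson("Data format mismatch delayed integration", positive=False,
                 cause="no shared schema", action="agree on a format up front"))
p.summarise()
p.disseminate()
print(len(p.document), p.disseminated)
```

The point of the sketch is the ordering: lessons are only useful to other projects once step 3 has made them retrievable outside the originating project.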
This process is most effective if Lessons Learnt from other related or similar projects are reviewed before a project begins. In this way the project team members can learn from other projects, avoiding failures and adopting positive effects which were identified before.
3. The Integrated Design Laboratory
In order to extend DLR's capabilities for tightly coupled multidisciplinary collaboration processes, the development of a real, tangible laboratory plays an equally crucial role in testing innovative concepts as in assessing Lessons Learnt and developing new modes and methods for interdisciplinary software integration. The DLR project iTALENT was initiated at the Institute of Air Transportation Systems and subsequently funded by the city of Hamburg in 2010/2011 [13]. The project targets the use of laboratory rooms and the provision of essential equipment for participants and research personnel. The original goal of iTALENT was to establish a laboratory and simultaneously determine what tools and techniques engineers need to work most efficiently in it. The total of all tools, know-how, and equipment gathered in the laboratory is supposed to provide a blueprint for further instantiations of similar labs and, by steady, recurrent self-critical analysis, to improve collaborative teamwork within the IDL on site. The agile, Lessons Learnt approach allows enough flexibility to include intermittent research results within the time scope and between related projects. One of these derived laboratories is scheduled to be set up within the Center for Applied Aeronautical Research (ZAL) [14], which is currently under construction and being staffed in Hamburg. When the project was initiated, there were mainly strategic goals, which had to be translated into more technical, measurable objectives. The former include the strengthening of aviation clusters between industry, research, and education (knowledge triangle), as well as boosting the competitiveness of manufacturers and suppliers in the larger Hamburg region.
Viewed from a different angle, the aim is to improve system comprehension of highly complex (air transport) systems and to accelerate the assessment and development of new technological concepts by approaching them in a holistic way. Since mistakes made in early program phases bear the highest costs, this has the potential to reduce overall cost [1,4] and time to production, and to allow more studies to be performed before deciding on one concept. To reduce the large option space for the laboratory construction, several live project examples and artificial or potential use cases were examined to elicit requirements. By pairing use cases and technical options, a morphological analysis was performed to determine whether the solution space was dominated by unambiguous choices or by variability. Obviously not all technical solutions satisfy all use cases alike, as there is still a lot of variability; we found, however, strong tendencies towards certain solutions that allowed maximum flexibility for most considered cases. For example, consider the question of what kinds of video signals might need to be routed from where to where. This includes the question of the transfer medium (analogue or digital, fiber or copper, software or hardware), but also scenarios of duplicating signals or having M:N routing vs. central recoding of video signals (M:1:N). Another field is the provision of network connectivity vs. security concerns: how can we provide the most convenient data connections while maintaining cleanly separated
networks for different work groups and confidentiality levels? Finally, we consider the technical requirements for the availability of computing resources. We have four different preferred solutions for the examined use cases, ranging from locally distributed, remotely distributed and centralized to remote access only. With our technical choice to use the Remote Component Environment (RCE) framework [15], in combination with virtual machines for other services, we could satisfy all requirements. This assessment of technical options led to the first laboratory prototype, which has been available since 2012 and has seen further iterations due to annual reevaluations. The laboratory rooms were initially opened for internal projects [13]. The IDL is divided into a capacious main design room of about 190 m2, a conference room, a server room, and a catering/communication area, the "Lounge". The IDL rooms aggregate to about 440 m2 and are located on the elevated ground floor and are thus easily accessible.
Figure 4. Initial display setup (left), possible desk arrangements (both sides).
During the first project year, the main prototype display was built from a three-fold divided reflective screen with front projectors, ten working tables on wheels with built-in monitors, an arsenal of cables, adaptors and converters, and a video streaming system (Figure 4). It became obvious, however, that the location of two structurally necessary pillars of the building had adverse effects on seating arrangements within the main room, because they shadowed the line of sight to the main screen for participants in the back area. This can be circumvented by aligning working desks in a relatively narrow U-shape, which is incidentally the most favourable arrangement in most cases anyway (Figure 5). From workshop participants we received feedback that having a large tiltable, rotatable monitor available at every working desk, in addition to the participants' own portable computers brought along as primary displays, was seen as very beneficial during meetings. It restored an equipment level familiar from their static office workspaces, and also provided a better technical foundation to discuss details with seat neighbours or within small ad-hoc groups. This two-display setup has the additional benefit of sharing only one display via network streaming while the user's notebook monitor remains private. The U-shape table arrangement proved to be used most often and optimizes the physical communication distances. A good viewing angle is important during presentations; this is less of an issue for discussion-oriented meetings (and so far we have not received complaints about stiff necks). When the group is split into smaller sub-groups, an "island" desk configuration can be used; its disadvantage is the slightly bigger effort in setting up power connections. The advantage is better use of the available space, since all sides of the tables can be used (O-shape), and more people can be placed at the table corners if not much personal table space is required.
Figure 5. Fully connected movable desk (left), typical workshop situation (right).
The Lounge has successfully been used for catering and socializing during breaks. While keeping the lab itself clear of food and drinks, the “change of scenery” also fosters creativity. After the evaluation phase with the prototype setup, it became obvious that the three-projector divided screen setup was simply too massive and unwieldy for the available lab area, and the division of signals between three projectors was neither logical nor justified. The immersive, curved screen setup was thus replaced by a segmented display fitted to the room size, consisting of 18 backlit LED mirror projection systems (Figure 6). This enables a larger overall resolution of 8400×2150 pixels with greatly improved image contrast, which spares operators from shading the room from sunlight. We kept the software-based video streaming system that allows placing content anywhere on any connected display; for daily use, however, a much simpler and user-friendly wireless hardware appliance for sharing user screens is now preferred due to its better performance and easier setup.
Figure 6. The new main screen of the IDL since 2013 (left), official IDL logo (right).
The working desks have been updated to connect to two separate networks: one for user communication, and another dedicated to video streaming. The latest addition to the previous development phases is the purchase of more powerful computing hardware to consolidate frequently used simulation software on site. This tremendously improves data transfer speed between scientific codes as well as between user machines, since all tools reside on servers within the same hardware rack. The existing infrastructure of DLR-wide distributed simulation codes will of course continue to offer the same tools for interdepartmental distributed collaboration as before by means of the RCE framework.
E. Dineva et al. / Human Expertise as the Critical Challenge
4. Conclusion
The steady observation of collaboration processes within the IDL enhances the physical as well as the methodological environment for the engineers who work there. The systematic experimental research and the application of Lessons Learnt processes support the fundamental understanding of how to improve the IDL as a productive working environment. This results in improved understanding, quicker assessments and time reduction. The outcome of iTALENT and its upcoming follow-up projects is a comprehensive manual consisting of a technical system description, best practices and a generic laboratory blueprint for the creation of similar research facilities.
References
[1] R.M. Kolonay, A physics-based distributed collaborative design process for military aerospace vehicle development and technology assessment, Int. J. Agile Systems and Management, 7 (2014), in press.
[2] B. Nagel, D. Böhnke, V. Gollnick, P. Schmollgruber, A. Rizzi, G. La Rocca, J.J. Alonso, Communication in aircraft design: Can we establish a common language? In 28th International Congress of the Aeronautical Sciences, ICAS 2012, Brisbane, Australia, 2012.
[3] E. Moerland, B. Nagel, R.G. Becker, Collaborative understanding of disciplinary correlations using a low-fidelity physics based aerospace toolkit, In 4th CEAS Air & Space Conference, Linköping, Sweden, 2013, Flygtekniska Förening.
[4] B. Nagel, T. Zill, E. Moerland, D. Böhnke, Virtual aircraft multidisciplinary analysis and design processes: lessons learned from the collaborative design project VAMP, In 4th International Conference of the European Aerospace Societies (CEAS), Linköping, Sweden, 2013.
[5] E. Dineva, A. Bachmann, E. Moerland, B. Nagel, V. Gollnick, New methodology to explore the role of visualisation in aircraft design tasks: An empirical study, Int. J. Agile Systems and Management, 7 (2014), in press.
[6] E. Dineva, A. Bachmann, E. Moerland, B. Nagel, V. Gollnick, Empirical performance evaluation in collaborative aircraft design tasks, In C. Bil, J. Mo, J. Stjepandić (eds.), 20th ISPE International Conference on Concurrent Engineering, pages 110–118, Amsterdam, The Netherlands, 2013.
[7] DGPs and BDP, Ethische Richtlinien der DGPs und des BDP, 2005, Deutsche Gesellschaft für Psychologie e.V. and Berufsverband Deutscher Psychologinnen und Psychologen e.V. (Retrieved online on December 1st, 2013).
[8] D. Böhnke, B. Nagel, V. Gollnick, An approach to multi-fidelity in conceptual aircraft design in distributed design environments, In IEEE Aerospace Conference, pages 1–10, IEEE, 2011.
[9] A. Bachmann, M. Kunde, M. Litz, D. Böhnke, S. König, Advances and work in progress in aerospace predesign data exchange, validation and software integration at the German Aerospace Center, In Product Data Exchange Workshop 2010, Oslo, Norway, 2010.
[10] Project Management Institute, A Guide to the Project Management Body of Knowledge (PMBOK Guide), Project Management Institute, Inc., Newtown Square, Pennsylvania, 2008.
[11] G. Probst, S. Raub, K. Romhardt, Wissen managen. Wie Unternehmen ihre wertvollste Ressource optimal nutzen, Dr. Th. Gabler Verlag, Wiesbaden, 6. Auflage, October 2010.
[12] K. North, Wissensorientierte Unternehmensführung. Wertschöpfung durch Wissen, Dr. Th. Gabler Verlag, Wiesbaden, January 2005.
[13] A. Bachmann, J. Lakemeier, E. Moerland, An integrated laboratory for collaborative design in the air transportation system, In Concurrent Engineering Approaches for Sustainable Product Development in a Multi-Disciplinary Environment, 19th ISPE International Conference on Concurrent Engineering, Springer, Trier, Germany, Sep 2012.
[14] Center for Applied Aeronautical Research, http://www.zal.de, 2014.
[15] D. Seider, P. Fischer, M. Litz, A. Schreiber, A. Gerndt, Open source software framework for applications in aeronautics and space, In IEEE Aerospace Conference, Big Sky, MT, USA, 03-10 March 2012.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-233
Word Segmentation Algorithm on Procedure Blueprint Jianbin LIU1, Duo YAO and LiRui YAO Computer School, Beijing Information Science & Technology University, Beijing, China
Abstract. Word segmentation plays a most important role in the semi-automatic conversion process from logic structure to implementation structure. This paper designs a word segmentation method for Procedure Blueprint, built on long-word priority, the reverse matching principle, the head-final (centre-rear) characteristic of Chinese, and the reduction of crossing ambiguity. Based on Procedure Blueprint and its computer implementation, we also improve the dictionary organization structure, the segmentation method and the segmentation efficiency. Keywords. Procedure Blueprint; word segmentation; dictionary organization; reverse maximal matching
Introduction
Procedure Blueprint (PB), introduced by Professor Jianbin Liu, is a visualized modeling language composed of a three-layer external view, two-level mappings and a unified internal structure. The views are the Abstract Concept Structure Diagram (ACSD), the Abstract Logic Structure Diagram (ALSD) and the Abstract Implementation Structure Diagram (AISD). In ACSD and ALSD, PB uses natural and limited natural language to describe the concept algorithm, behavior process and program structure of a programming language [1]. In contrast, AISD uses programming-language operation expressions. The semi-automatic conversion mainly includes three steps: natural language segmentation, semantics analysis and conversion. The detailed process is shown in Figure 1.
Figure 1. Semi-automatic conversion flow diagram.
1 Corresponding Author.
J. Liu et al. / Word Segmentation Algorithm on Procedure Blueprint
The main concepts and algorithms of these segmentation methods are the basis and key of semantics analysis, by which we can split the natural language [2]. PB field words are relatively concentrated; therefore, a reverse matching word segmentation method can be fully realized.
1. Related concepts
Chinese word segmentation means that a computer cuts Chinese sentences into a series of independent and meaningful words in accordance with a certain word segmentation algorithm. PB word segmentation is a practical application of Chinese word segmentation technology. This technology has been deeply researched, and its fruitful results provide important guidance for our studies. At present, the word segmentation methods in use are the mechanical segmentation method and segmentation methods based on rules and statistics [3]. The mechanical segmentation method can be simply implemented and its segmentation efficiency is high, but it lacks a disambiguation process. Rule-based segmentation is difficult to establish, tedious in handling conflicts and short of flexibility. Statistics-based segmentation more easily reaches a higher accuracy given a large training corpus, with segmentation speed depending on the algorithm and its search space. The basic idea of the forward maximum matching algorithm is as follows: assuming the longest dictionary entry contains n Chinese characters, take the first n characters of the pending sentence and match them against the words in the segmentation dictionary [4][5]. If the dictionary contains this string, the match is successful and those characters are cut off as a word; matching then continues from character n+1. If there is no such word, the last character of the candidate is removed, and the operation is repeated until the matching succeeds. The basic idea of reverse maximal matching is similar, but segmentation begins from the end of the sentence; if a match fails, the first character of the candidate is removed.
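The forward pass described above can be sketched in a few lines. This is an illustrative sketch only (the function name, the toy dictionary and the rule of emitting unknown single characters as one-character words are our assumptions, not part of the paper's system):

```python
def forward_max_match(sentence, dictionary, max_len):
    """Forward maximum matching: at each position, try the longest
    candidate first; on failure, drop the last character and retry.
    Unknown single characters are emitted as one-character words."""
    words = []
    i = 0
    while i < len(sentence):
        for length in range(min(max_len, len(sentence) - i), 0, -1):
            candidate = sentence[i:i + length]
            if length == 1 or candidate in dictionary:
                words.append(candidate)
                i += length
                break
    return words
```

On a crossing string such as "和服装", with a dictionary containing both "和服" (kimono) and "服装" (clothing), this forward pass cuts "和服 / 装"; it is exactly this kind of crossing error that motivates the reverse direction adopted below.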
2. Segmentation method in PB
Word segmentation algorithms based on statistics are complex and require a training corpus. Since no such trained corpus exists for PB, this paper collects the source files of PB from recent years. Moreover, as linguistics experts note, most PB users find it difficult to describe their thinking and lack training corpora, knowledge reserves and professional linguistic expertise. The mechanical word segmentation method, in contrast, is broadly used in restricted domains, is easily implemented and achieves high efficiency. This paper therefore adopts an improved reverse maximal matching method for segmentation.
2.1. Dictionary design
The structure of the word dictionary is very important to segmentation efficiency [6]. For the mechanical segmentation method, the dictionary is the most important basis of word segmentation, and the number of segmentable words depends on the dictionary vocabulary. A traditional dictionary is built as plain text, and its data is simply listed without effective organization [7]; we take O(n) (n being the number of entries in the dictionary) as the time complexity of search. If the dictionary is well organized, the numbers of comparisons and matches can be reduced to a great extent. Since this paper adopts the reverse matching method, entries are ordered by their last character, on which the index is created. For example, the index of "variable (变量)", "constant (常量)" and "weight (重量)" is created as "amount (量)". Assume the dictionary has N entries, M is the number of index keys, and Qi is the number of entries under the i-th index key. One maximum-matching pass over a traditional dictionary requires up to N comparisons; after creating the index, the worst case becomes M + Qi (with N = Q1 + Q2 + … + QM). Sorting the entries under each key by vocabulary length further reduces the number of matches between the pending string and the dictionary. The dictionary organization is shown in Figure 2.
Figure 2. Dictionary Organization.
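A minimal sketch of this organization, assuming a hash map from final character to its entries, with each bucket sorted longest-first (names and sample entries are illustrative, not the paper's actual data structures):

```python
from collections import defaultdict

def build_reverse_index(entries):
    """Group dictionary entries by their last character; sort each
    bucket by decreasing length so maximum matching tries long words first."""
    index = defaultdict(list)
    for word in entries:
        index[word[-1]].append(word)
    for bucket in index.values():
        bucket.sort(key=len, reverse=True)
    return dict(index)

# entries ending in 量 ("amount"): 变量 variable, 常量 constant, 重量 weight
index = build_reverse_index(["变量", "常量", "重量", "特别"])
```

A candidate ending in 量 is then compared only against the 量 bucket (Qi entries) rather than against all N entries, which is where the M + Qi worst case discussed above comes from.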
2.2. Word segmentation algorithm
The previous section described the dictionary structure used by the word segmentation algorithm and the relationship between the dictionary organization and mechanical word segmentation. The following describes the algorithm combined with the above dictionary organization. Let the limited natural language statement be "W1W2W3…Wn"; the segmentation steps are as follows.
(1) Obtain the sentence to be analyzed and examine its character string. If the string is empty, matching is over; otherwise go to (2).
(2) Obtain the length of the sentence, 'sentenceLength'. Let 'dictionaryLen' be the number of Chinese characters of the largest entry in the dictionary; take the smaller of the two as the length of the candidate to be segmented, named 'Len'.
(3) Initialize the position marker i = n, get Wn (the last Chinese character) and assign Wn to Wpos.
(4) Judge whether Wpos is a Chinese character; if it is, go to (6), otherwise go to (5).
(5) Store the letters or digits into a temporary phrase, decrement i by one and assign Wi to Wpos.
(6) Cut out the last 'Len' characters of the pending sentence as the candidate 'initialWord', i.e. 'initialWord' = W(i-Len+1)W(i-Len+2)…W(i).
(7) Use the Wpos index to quickly locate the words ending with Wpos. If 'initialWord' is found in the dictionary, it is cut off as a word; set i = i - Len and Wpos = Wi, and go to (4). If 'initialWord' is not found, remove its first Chinese character and match again against the words ending with Wpos. This is repeated until a correct segmentation is achieved or the length of the candidate becomes one.
(8) After a word is cut out, the remaining part of the sentence is treated as a new sentence to be processed.
If the length of the remaining sentence is one, the word segmentation is finished; otherwise go to (1).
2.3. Word segmentation algorithm advantages
• Segmentation efficiency. Statistical word segmentation methods need the support of a training corpus, and their segmentation speed is affected by the time complexity of the algorithm and its space overhead. The mechanical word segmentation method is easily implemented, its segmentation speed is fast, and it is appropriate for segmentation in specific domains. In this paper, we organize the segmentation dictionary by creating an index over the words sharing the same last character, gathering them into virtual mini-dictionaries; the dictionary of N entries thus becomes a collection of mini-dictionaries. Compared with searching the entire dictionary, this narrows the matching range and reduces matching time.
• Automatic ambiguity resolution. According to statistical analysis of typical corpora, about 6% of fields are ambiguous. As long as the word segmentation algorithm eliminates false
ambiguities, segmentation accuracy will improve. The algorithm presented in this paper is a good way to resolve crossing ambiguity. Take "her hairstyle and clothing are very special (她的发型和服装很特别)" as an example of the segmentation process. "Kimono (和服)" and "clothing (服装)" constitute a crossing ambiguity over "和服装". In the reverse matching word segmentation method, there is no dictionary entry matching "和服装"; "和 (and)" is then removed, "clothing (服装)" is successfully matched with an entry, and "服装" is taken as a word. At the same time, the method eliminates overlap-type ambiguity.
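The reverse pass and this kind of ambiguity resolution can be reproduced with the short sketch below; the toy dictionary and the example sentence are our illustrative assumptions, not the authors' Java implementation or their actual dictionary:

```python
def reverse_max_match(sentence, dictionary, max_len):
    """Reverse maximum matching: scan from the end of the sentence;
    on failure, drop the first character of the candidate."""
    words = []
    i = len(sentence)
    while i > 0:
        for length in range(min(max_len, i), 0, -1):
            candidate = sentence[i - length:i]
            if length == 1 or candidate in dictionary:
                words.append(candidate)  # unknown single chars pass through
                i -= length
                break
    words.reverse()  # words were collected back-to-front
    return words

dictionary = {"她", "的", "发型", "和", "和服", "服装", "很", "特别"}
print(reverse_max_match("她的发型和服装很特别", dictionary, 3))
# → ['她', '的', '发型', '和', '服装', '很', '特别']
```

Although "和服" is in the dictionary, the reverse scan reaches "服装" first and never produces the crossing cut, illustrating the disambiguation behaviour described above.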
3. The realization of the word segmentation method
The dictionary organization and the word segmentation approach described above are based on the semi-automatic conversion system, and this section realizes the method within that system. Since the segmentation method is applied in the conversion system, the vocabulary library should contain both PB and programming words. We collected and analyzed the PB programming corpus and added the common PB terms to the segmentation dictionary, which prevents necessary vocabulary from becoming unknown words. Some tokens, such as digits and variable names, are not in the vocabulary library; hence, during segmentation, the class of each character must be judged, and dictionary matching is only applied to Chinese characters. Continuous non-Chinese characters are processed as a single word. For the implementation, a hash map is adopted to store the dictionary and improve matching speed. The algorithm is implemented in the Java programming language; the class structure of the segmentation system is shown in Figure 3.
Figure 3. Class diagram of the word segmentation system.
4. Summary
Exploiting the head-final characteristic of Chinese, this article applies the reverse matching method to resolve crossing ambiguity. Indexing the dictionary by the final Chinese character narrows the search range and improves segmentation speed, but there is as yet no special treatment for remaining cases of ambiguity resolution. Improving the word segmentation method and the dictionary design remains important, and both need further analysis and improvement.
Acknowledgment
This research is sponsored by the following projects: the Funding Project for the Beijing Talents Training Mode Innovation Experimental Zone (Software Engineering), and the Beijing Characteristic Specialty Construction Project for Software Engineering.
References
[1] Liu Jianbin, Procedure Blueprint Designing Methodology, Science Press, Beijing, 2005.
[2] Liu Yaofeng, Wang Zhiliang, Wang Chuanjing, Model of Chinese Words Segmentation and Part-of-Word Tagging, Computer Engineering, 36(4) (2010), 17-19.
[3] Liu Hongzhi, Research on Chinese Word Segmentation Techniques, Computer Development and Application, 23(3) (2010), 173-175.
[4] Wu Tao, Zhang Maodi, Zhang Chuanbo, Research of Chinese Word Segmentation Algorithms Based on Statistics and Reverse Maximum Match, Computer Engineering & Science, 30(8) (2008), 79-82.
[5] Wang Ruilei, Luan Jing, Pan Xiaohua, An Improved Forward Maximum Matching Algorithm for Chinese Word Segmentation, Computer Applications and Software, 28(3) (2011), 195-197.
[6] Zhang Caiqin, Yuan Jian, Improved forward maximum matching word segmentation algorithm, Computer Engineering and Design, (11) (2010), 2595-2597, 2633.
[7] Wu Jing, Cai Di, Wang Zheng, Word processing and application of GIS in natural language queries, Geo-Information Science, 7(3) (2005), 67-71.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-239
Differentiated Contribution of Context and Domain Knowledge to Product Lines Development
German Urrego-Giraldo a and Gloria Lucía Giraldo G. b,1
a Engineering Faculty, University of Antioquia, Medellín, Colombia
b Mines Faculty, National University of Colombia, Medellín, Colombia
Abstract. The systematic development and improvement of products is directly related to the quality and management of the knowledge involved. In general, this knowledge is neither organized into specific categories nor treated and represented systematically and consistently. Classifying and managing knowledge in two categories, context knowledge and domain knowledge, leads to a systematic and expressive representation of the knowledge involved in the development, evolution, and exploitation of solutions (products or systems). Defining the process concept as the application of a context on a domain allows modeling and managing processes and micro-processes in the life cycle of any type of solution. The context and domain concepts are extended to consider the specific contributions of each one in the processes of the life cycle of Product Lines. Keywords. context, domain, process, product, product life cycle, Product Lines
Introduction
Domain knowledge has always been treated as the essential knowledge for the construction and application of software solutions, whereas context knowledge was generally considered, in the past, as a part of domain knowledge. For the domain, Ramadour and Cauvet [1] present a formal domain meta-model introducing the fundamental concepts of object, activity, and goal. There, context is considered as domain knowledge used for the selection of goal components, the organization of activities, and the description of objectives. This traditional view of the context concept is described in [2], [3], [4]. In [5], the separated treatment of context and domain knowledge is introduced. With the development of information and communication technologies, the context concept has become an active research field. As summarized by Strang and Linhoffs [6], the emergence of adaptive systems and mobile networks poses new problems, which give increasing importance to the concept of context-awareness. Many contributions, for example [7][8], indicate the dynamism of research in this area. Even in more mature engineering, and in the development of tangible products with proven technologies and materials, deficiencies persist in obtaining and processing the knowledge embedded in the early phases of the product life cycle. These deficiencies are
1 [email protected].
G. Urrego-Giraldo and G. Lucía Giraldo G. / Differentiated Contribution
also expressed in the difficulties of incorporating such knowledge into products and their processes. The objects involved in the agents' interventions considered in the context belong to a domain. Thus, context models allow differentiating and exploiting the essential qualities of this knowledge, and potentiate the domain knowledge associated with the domain objects treated in agents' interventions in social, economic and natural processes. This differentiated characterization and modeling of context and domain knowledge, and its potentiation in real processes, is a contribution of the present article. The realization and application of domain and context models are extended in this article to Product Lines, and are illustrated by the support of a development plan for a civil engineering services enterprise oriented to innovation. The dynamicity of Product Lines, ubiquitous systems, and adaptive solutions (products) demands differentiating the specialized context knowledge in order to exploit its diversity, variability, and essential qualities, and in order to manage the integration of context and domain knowledge in processes of society, the economy, organizations, and nature. Initially, context-aware products (solutions) in Software Product Lines modeled the concepts of context and domain in the same feature model. Our separated models of context and domain allow a deeper treatment of context-aware Product Lines. In [9][10], it is proposed to elaborate domain feature models and context feature models separately for complex dynamic solutions. The contribution of this differentiated modeling and analysis, as well as the integrated treatment and evolution of context and domain knowledge, is the subject of the present paper.
In our research, the referred concepts related to domain and context knowledge are applied in the evolution and correction of Product Lines, the adaptation of academic curricula, the analysis of product innovation chains, process management, sustainability, and Requirements Engineering. After this introduction, Section 1 describes the process concept. Section 2 introduces the domain and context concepts in the product life cycle. The extension of the product life cycle to Product Lines is described in Section 3. Conclusions and future work are included in Section 4.
1. The Process Concept: The Essence of the Context
Agents' interventions aiming to produce a result, working on physical or conceptual objects belonging to a domain, and progressing collectively from some initial or delayed inputs, define a process. In this sense, a process is an operative context, that is, a context applied on a domain aiming at specific results. This process concept introduces operative capacity into the context and allows the treatment of the knowledge and information associated with interacting agents, physical and conceptual objects, means, methods, all types of resources, and the agents' interactions themselves. The two knowledge categories are integrated in the process concept, which supports the representation of society, social communities, and organizations as processes with identifiable results. Realities are also expressed as processes, and objects are in reality fragments of processes, micro-processes. An organization represents a big process, which transforms inputs into outputs by executing micro-processes, its business processes.
The knowledge needed for the elaboration of a plan for research, technological development, and innovation is gathered in the product definition phase as objectives of the social, organizational, and solution (product or system) contexts. Such a plan was elaborated for a firm of civil engineering services. The mentioned context objectives contain the interested agents and their actions and interactions for elaborating the plan. Figure 1 shows the context model.
Figure 1. SUCM for Elaboration of a Plan for Research, Technological Development and Innovation
Figure 2. SDSO for Elaboration of a Plan for Research, Technological Development and Innovation
2. Context and Domain Concepts in the Product Life Cycle
The knowledge embodied in the product and in the processes of its life cycle is obtained in the product definition phase and is represented in a model of the context and a model of the domain. From these models, following the adopted product life cycle, a conceptual model and then the design and construction of the product are derived. The context model is a diagram of the actions and interactions of internal and external agents. These agent actions and interactions are expressed in a linguistic model based on case grammar, which allows adequate handling of the knowledge associated with the agents, their actions and interactions, the circumstances surrounding the agents, and the means and methods used. The adopted domain model organizes concepts related to the product or service in a hierarchy in which each branch is headed by a category of functionalities associated with the product or service. Four relationships among domain concepts are considered here: generalization-specialization, composition or aggregation, characterization, and domain-specific relationships. Thus, the head categories of the domain model gather the interrelated knowledge of the components, functions and uses of a product or service.
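As a rough illustration of such a domain model, the sketch below stores concepts in a hierarchy headed by functionality categories and tags each link with one of the four relationship kinds named above. All identifiers and sample concepts are our own illustrative choices, not notation from the paper:

```python
from dataclasses import dataclass, field

# the four relationship kinds among domain concepts named in the text
REL_KINDS = {"generalization-specialization", "aggregation",
             "characterization", "domain-specific"}

@dataclass
class Concept:
    name: str
    relations: list = field(default_factory=list)  # (kind, target) pairs

    def relate(self, kind, target):
        if kind not in REL_KINDS:
            raise ValueError(f"unknown relationship kind: {kind}")
        self.relations.append((kind, target))

# a head category gathers components, functions and uses of a service
planning = Concept("Innovation Planning")      # head functionality category
diagnosis = Concept("Innovation Diagnosis")    # hypothetical sub-concept
planning.relate("aggregation", diagnosis)      # the plan aggregates a diagnosis
```

Restricting links to these four kinds keeps the hierarchy queryable by relationship type, which is what lets the head categories gather the interrelated component, function and use knowledge.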
3. Extension of the Product Life Cycle to Product Lines
The analysis of context and domain knowledge may be performed for each phase of the product life cycle, since each phase yields a partial or intermediate product. In fact, software products may be designed to support one or many processes of any phase, and any of these products may be designed and constructed with the knowledge of its domain and its context. This consideration of sub-domains and sub-contexts allows
more efficient management of the development chain of products and services, and greater control of time to market and of the quality of the intermediate and final products. Following the software life cycle, the proposed extension for Product Lines in Figure 3 particularizes this arrangement of phases and the subdivision of processes, keeping the foundations of Domain Engineering and Application Engineering as treated in Software Product Lines. The first covers the Preproduction Phases; Application Engineering is developed in the Production Phases, as detailed in Figure 3.
Figure 3. Processes of the Life Cycle of Product Lines
The three phase categories, Preproduction, Production and Postproduction, contain, in turn, three phases each, following the traditional software life cycle: Product Definition, Product Analysis, Assets and Products Design; Products Production Plan, Elaboration Processes of Assets and Products, Testing and Tuning of Assets and Products; Disposition of Products to Distribution, Products Distribution, and Post-Distribution Services and Impact Evaluation. Each one of these phases is decomposed
in Figure 3 into three processes. The Preproduction, Production and Postproduction Phases each constitute a Macro Process, which contains three phases or big Processes, integrated in turn by three Micro Processes. Macro processes, Processes, and Micro processes have the same structure, follow the same definition, and represent a Macro context, a Context, and a Micro context, respectively. They are sets of agents' actions and interactions aimed at achieving collective and individual objectives. According to the process concept, Macro processes, Processes, and Micro processes treat Macro domains, Domains, and Micro domains, respectively. Agents of a specific level manage information and use resources, means and methods of the corresponding level. The different process levels and their implementation are applied in our present projects in the treatment of evolution, consistency, identification of defects and their causes, and defect correction in different phases and processes of Product Lines.
3.1. Processes and Products
A process is an operative context in which the actions and interactions of context agents make tangible or intangible objects belonging to the domain evolve. The result of a process is a final or intermediate product. In a process chain, the product leaving one process is the input of the next. Intermediate and final products belong to the domain, and unfinished products flow throughout the process. Products embody domain knowledge. Products in any phase of the production chain aim to satisfy demands expressed in a context, where the product constitutes a solution to problems associated with agents' actions and interactions in that context. Problems are identified in the context, whilst solutions incorporate domain knowledge; products (solutions) incorporate knowledge referring to domain objects. In a Product Line, a product production chain, a product life cycle, etc.,
the products belong to a domain and are demanded in a context in which the product must contribute to solving problems identified in agents' actions or interactions. Software Product Lines, other software solutions, and services in any sector in general access domain objects and their associated knowledge in order to deliver an intangible product (solution) to interested agents in a context. Intangible objects and products are supported by "intelligent" agents or are embedded in appropriate means used by these agents, for example computers.
3.2. Contextual Increasing of the Value of Means and Objects
Means are extensions of agents and may themselves be agents in the internal analysis of these means. The disaggregation of activities under the responsibility of an extension of an agent implies the disaggregation of the corresponding means, and produces actions under the responsibility of agents obtained by decomposition of these means. In the same sense, any tangible or intangible object may be considered, at some moment and in some circumstance, as an agent responsible for an action or an interaction. Objects are really fragments of processes. This change of role, from object to agent, aggregates "intelligence" to the objects, expressed in the assumed responsibilities. Objects' "intelligence" is manifested and increased when objects pass from being regarded as carriers of domain knowledge to agents who interact with other agents in a context on objects and their associated knowledge, using appropriate means and methods. In this context, the treated domain knowledge acquires meaning and worth according to the degree of "intelligence" achieved by the interacting agents.
G. Urrego-Giraldo and G. Lucía Giraldo G. / Differentiated Contribution
Otherwise, the means used by agents in their interventions on specific domains have their own domains. Means considered as domain objects may be treated, in turn, by agents' actions or interactions in some context. In this sense, in the service sector, intangible products (solutions) are carried by tangible or intangible means useful for delivering these services to the interested agents, in particular the users. The own domain of these means emerges when agents of any context intervene on them. For example, in software products (solutions), two strongly interrelated domains appear directly to the software's interested agents: the domain of treated knowledge and information objects, and the domain of computers and their supplementary devices. The knowledge of the first domain needs the knowledge of the second for the configuration of a useful and manageable product (solution). The objects of the second domain, the components of computers and their supplementary devices, need to establish a context where acting and interacting agents foster and give meaning and worth to the knowledge and information objects. The integrated product (solution) is placed in a context (the Solution Use Context) formed by interacting agents interested in the development and use of this software product (solution). Services (solutions) imply an important degree of customization, personalization, and interaction. For these reasons users are strongly and directly involved in the development, delivery, and use of services. Users are also immersed in both domains: the domain of objects treated in the services, and the domain of means for developing and using the services. In relation to tangible products (solutions), in the industrial field, product users are not directly involved in the domains of products and means used in product development and use.
In general, users are not involved in the development of tangible products, and do not need special means to use them.

3.3. Products Evolution Exploiting Context and Domain Concepts

Processes make the involved objects evolve. Object evolution may be explained in terms of radical and improvement interventions by agents. In both categories, as displayed in Figure 4, agents' interventions refer either to actions on the object or to interactions of the object with other agents in the process. When the object is treated in agents' interactions, it shows its nature as an object carrying domain knowledge. The object takes the role of agent, interacting with other agents in a context. In this case, the object exhibits knowledge belonging to the context and leads the interactions with other agents as a principal agent, or may interact with other principal agents as an interacting agent. This capacity to assume responsibilities interacting with other agents denotes a certain degree of "intelligence". Objects gain "intelligence" when changing their role to agent, assuming interactions in a context. In Figure 4, the category Actions on the Object, in columns 1 and 3, contains actions which make the domain knowledge associated with the object evolve. Actions of column 1 introduce radical changes to objects in their domain, consisting of Aggregation, Substitution, and Elimination of objects. In column 3 the actions aim to improve objects in their domain by Reparation, Adaptation, Strengthening, Maintenance (Revision, Conservation, and Adjustment), Modification, Correction, Updating, and Recovering of these objects. In columns 2 and 4 the objects change their roles in order to interact as agents in a context. Activities listed in column 2 introduce radical changes consisting of
Integration, Isolation, and Interference of objects in their context of intervention. Improvement activities, considering Activation, Stopping, and Stimulation of interacting agents in their context, are presented in column 4. The proposed evolution categories are considered in our projects for the development of methods for the consistent evolution of Products Lines, the treatment of defects and corrections, the comparison of different types of assets, the correction of defects, and the formulation of metrics in all phases of Products Lines.
Figure 4. Evolution Categories of an object
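The taxonomy of Figure 4 can be encoded as a small lookup structure. The sketch below is illustrative only: the activity names follow the text, while the dictionary layout keyed by intervention kind and change kind is our own choice.

```python
# Evolution categories of an object (Figure 4), keyed by
# (intervention kind, change kind). Activity names follow the text;
# the dictionary structure itself is only an illustrative encoding.
EVOLUTION_CATEGORIES = {
    ("action_on_object", "radical"): [          # column 1
        "Aggregation", "Substitution", "Elimination",
    ],
    ("action_on_object", "improvement"): [      # column 3
        "Reparation", "Adaptation", "Strengthening", "Maintenance",
        "Modification", "Correction", "Updating", "Recovering",
    ],
    ("interaction_as_agent", "radical"): [      # column 2
        "Integration", "Isolation", "Interference",
    ],
    ("interaction_as_agent", "improvement"): [  # column 4
        "Activation", "Stopping", "Stimulation",
    ],
}

def classify(activity):
    """Return the (intervention, change) pair an activity belongs to."""
    for key, activities in EVOLUTION_CATEGORIES.items():
        if activity in activities:
            return key
    raise KeyError(activity)
```

Such an encoding makes it straightforward to check, in a tool, whether a recorded intervention on a Products Line asset is radical or an improvement.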
3.4. Augmenting the Knowledge of Processes and Products

The richness of context and domain knowledge at different abstraction levels, and their integration in processes of multiple detail levels, support the necessary increase in the diversity, specialization, and volume of information and knowledge. The introduction of new knowledge in present and future products (solutions) is regarded in terms of domain knowledge and context knowledge incorporated in existing or new processes at different abstraction levels. In the product innovation chain, for example, the consideration of multiple process management approaches, such as Concurrent Engineering, Lean Production, Co-creation, etc., was referred to in [11]. The introduction of new management approaches, new knowledge or technologies, new concepts, etc., in the innovation chain was referred to fundamental aspects of the process concept, such as the addition of particular types of activities and the essential qualities of agents, objects, and means participating in the process. For incorporating, for example, sustainability in the development of Software Product Lines and product innovation, a set of essential qualities, represented in Figure 5, was used in [12] for the introduction of contents and competences related to sustainable development in engineering curricula.
Figure 5. Essential Qualities of Sustainable Development Processes
3.5. Dynamicity of Markets and Technologies

Customized and personalized offer and demand, increasingly specialized competition, and the emerging trend toward services augment the relative weight that other variables of the marketing mix have gained with respect to the product. This situation demands the treatment of a growing number of concepts belonging to the context, as opposed to the traditional domain concepts. In computer science, in particular, the paradigm shift toward mobile and ubiquitous computing introduces into systems the concept of context-awareness. The foreseeable growth of the concepts to be considered in domain models for the development of Products Lines is meaningfully increased by the need to treat a large amount of contextual concepts. The requirement of applying fragmentation techniques to make the characteristics models manageable is clear. In addition to the techniques proposed, among others, in [13-18], there arises the need for techniques aimed at keeping domain characteristics models separate from context characteristics models, like the one exposed in [9]. In our previous work on producing car datasheets, two characteristics models were elaborated, one for cars and another for car datasheets. In other projects, one on e-shopping and another on cellular phones, the numbers of contextual characteristics and of relationships among them rapidly became similar. In these last projects, many branches of contextual concepts appear, illustrating the increase of concepts related to contracts and to the suitability for geographical, cultural, and legal aspects. For the sake of space, these figures are not included here.
4. Conclusion and Future Work

The definition of the process concept as the application of a context on a domain increases the analysis dimensions of processes and shows the richness and dynamicity of context knowledge, focusing on agents and their actions and interactions the leveraging of physical and mental resources, instruments, methods, intentions, and decisions. Domain knowledge delimitation facilitates its systematic management, representation, and analysis, as well as the use of models, languages, and analytical resources. The integration of context and domain in the process concept contributes to a formal treatment of the knowledge associated with complex and big realities at all phases of the evolution chain of agents and objects. The possibility of deep analysis of processes and products, in terms of context and domain, in all dimensions of society, culture, nature, and living beings, supports the improvement and formalization of methods for treating the knowledge in these fields. The extension of the context and domain concepts to lines of products (solutions) gives content, formalization, and support to processes of the Product Lines life cycle. The elaborations and models included in this work are already implemented and used in the comparison of software assets. Ongoing work aims to formalize the context and domain concepts and to implement context awareness in dynamic Software Products Lines.
References
[1] P. Ramadour and C. Couvet, Modélisation de domaine pour l'ingénierie des SI par réutilisation, 12èmes journées francophones d'ingénierie des connaissances, (2001).
[2] P. Zave, M. Jackson, Four dark corners of requirements engineering, ACM Transactions on Software Engineering and Methodology 6(1) (1997), 1-30.
[3] G. Urrego-Giraldo, G.L. Giraldo, Estructuras de servicios y de objetos del dominio: una aproximación al concepto de ontología, TecnoLógicas 15 (2006), 45-67.
[4] V. Plihon, Un environnement pour l'ingénierie des méthodes, Thèse de doctorat, Université Paris 1, (1996).
[5] G. Urrego-Giraldo, ABC-Besoins: Une approche d'ingénierie de besoins fonctionnels et non-fonctionnels centrée sur les Agents, les Buts, et les Contextes, Ph.D. Thesis, Université Paris 1 Panthéon-Sorbonne, 2005.
[6] T. Strang and C.L. Popien, A Context Modeling Survey, UbiComp 1st International Workshop on Advanced Context Modelling, Reasoning and Management, (2004), 31-41.
[7] A.K. Dey and G.D. Abowd, Towards a Better Understanding of Context and Context-Awareness, Workshop on the What, Who, Where, When, and How of Context-Awareness, (2000).
[8] W.N. Schilit, A System Architecture for Context-Aware Mobile Computing, PhD thesis, Columbia University, (1995).
[9] N. Ubayashi, S. Nakajima and M. Hirayama, Context-dependent product line practice for constructing reliable embedded systems, 14th International Conference on Software Product Lines: Going Beyond, (2010), 1-15.
[10] A.K. Dey, Understanding and using context, Personal and Ubiquitous Computing, Special issue on Situated Interaction and Ubiquitous Computing 5(1) (2001), 4-7.
[11] G. Urrego-Giraldo, G.L. Giraldo G., Process Modeling for Supporting Risk Analysis in Product Innovation Chain, in: C. Bil (ed.), Proceedings of the 20th ISPE International Conference on Concurrent Engineering (CE2013), Sep. 2-5, 2013, Melbourne, Australia, IOS Press, Amsterdam, 2013, 469-480.
[12] G. Urrego-Giraldo, G.L. Giraldo G., Contextualized achievement of engineers' competences for sustainable development, Global Engineering Education Conference, (2014), 713-720.
[13] D. Dhungana, T. Neumayer, P. Grünbacher, R. Rabiser, Supporting the Evolution of Product Line Architectures with Variability Model Fragments, WICSA '08: Proceedings of the Seventh Working IEEE/IFIP Conference on Software Architecture, (2008), 327-330.
[14] A. Pleuss, G. Botterweck, D. Dhungana, A. Polzer and S. Kowalewski, Model-driven support for product line evolution on feature level, Journal of Systems and Software 85 (2012), 2261-2274.
[15] S. Alguezaui, R. Filieri, A knowledge-based view of the extending enterprise for enhancing a collaborative innovation advantage, Int. J. Agile Systems and Management 7(2) (2014), 116-131.
[16] A. McLay, Re-reengineering the dream: agility as competitive adaptability, Int. J. Agile Systems and Management 7(2) (2014), 101-115.
[17] F. Elgh, Automated Engineer-to-Order Systems: A Task Oriented Approach to Enable Traceability of Design Rationale, Int. J. Agile Systems and Management 7(3) (2014), in press.
[18] D. Chang, C.-H. Chen, Understanding the Influence of Customers on Product Innovation, Int. J. Agile Systems and Management 7(3) (2014), in press.
Part IV
Cloud Manufacturing and Service Clouds
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-251
A Hierarchical Method for Coupling Analysis of Design Services

Nan LI a,1, Wensheng XU b, Jianzhong CHA b
a School of Material and Mechanical Engineering, Beijing Technology and Business University, Beijing, China
b School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing, China
Abstract. In modern times, design is a highly complex activity with a large number of coupling relationships existing among essential design factors and design resources. Traditional research has mainly focused on internal coupling analysis for design objects and design processes, but rarely involves the coupling of essential design factors or of design services. In this paper, a coupling analysis method for design service modeling and execution is proposed. In order to build a multi-resolution coupling model at the local and global scales, the Essential Design Factors Binding Matrix and the Essential Design Factors Matrix group contribute to expressing the internal relationships between design services. A hybrid approach, based on partitioning operations, clustering operations and genetic algorithms, is used to solve the coupling sets for global essential design factors. Finally, the theory, methods and technologies mentioned above are demonstrated on a typical complex engineering application.

Keywords. Cloud Manufacturing, Coupling Analysis, Essential Design Factors Matrix, Design Service, Coupling Propagation
Introduction

Nowadays, engineering products become increasingly complicated, and competition pressure reduces the design cycle time. Tighter collaboration among business partners and streamlined integration of engineering processes are becoming essential elements for succeeding in the competitive global environment [1]. Design is a highly complex activity with a large number of coupling relationships existing among essential design factors and design resources. Consequently, designers and engineers are increasingly faced with the challenge of integrating design services efficiently. Therefore, extensive research and development work on coupling analysis of design activities has been carried out [2-3], and a lot of enabling technologies have been used in design decomposition, such as the DSM (Design Structure Matrix) [4] and the DDM (Design Dependency Matrix) [2-3]. Traditional research has mainly focused on internal coupling analysis for design objects and design processes, but rarely involves uncertain coupling knowledge, essential design factor coupling, and design service coupling. In this paper, a coupling analysis method for design service modeling and execution is proposed. In
1 Corresponding Author: [email protected]
N. Li et al. / A Hierarchical Method for Coupling Analysis of Design Services
order to build a multi-resolution coupling model at the local and global scales, the Essential Design Factors Binding Matrix and the Essential Design Factors Matrix group contribute to expressing the internal relationships between design services. A hybrid approach, based on partitioning operations, clustering operations and genetic algorithms, is used to solve the coupling sets for global essential design factors. Finally, the theory, methods and technologies mentioned above are demonstrated on a typical complex engineering application.
1. The Conceptual Model of Essential Design Factors

Essential Design Factors (EDF), in this paper, abstract a large, complex design activity into a number of independent, more universal concepts. The conceptual model of EDF is used to describe a general design activity. The 5W1H method was employed as an analysis tool for design activity modeling. 5W1H consists of questions whose answers are considered basic in information-gathering; they are often used in journalism and research. As shown in Figure 1, six questions (Why design? Design what? How to design? When? Where? Who?) are used to extract four basic essential design factors: design objective, design method, design object, and design process.

Figure 1. Essential Design Factors by 5W1H method
1.1. Relationships between Essential Design Factors

Figure 2. Relationships between EDF
The execution of design activities depends on the interaction of design method, design object, and design process. The design resources, which include intelligent resources, knowledge resources, and design tools, serve as infrastructure and are invoked during design activity execution. In this process, the design objective is the key to organizing the design procedure: it decides what kind of design approach will be used and what the design object will finally look like. The relationships between these factors are illustrated in Figure 2.

1.2. EDF-DSM and EDF-BM

The Design Structure Matrix (DSM) is a simple, compact and visual representation of a system or project in the form of a matrix, which has become a popular representation and analysis tool for system modeling, especially for purposes of decomposition and integration [5]. In this paper, the DSM is used to represent the relationships inside each EDF, and is named EDF-DSM.

Figure 3. EDF-DSM and EDF-BM (design method, design process, and design object EDF-DSM; Design Process-Design Method and Design Process-Design Object EDF-BM)
An EDF-DSM is a square matrix with identical row and column labels. In the example EDF-DSM for design method, shown in Figure 3, elements are represented by the black cells along the diagonal. An off-diagonal mark signifies the dependency of one element on another. In the design method EDF-DSM, the labels represent design approaches and technologies used during the execution of specified design activities. In the design process EDF-DSM, the labels represent sub-steps of the design process. In the design object EDF-DSM, the labels represent subsystems of the design object. Let n, k and l be the matrix sizes of the three EDF-DSM mentioned above. In this context, the three EDF-DSM representing a design activity are defined as M_method = [m_ij], i, j = 1, 2, ..., n; M_process = [p_ij], i, j = 1, 2, ..., k; and M_object = [o_ij], i, j = 1, 2, ..., l, where m_ij > 0 if design method i depends on design method j and m_ij = 0 if design method i is independent of design method j (p_ij and o_ij follow the same rules). An EDF binding matrix (EDF-BM) is a kind of incidence matrix which represents the relationship between two different EDF-DSM (as shown in Figure 3). The two EDF-BM are defined as B_pm = [x_ij], i = 1, 2, ..., k; j = 1, 2, ..., n and B_po = [y_ij], i = 1, 2, ..., k; j = 1, 2, ..., l, where x_ij > 0 if design method j has a relationship with design process stage i and x_ij = 0 if design method j is independent of design process stage i (y_ij follows the same rules).
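These definitions translate directly into boolean-valued matrices. A minimal NumPy sketch with invented toy matrices (not the example of Figure 3):

```python
import numpy as np

# Toy EDF-DSM for the design method factor: m[i, j] > 0 means design
# method i depends on design method j (the diagonal marks the elements).
M_method = np.array([
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
])

# Toy EDF-BM binding design process stages (rows) to design methods
# (columns): x[i, j] > 0 means method j is used in process stage i.
B_pm = np.array([
    [1, 0, 0],
    [0, 1, 1],
])

def depends_on(dsm, i, j):
    """True if element i depends on element j in the given EDF-DSM."""
    return i != j and dsm[i, j] > 0

def methods_of_stage(bm, stage):
    """Indexes of design methods bound to a given process stage."""
    return np.flatnonzero(bm[stage]).tolist()
```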
1.3. Design Services

Figure 4 illustrates the technology architecture of design services. In this paper, a design service is defined as a three-tier structure: service implementation, service description and service interface. The design resources and design abilities are integrated in the service implementation, which includes software and hardware, engineering legacy such as design data or design cases, and the intelligence of domain experts. In the description layer, the EDF information which refers to the functions of the design service is defined in detail. The service publishes its own abilities: what kind of method it uses to solve the problem, which part or subsystem it can cover, and in which stage of the design process it can work. The function of the service interface is to wrap a design service as a network object, so that the service can be invoked as a standard component.
Figure 4. Technology architecture of design services
2. Hierarchical Method for Coupling Analysis

Proposed in this paper is a hierarchical solution strategy for the EDF coupling analysis. Since the EDF description is an important part of design services, the result of the EDF coupling analysis can be employed as a driving force to solve the coupling analysis problem in design service integration. As mentioned above, we use three basic EDF to represent design activities, and the EDF-DSM can be used as a coupling analysis tool. The target of the solving process is to convert each original EDF-DSM into a final EDF-DSM with independent blocks (shown in Figure 5). However, because of the relationships between different EDF, it is not reasonable to solve each EDF individually; we therefore need to introduce the influence caused by the EDF-BM. Figure 5 illustrates the two-step solving mechanism, which mixes two different coupling analysis approaches. In the first step, the original EDF-BM is converted to a final EDF-BM with independent blocks. After that, the result from step 1 is used as a constraint to build the optimization model that solves the EDF-DSM decomposition problems. In the last step, the coupling analysis solution from each EDF-DSM is used to drive the design service integration process, so as to reduce service communication costs and the risk of design flaws.
Figure 5. Two-step coupling analysis mechanism for EDF-DSM
2.1. EDF-BM Decomposition

In this paper, the formal two-phase method presented by Chen et al. [2] is employed as the decomposition approach for solving the EDF-BM. The key concepts (Phase 1: dependency analysis, with binary tree construction and binary tree branch association; Phase 2: matrix partitioning) and the workflow of the two-phase decomposition method are shown in Figure 6.

Figure 6. Workflow of the two-phase decomposition method
After the two phases, the coupling analysis result is generated, and we obtain the following result sets to represent the two-dimensional decomposition solution for one EDF-BM (for example, the EDF-BM of design method and design process): CM = {cm_i}, i = 1, 2, ..., n_m and CP = {cp_j}, j = 1, 2, ..., n_p, where CM is the coupling set for the design method dimension with set size n_m, CP is the coupling set for the design process dimension with set size n_p, cm_i contains the design method label indexes of cluster i, and cp_j contains the design process label indexes of cluster j.
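The result sets can be held as plain lists of index sets; the concrete cluster contents below are invented for illustration only.

```python
# Decomposition result of a design method / design process EDF-BM:
# each entry is the set of label indexes forming one coupled cluster.
# The concrete clusters below are invented for illustration only.
CM = [{0, 1}, {2, 3, 4}]   # coupling set, design method dimension
CP = [{0}, {1, 2}]         # coupling set, design process dimension

n_m = len(CM)  # number of clusters in the method dimension
n_p = len(CP)  # number of clusters in the process dimension

def disjoint(clusters):
    """Sanity check: clusters within one dimension must not overlap."""
    seen = set()
    for c in clusters:
        if seen & c:
            return False
        seen |= c
    return True
```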
2.2. Coupling Analysis for EDF-DSM by GA

The generation of the EDF-BM decomposition result implies that the coupling relationship between two EDF can be fixed; the coupling analysis for the EDF-DSM can then be started. In this paper, Genetic Algorithms (GA) [6] are used to find an optimal EDF-DSM coupling analysis result for a predefined objective. The GA starts by creating an initial population of chromosomes. Each chromosome contains a collection of genes which together represent a complete solution to the EDF-DSM coupling analysis problem. A two-dimension binary encoding [4] is used to represent the clustering solution for a given EDF-DSM (shown in Figure 7).

Figure 7. Two-dimension binary encoding chromosomes
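The two-dimension binary encoding amounts to a clusters-by-elements 0/1 matrix with one row per cluster, including the bus cluster. A sketch with invented values (not the chromosome of Figure 7):

```python
import numpy as np

# Two-dimension binary encoding of one chromosome for a 7-element
# EDF-DSM: rows are clusters, columns are DSM elements; entry 1 means
# the element belongs to that cluster. Values are illustrative only.
chromosome = np.array([
    [1, 0, 0, 1, 0, 1, 0],   # cluster 1: elements 0, 3, 5
    [0, 1, 1, 0, 1, 0, 0],   # cluster 2: elements 1, 2, 4
    [0, 0, 0, 0, 0, 0, 1],   # bus cluster: element 6
])

def decode(chrom):
    """Turn the binary encoding back into sets of element indexes."""
    return [set(np.flatnonzero(row)) for row in chrom]

# Every element must be assigned to exactly one cluster.
assert (chromosome.sum(axis=0) == 1).all()
```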
The fitness function measures how good each chromosome's solution is. In order to build the fitness function, an objective function for one chromosome can be defined as F = C + αS, where C is the contact information flow defined in [4], S is the clustering similarity function between the specified chromosome and the coupling set of the EDF-BM (for example, the CM mentioned above), and α is the weight coefficient for S. Taking the design method EDF-DSM as an example, the clustering similarity function is defined as

S = Σ_{i=1}^{n_m} max_{j=1,...,n_m} ( size(cm_i ∩ cm'_j) / size(cm_i ∪ cm'_j) )

where size denotes the size of a set, and cm'_j, obtained from the specified chromosome, contains the design method label indexes of cluster j. In this context, the fitness function of the chromosome with index h in a specified population is defined as

F_fitness = (F_max − F(h)) / (F_max − F_min)

where F_max is the maximum F value in the specified population and F_min is the minimum F value in the specified population. The next step of the GA is selection, which is performed to choose chromosomes that will have their information passed on to the next generation. The third step is crossover and mutation: chromosome crossover or mutation produces new offspring chromosomes, and they occur according to a user-defined probability. After that, the result will likely be a better solution and may overcome local optima.
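Read as intersection over union against the best-matching chromosome cluster (an assumption where the extracted formula is ambiguous), the similarity and fitness functions can be sketched as:

```python
def similarity(CM, chromosome_clusters):
    """Clustering similarity S: for each EDF-BM cluster cm_i, take the
    best intersection-over-union match among the chromosome's clusters.
    The exact ratio is reconstructed from the text and is an assumption."""
    return sum(
        max(len(cm & cmc) / len(cm | cmc) for cmc in chromosome_clusters)
        for cm in CM
    )

def objective(C, S, alpha):
    """Objective F = C + alpha * S (C: contact information flow [4])."""
    return C + alpha * S

def fitness(F_h, F_max, F_min):
    """Normalized fitness of chromosome h within its population."""
    return (F_max - F_h) / (F_max - F_min)
```

For example, with CM = [{0, 1}, {2, 3}] and chromosome clusters [{0, 1}, {2}, {3}], the first cluster matches perfectly (1.0) and the second at best half (0.5), so S = 1.5.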
3. Case Study

The engineering layout design activity of a vehicle engine compartment [7] is employed as a test problem to validate and illustrate the hierarchical coupling analysis method. The EDF details of this application are listed in Table 1.
Table 1. EDF in engineering layout design for vehicle engine compartment

Design Methods: M1. Case-based reasoning; M2. Parallel computing; M3. GA; M4. Simulated annealing; M5. Expert intelligence; M6. Empirical-formula-based component selection; M7. Simplified modeling; M8. Expert system; M9. Fuzzy evaluation; M10. Speed overlap detection; M11. Knowledge base reasoning; M12. 3D feature-based modeling

Design Objects: O1. Engine; O2. Transmission box; O3. Oil reservoir; O4. Fuel tank; O5. Exhaust jalousie; O6. Intake jalousie; O7. Oil filter for transfer case; O8. Propeller shaft; O9. Radiator; O10. Cooling fan; O11. Transfer case; O12. Oil filter for actuating chamber; O13. Expansion tank; O14. Inlet valve; O15. Air cleaner

Design Process: P1. Layout task generation; P2. Requirements gathering; P3. Dynamical systems requirements analysis; P4. Auxiliary systems requirements integrity verification; P5. Layout available space modeling; P6. Layout component selection; P7. Non-standard part design; P8. Selection result evaluation; P9. Layout constraints modeling; P10. Layout by domain knowledge; P11. Layout optimization; P12. Layout evaluation
Figure 8. Coupling analysis for EDF design objects
Figure 9. Coupling analysis for EDF design methods
The engineering layout design problem is: given a set of 3D components of arbitrary geometry and the engine compartment as an available space, find a placement for the components within the space that achieves the design objectives, such that none of the objects overlap, while satisfying optional spatial and performance constraints on the components. As shown in Table 1, this design and optimization activity consists of 12 design methods, 15 design objects and 12 design process steps. The purpose of this application is to decouple each EDF-DSM using the coupling analysis method. Then, with the EDF description in the service interface, the decoupling solution is used to drive the design service decoupling process, so that reasonable service integration solutions can be
generated. Figures 8 and 9 illustrate the coupling analysis workflow, the results, and the convergence curves for the EDF design methods and EDF design objects. In particular, we intend to demonstrate the overall effectiveness of the two-step hierarchical method mentioned above. Figure 10 shows the driving mechanism by which the design services can be grouped according to the coupling analysis results. Each service exposes its own EDF features through its service interface, and services with strong coupling relationships are recommended for integration, in order to reduce design costs for communication and iteration.
Figure 10. Design services integration by EDF-DSM coupling analysis result
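The grouping of services whose exposed EDF labels fall into the same coupling cluster can be sketched as follows; all service names and label sets here are invented for illustration:

```python
# Each design service exposes the EDF labels it covers via its service
# interface; services whose labels fall into the same coupling cluster
# are recommended for integration. All names here are invented.
services = {
    "svc_cbr":   {"M1", "M5"},
    "svc_ga":    {"M3", "M4"},
    "svc_model": {"M7", "M12"},
}
coupling_sets = [{"M1", "M5", "M7", "M12"}, {"M3", "M4"}]

def group_services(services, coupling_sets):
    """Assign each service to the coupling cluster it overlaps most."""
    groups = [[] for _ in coupling_sets]
    for name, labels in services.items():
        best = max(range(len(coupling_sets)),
                   key=lambda k: len(labels & coupling_sets[k]))
        groups[best].append(name)
    return groups
```

Services landing in the same group share a coupling cluster and are candidates for tight integration; services in different groups can be composed more loosely.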
4. Conclusions

This paper presents a hierarchical method for design service coupling analysis during an integration process. EDF-DSM and EDF-BM are used to represent relationships both inside and outside the EDF. The formal two-phase decomposition method works on EDF-BM coupling analysis, and the GA is effective for EDF-DSM cluster analysis. The clustering similarity function connects the two levels of the analysis process. The solution of the coupling sets can be used to drive the design service integration process, so that highly coupled services can work together in three different dimensions: design methods, design process and design objects.
Acknowledgement This work is supported by the National Natural Science Foundation of China (51175033), National High Technology Research and Development Program of China (2013AA041302), and General Program Of Science And Technology Development Project of Beijing Municipal Education Commission (KM201210011007).
References
[1] K.J. Kao, C.E. Seeley, S. Yin and R.M. Kolonay, Business-to-Business Virtual Collaboration of Aircraft Engine Combustor Design, Journal of Computing and Information Science in Engineering, 2004, 4:365-371.
[2] L. Chen, Z. Ding, S. Li, A formal two-phase method for decomposition of complex design problems, ASME Journal of Mechanical Design, 2005, 127(2):184-195.
[3] L. Chen, A. Macwan, S. Li, Model-based rapid redesign using decomposition patterns, ASME Journal of Mechanical Design, 2007, 129(3):283-294.
[4] J.G. Liu, Research on the key technique of integrated management of product structure and development process in concurrent engineering, Nanjing: Nanjing University of Aeronautics and Astronautics, 2006.
[5] T.R. Browning, Applying the Design Structure Matrix to System Decomposition and Integration Problems: A Review and New Directions, IEEE Transactions on Engineering Management, 2001, 48(3):292-306.
[6] C. Meier, A.A. Yassine, T.R. Browning, Design process sequencing with competent genetic algorithms, ASME Journal of Mechanical Design, 2007, 129(6):566-585.
[7] N. Li, J.Z. Cha and Y.P. Lu, A parallel simulated annealing algorithm based on functional feature tree modeling for 3D engineering layout design, Applied Soft Computing, 2010, 10(2):592-601.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-261
Intelligent Utilization of Digital Manufacturing Data in Modern Product Emergence Processes

Regina WALLIS a,1, Josip STJEPANDIĆ b, Stefan RULHOFF b, Frank STROMBERGER a and Jochen DEUSE c
a Daimler AG, Digital Planning Methods Daimler Trucks, 68299 Mannheim, Germany
b PROSTEP AG, 3D Product Creation, 64293 Darmstadt, Germany
c TU Dortmund University, Institute of Production Systems, 44227 Dortmund, Germany
Abstract. The application of digital manufacturing tools has been continuously increasing in order to deal with product and process complexity in shortened product lifecycles. The resulting comprehensive digital documentation of the product emergence process provides an opportunity to support concurrent engineering processes. By identifying correlations and recurrent patterns with the aid of data mining techniques, tacit planning knowledge can be revealed and reintegrated into new process planning workflows in order to enhance planning efficiency and facilitate decision making. Based on the classification and clustering of both product and process data and the determination of their respective linkages, this paper presents a novel approach for the knowledge-based support of product emergence processes. Keywords. Digital manufacturing, digital process planning, data mining, concurrent engineering, planning support.
Introduction and Motivation

Manufacturing companies face the challenge of developing and producing a continuously rising number of product variants in shortened product lifecycles to satisfy customer demands. Especially in assembly planning, the resulting process complexity becomes apparent [1] and is difficult to keep under control, as all product and process variants have to be managed at the same time. The application of digital manufacturing tools has increased in order to approach this complexity. As a result, a digital documentation of product design and assembly planning results is available [2]. These data may include patterns, trends, associations and dependencies [3]. Thus, Knowledge Discovery in Databases (KDD) methods can be used to identify these patterns representing tacit planning knowledge. By increasing the transparency of planning processes, planning efficiency can be enhanced and decision making facilitated. In this context, the research and development project "Prospective Determination of Assembly Work Content in Digital Manufacturing (Pro Mondi)" has been initiated to develop a concept using KDD methods to extract useful assembly information from
1 Corresponding Author.
R. Wallis et al. / Intelligent Utilization of Digital Manufacturing Data
digital manufacturing databases. The aim of this project is the accurate estimation of the expected assembly work content and the resulting costs of a newly designed product at an early stage of the product emergence process. The approach to achieve this support function comprises a detailed data analysis from the areas of product design and assembly planning. The identification of correlations in the combined datasets allows the product-specific evaluation of expected assembly process complexity, time and cost. The evaluation results can be used, on the one hand, to support the assembly planner with a first estimation of the required assembly processes as an initial starting point for subsequent planning activities. On the other hand, product development can be supported with information regarding the assembly process complexity of the current product design alternative, enabling optimization of the product in terms of assembly quality.
1. Data Mining to Support Product Emergence Processes

The approach to support product emergence processes with processed assembly knowledge requires a detailed data analysis from the areas of product design and assembly planning. After a description of the existing data structures in this context, the overall concept of KDD will be highlighted. Yet, the application in the manufacturing field comes with a number of challenges, which are depicted at the end of this chapter.

1.1. Product and Process Data in Assembly Planning

The product-related input data for assembly planning contains information such as the shape, size and material of the work piece. It is typically represented by the bill of materials and the 3D shape representations of individual parts. The engineering product structure – represented as a hierarchical tree with the final product as the root node – forms the basis for the systematic storage and management of this data [4], commonly in product data management (PDM) systems. These have been developed in the context of Computer Aided Design to support product design and construction processes. Product Lifecycle Management (PLM) solutions, which focus on system application along the entire product lifecycle including process planning, production and after-sales management, are regarded as enhancements of PDM systems [4]. First support functions for assembly planning have been developed in the context of Computer Aided Process Planning (CAPP) systems. These are able to generate work plans based on the product description. The required planning knowledge is provided in an organized and formalized way [5], e.g. in the form of "if-then" rules. However, with an increasing number of rules, these systems often lack transparency and reach their limits with regard to efficiency and maintainability.
The idea of integrated product and process planning, and consequently of the central storage of product, process and resource information, has been the focus of digital manufacturing systems [6]. Product, process and resource data are stored in separate tree data structures whose elements are interlinked by references. The databases forming the backbones of these IT tools provide a comprehensive documentation of product design and assembly planning results. They form the basis for the subsequent KDD analysis.
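This separation into interlinked trees can be sketched as follows; the node names, field names and sample elements are illustrative and not taken from any particular digital manufacturing system:

```python
class Node:
    """Generic tree node for a product, process, or resource structure."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.references = []  # cross-links into elements of the other trees

    def link(self, other):
        """Interlink elements of different trees by reference."""
        self.references.append(other)

# Three separate tree structures, as in a digital manufacturing system
product = Node("Engine", [Node("Cylinder head"), Node("Crankcase")])
process = Node("Engine assembly", [Node("Mount cylinder head")])
resource = Node("Assembly line 1", [Node("Torque wrench")])

# A process step references the part it assembles and the resource it needs
process.children[0].link(product.children[0])
process.children[0].link(resource.children[0])
```

Traversing the process tree and following the references then yields, for each operation, the affected parts and required resources, which is exactly the linkage the later KDD analysis exploits.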
1.2. Knowledge Discovery in Databases

KDD describes the nontrivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data [7]. One of the leading models describing the procedure of knowledge discovery with high practical relevance is the Cross-Industry Standard Process for Data Mining (CRISP-DM), which consists of the following six steps [8]: Business Understanding, Data Understanding, Data Preparation, Modeling, Evaluation and Deployment (Figure 1).
Figure 1. CRISP-DM process steps [8].
During the phase of Business Understanding, both the objectives of knowledge discovery and the nature of the data mining task are defined. In this context, data mining describes the concept of applying data analysis and discovery algorithms to produce a particular enumeration of patterns or models over data [9]. In general, predictive and descriptive data mining tasks can be distinguished. While predictive data mining is applied to find relationships between a dependent (target) variable and the independent variables in the dataset in order to make predictions on new and unlabeled data, descriptive data mining serves to produce understandable and useful patterns describing a complex dataset, yet without any prior knowledge of what patterns exist [10]. Availability, quality and accessibility of relevant data are assessed within Data Understanding. In the subsequent phase of Data Preparation, the often time-consuming transformation of data into a format suitable for data mining is performed. The transformation comprises, e.g., the breaking down of hierarchical data structures into flat data tables, the selection of specific attributes or the removal of noisy data. The application of specific data mining techniques to extract patterns from data is performed in the Modeling step. Within the Evaluation phase, results are assessed and interpreted by experts of the future application area. Assuming positive and valid results, the developed model can be deployed in the considered field and consequently be used, e.g., for planning and decision-making support.
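The distinction between predictive and descriptive tasks can be illustrated with a deliberately minimal sketch; the feature tuples (part count, weight), the labels, and the nearest-neighbour stand-in are invented for illustration only:

```python
# Predictive: learn a mapping from labeled data, then label new, unseen data.
training = [((2, 10), "small"), ((3, 12), "small"), ((8, 40), "large")]

def predict(features):
    """1-nearest-neighbour prediction: copy the label of the closest example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda ex: dist(ex[0], features))[1]

# Descriptive: summarize structure in unlabeled data without a target variable,
# here simply by reporting attribute value ranges.
unlabeled = [(2, 11), (9, 38), (3, 13)]
summary = {
    "part_count_range": (min(p for p, _ in unlabeled), max(p for p, _ in unlabeled)),
    "weight_range": (min(w for _, w in unlabeled), max(w for _, w in unlabeled)),
}

print(predict((2, 12)))  # a predicted label for a new feature vector
print(summary)           # a descriptive summary of the unlabeled data
```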
1.3. KDD Challenges in the Context of Product Design and Assembly Planning

In the approach to provide a support function for future product emergence processes, several challenges have to be taken into account. The above-mentioned heterogeneous data sources contain data characterized by differences in level of measurement and hierarchical structuring. With regard to the data mining activities, the transformation of object-oriented and hierarchical data structures into flat data tables is necessary. A unique identifier is required to map product structures to their corresponding assembly processes; hereby, a consistent 1:1 mapping of parts to assembly work steps cannot be taken for granted. A model for Knowledge Discovery in Industrial Databases (KDID) has been developed which incorporates in particular the issues related to industrial databases and the utilization of domain knowledge provided by experts [11].
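The flattening of a hierarchical product structure into a flat table, and the fact that one part may map to several work steps, might look as follows; the structure, field names and sample ids are hypothetical:

```python
def flatten(node, parent=None, rows=None, level=0):
    """Break a hierarchical product structure down into a flat table
    with one row per element (depth-first order)."""
    if rows is None:
        rows = []
    rows.append({"id": node["id"], "parent": parent, "level": level})
    for child in node.get("children", []):
        flatten(child, node["id"], rows, level + 1)
    return rows

product = {"id": "engine", "children": [
    {"id": "cylinder_head", "children": [{"id": "valve"}]},
    {"id": "crankcase"},
]}

table = flatten(product)

# One part may require several work steps (no guaranteed 1:1 mapping),
# so the link table carries one part id per work step, not vice versa.
work_steps = [
    {"step": "position cylinder head", "part": "cylinder_head"},
    {"step": "torque head bolts", "part": "cylinder_head"},
]
```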
2. Approach for the Intelligent Utilization of Digital Manufacturing Data

The application of KDD, including the usage of linked data from different application contexts, enables additional benefits for the departments involved in the product emergence process. In the following approach, assembly planning and product design gain additional, automatically generated information as an input to their particular tasks in the form of a support function.

2.1. Provision of a Support Function for Production Planning and Product Design Departments

The general concept for the intelligent utilization of digital manufacturing data is illustrated in Figure 2. It comprises the automatic identification of similar product design alternatives in existing product data and the retrieval of associated assembly work plans in order to evaluate the complexity and costs of the assembly process. Two use cases have been identified to support both assembly planning and product design departments. The first use case represents a support function for the assembly process planner. As soon as a new subassembly is created by the product design department, the assembly planner can trigger the automated generation of the product-specific assembly process plan. Since the new subassembly already contains characteristic attributes in geometry and model structure, it can be initially classified into a defined subassembly type. These predefined types are provided by a data mining model based on previously designed and analyzed product data. After this step the new subassembly is assigned to a product cluster within the specified subassembly type. The identified product cluster is linked to the associated process cluster. A product-process mapping, which has been trained on the linkages between existing subassemblies and their assembly work plans, retrieves the corresponding process cluster for the given new subassembly.
For each process cluster, a specific assembly work plan template can be generated containing typical operation sequences for this type of process cluster. This automatically generated assembly process represents a first estimation of the assembly plan for the new subassembly, still during the product design process. The assembly planner thus obtains a starting point for subsequent planning activities with comparably little planning effort. With the help of further manual adjustment and completion, the planner determines the exact assembly process, time and costs.
The product design process can be supported by means of the second use case. The required assembly processes for the newly constructed subassembly can be determined as described above. By comparing the resulting assembly process complexity and time of the current product alternative to process complexity and time of previously designed product alternatives, the product designer obtains feedback about the assembly quality, which, in turn, can be used to optimize the current design.
Figure 2. Concept for the intelligent utilization of manufacturing data (Use Case 1: first estimation of required assembly processes as a starting point for assembly planning; Use Case 2: estimation of required assembly processes, times and costs, with comparison to similar product designs).
2.2. Learning of a Data Mining Model to Estimate Assembly Processes

The learning of a data mining model to estimate suitable assembly processes for a given product structure is set up in seven main processes. Thereby, four different classification models are created, which are required for the model application in the above-mentioned use cases (see Figure 3). In the first step, the hierarchical product structures are transformed into flat data tables and aggregated into subassembly-specific feature vectors. These are used to train the model for the classification of the subassembly type in step 2. Regarding e.g. engine assembly, cylinder head, crankcase and electric generator are just a few examples of subassembly types. Once differentiated, a clustering algorithm is applied to all variants of each subassembly type in order to find similar subassemblies within one type and to reduce variant complexity (step 3). The resulting cluster assignments serve as target variables for learning the classification model into specific product clusters. In the fourth step, the process data extracted from the digital manufacturing system are brought into a form suitable for data mining. Hereby, the textual descriptions of the work plans are
transformed into word vectors with the help of text mining algorithms. Additionally, building blocks from predetermined motion time systems (PMTS) are used to characterize the operations in detail. These building blocks are machine-readable and serve as additional attributes for the subsequent clustering algorithm (step 5). The required process information, such as characteristic operation sequences and the assembly time, is stored with the process clusters in step 6. Due to the existing single linkages between elements of the product structure and their required assembly processes, which are available in the digital manufacturing system, a mapping model can be computed to assign product clusters containing subassemblies with similar characteristics to their respective process clusters (step 7).
Figure 3. Data mining approach.
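The seven steps can be summarized in a schematic pipeline. The trivial lookup-based "models" below stand in for the trained classifiers and clusterers, and all data, ids and keys are invented for illustration:

```python
from collections import defaultdict

def build_models(subassemblies, processes, links):
    """Sketch of the seven training steps with placeholder logic."""
    # Step 1: aggregate each subassembly into a flat, subassembly-specific feature vector
    features = {s["id"]: (s["parts"], s["weight"]) for s in subassemblies}
    # Step 2: model for classifying the subassembly type (here: a lookup)
    type_model = {s["id"]: s["type"] for s in subassemblies}
    # Step 3: cluster product variants within each type (here: keyed by part count)
    product_cluster = {sid: (type_model[sid], features[sid][0]) for sid in features}
    # Steps 4+5: characterize and cluster process variants (here: by first operation)
    process_cluster = {p["id"]: p["operations"][0] for p in processes}
    # Step 6: store work plan templates with each process cluster
    templates = defaultdict(list)
    for p in processes:
        templates[process_cluster[p["id"]]].append(p["operations"])
    # Step 7: map product clusters to process clusters via the existing single linkages
    mapping = {product_cluster[part]: process_cluster[proc] for part, proc in links}
    return type_model, product_cluster, templates, mapping

subassemblies = [
    {"id": "sa1", "type": "cylinder_head", "parts": 5, "weight": 12.0},
    {"id": "sa2", "type": "cylinder_head", "parts": 5, "weight": 12.5},
]
processes = [{"id": "p1", "operations": ["position head", "torque bolts"]}]
links = [("sa1", "p1")]

type_model, product_cluster, templates, mapping = build_models(subassemblies, processes, links)
```

Applying the models to a new subassembly then means: classify its type, assign it to a product cluster, look up the mapped process cluster, and emit that cluster's work plan template.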
2.3. Enhanced Data Model for Product and Assembly Process Data

An enhanced data model is defined in order to support the KDD process as well as the exchange and persistent storage of linked product and process data. An overview of the data model, including attributes relevant for both the analysis and the results, is presented in Figure 4. Modern digital manufacturing systems allow the direct assignment of processes to product data. Some of them also provide additional attributes describing the assembly connection, concerning e.g. the modeling of joining or quality control processes. However, these attributes lack the ability to store detailed information regarding the specific assembly connection. Thus, the class ProductAssemblyInformation has been set as the central element of this data scheme and represents an assembly connection realized in the product assembly. References to the corresponding time analysis, assembly requirements, designed parts or products, as well as to a wide range of meta data including the assembly department, are stored in the data model. The detailed concept for modelling product assembly information can be found in [12]. The class ProductAssemblyInformation is supplemented with attributes of different connection types, such as screw or welded connections. The class CustomAssemblyInformation allows the instantiation of all possible connection type objects not specifically defined in the data model. Furthermore, the class AssemblyConditionList contains information regarding the assembly situation of the particular assembly connection at the product. This information usually represents tacit knowledge of the production planner and is implicitly included in the resulting assembly process.
Item represents the second fundamental class in the data model. It contains references to existing subassembly units, geometrical characteristics and further meta data. Each Item refers to ProductAssemblyInformation, which can in turn refer to further Items. This construct is chosen to enable data mining methods to determine exact similarities between new parts and/or products and other existing parts. Furthermore, it allows the comparison between new and existing parts and products in any order and combination. To represent assembly process data in the data model at hand, parts of another existing data model are used. Application-specific data models have been developed in the research and development project "ADiFa" (Process Harmonization based on Application Protocols). The so-called "ADiFa Application Protocols" offer the integration of processes and time-related data for different digital factory systems [13], [14]. Two basic classes in this data model are OperationDefinition and TimeModuleOccurrence, which allow the representation of generic process definitions as well as hierarchical assembly process structures with instantiated operations in a work schedule. The requirement to support the data mining process and to store the data mining models results in a fourth part of the data model. This part, comprising the classes ClusterModel and ClassificationModel, allows the persistent storage of training sets, the resulting product and process clusters, as well as the deduced product-process mapping.
Figure 4. Data model overview (classes: Item with Geometry, Feature and Meta data; ProductAssemblyInformation with ScrewAssemblyInformation, WeldedAssemblyInformation, CustomAssemblyInformation and AssemblyConditionList; OperationDefinition and TimeModuleOccurrence; ClusterModel and ClassificationModel with ProductCluster, ProcessCluster, TrainingSetProduct, TrainingSetProcess and MappingProductClusterToProcessCluster).
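The core of the data model might be rendered as follows. Only the class names taken from the text are used; their concrete attributes and the `dict`/`list` representations are simplifying assumptions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:
    """Part or subassembly: geometry, features and further meta data."""
    name: str
    geometry: dict = field(default_factory=dict)
    meta: dict = field(default_factory=dict)
    assembly_info: List["ProductAssemblyInformation"] = field(default_factory=list)

@dataclass
class ProductAssemblyInformation:
    """Central element: one assembly connection realized in the product."""
    connection_type: str  # e.g. "screw", "welded", or a custom type
    items: List[Item] = field(default_factory=list)
    assembly_conditions: List[str] = field(default_factory=list)

head = Item("cylinder_head")
block = Item("crankcase")
joint = ProductAssemblyInformation("screw", items=[head, block])
head.assembly_info.append(joint)  # an Item refers to ProductAssemblyInformation ...
# ... which in turn refers to further Items, enabling similarity comparisons
# between new and existing parts in any order and combination.
```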
3. Case Study on Daimler Data

The described concept for the intelligent utilization of digital manufacturing data to support future product emergence processes has been applied to three multi-variant subassembly types from the automotive industry, following the CRISP-DM process steps. RapidMiner 5.3 [15] has been used for the implementation.

3.1. Data Preparation

The required input data for the described concept are extracted from the PDM and digital manufacturing systems. The hierarchical data structures of both product and process data are first transformed into flat data tables. Regarding product data, relevant characteristics such as the outer dimensions, the weight and the center of gravity for the assembly situation are extracted from the 3D shape representation and stored in a subassembly-specific feature vector. This feature
vector is enriched with additional information describing the content of the bill of materials, gathered from the various hierarchical levels of the engineering product structure. As to process data, the textual descriptions of the assembly operations are enriched with the detailed itemization of process steps from a predetermined motion time system. In this way, a flat data structure containing detailed information about the assembly operations can be obtained.

3.2. Modeling

In order to distinguish between the various subassembly types automatically, a classification model is trained. Therefore, the value differences of the feature attributes for different subassembly types are analyzed and visualized exemplarily in Figure 5. The plotted feature attributes comprise, e.g., Amount_Designed_Parts, Amount_Standard_Parts, Heaviest_Part, Parts_Per_Subassembly, the volumes and weights of the five largest parts, center of gravity and outer dimensions (X, Y, Z), and counts of part categories such as Screw, Sealing, Retainer, Screw_Nut, Sealring, Crank Shaft and Cable.

Figure 5. Visualization of feature vectors (subassembly types 1–3).
The lines show the mean attribute values for each subassembly type, and the deviations are added in order to demonstrate the attribute value ranges. Based on this input data, a naïve Bayes classification model is trained to extract and formalize the differences in the attribute values and to use this knowledge to classify new and unseen subassembly types. Hereafter, a k-means clustering algorithm is applied to the classified data in order to reduce variant complexity. The resulting partition is checked for plausibility with domain experts; valid subject-specific groups, e.g. concerning the distinct country or operation editions of the subassembly, have been obtained. The data describing the assembly process operations is clustered with a k-means algorithm in parallel to the clustering of product data. Each of the resulting assembly work plan clusters contains similar assembly processes. The information characterizing the process clusters, e.g. the assembly process steps and the required assembly time, is stored with the process clusters in order to be able to generate the work plan templates during model application. In the last step, a mapping function between product and
process clusters is trained. A naïve Bayes classifier has been chosen to assign the most likely process clusters to the given product clusters. The resulting product-process mapping indicates the matching probability of a process cluster given a certain product cluster and shows a comprehensive connection between the two.

3.3. Evaluation

In order to validate the described concept, the total dataset has been evenly divided into a training and a testing dataset. After the training has been performed as described in Section 2.2, the consecutive classification models have been applied to the product data of the testing dataset. Due to the modular composition of RapidMiner and the possibility to trigger the execution of the defined data mining processes, the generation of assembly operation sequences could be performed in an automated way. The results generated for the subassemblies of the testing dataset have been compared with the assembly operation sequences actually developed by assembly planners. On average, 87.8% of the assembly operations have been determined correctly. This shows that data mining techniques allow the segmentation of product data into valid subject-specific groups and the subsequent mapping of adequate assembly operations. Yet the value of the generated assembly operation sequences is directly tied to the quality of the training data: if the classification models are applied to complex product subassemblies and the training data does not comprise comparable datasets, the model will not be able to "predict" the required (complex) assembly operations. In this case the model generates the basic (routine) assembly operations, and missing operations need to be added manually. However, the adjusted planning data will be added to the analysis datasets for the recalculation of product and process clusters, so that the mapping model can be refined with every iteration loop.
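The operation-level accuracy reported above can be computed as sketched below; the operation lists are invented, and the real comparison runs on the RapidMiner output per subassembly, averaged over the testing dataset:

```python
def operation_accuracy(generated, actual):
    """Share of the actually planned operations that the model determined correctly."""
    hits = sum(1 for op in actual if op in generated)
    return hits / len(actual)

# Toy comparison for one subassembly: the model found 3 of 4 planned operations.
actual = ["position part", "insert screws", "torque screws", "check seating"]
generated = ["position part", "insert screws", "torque screws"]

print(operation_accuracy(generated, actual))  # 0.75 for this toy example
```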
4. Conclusion and Further Developments

The developed approach for the intelligent utilization of digital manufacturing data provides a new support function for modern product emergence processes. As the presented approach is based on planning data compiled during preceding product emergence processes, products can be evaluated more easily concerning their assembly process complexity, which leads to a faster and easier attainment of planning and construction levels. At present, the realization of the first use case is advanced: the feasibility of segmenting product data into valid subject-specific groups and of mapping adequate product-specific assembly operations has been shown. These findings form the basis for the realization of the second use case. The created models as well as the obtained results represent the processing of implicit planning knowledge. Their persistent storage in the presented data model helps to enhance transparency and promote reusability. The enhancement of current digital manufacturing systems with the presented data model allows the efficient provision of feedback. By means of consistent feedback concerning the assembly process complexity, product designers can profit from experience gained in series production. This in turn can significantly accelerate the overall design and assembly planning process and thereby reduce the number of planning iterations. In order to make the information available to the product designer, a further integration of the support
function into Computer-Aided Design (CAD) systems, as the regular working environment of the product designer, might be contemplated.
Acknowledgements This paper represents the background, objectives and first results of the research project “Prospective Determination of assembly work content in Digital Factory (Pro Mondi)”. This research and development project is funded by the German Federal Ministry of Education and Research (BMBF) within the Framework Concept “Research for Tomorrow’s Production” (funding number 02PJ1110) and managed by the Project Management Agency Karlsruhe (PTKA). The authors are responsible for the contents of this publication.
References

[1] H. Bley, C. Zenner, Variant-oriented Assembly Planning, CIRP Annals - Manufacturing Technology 55 (2006), 23-28.
[2] O. Erohin, P. Kuhlang, J. Schallow, J. Deuse, Intelligent Utilisation of Digital Databases for Assembly Time Determination in Early Phases of Product Emergence, Procedia CIRP - 45th CIRP Conference on Manufacturing Systems 3 (2012), 424-429.
[3] H. Al-Mubaid, E.S. Abouel Nasr, A.K. Kamrani, Using data mining in the manufacturing systems for CAD model analysis and classification, Int J Agile Systems and Management 2 (2008) 1/2, 147-162.
[4] M. Eigner, R. Stelzer, Product Lifecycle Management - Ein Leitfaden für Product Development und Lifecycle Management, Springer, Berlin, 2009.
[5] P. Mognol, B. Anselmetti, Evaluation Criteria: A Method to Represent and Compute Technological Knowledge in CAPP, IEEE Symposium on Emerging Technologies and Factory Automation 2 (1995), 419-426.
[6] Digital Factory - Fundamentals, VDI, Düsseldorf, 2008.
[7] U. Fayyad, G. Piatetsky-Shapiro, P. Smyth, From Data Mining to Knowledge Discovery in Databases, AI Magazine 17 (1996), 37-54.
[8] P. Chapman, J. Clinton, R. Kerber, T. Khabaza, T. Reinartz, C. Shearer, R. Wirth, CRISP-DM 1.0 - Step-by-step data mining guide, SPSS, 2000.
[9] U. Fayyad, G. Piatetsky-Shapiro, P. Smyth, From Data Mining to Knowledge Discovery: An Overview, Advances in Knowledge Discovery and Data Mining, AAAI Press, Menlo Park, 1997.
[10] K. Wang, Applying data mining to manufacturing: the nature and implications, J Intell Manuf 18 (2007), 457-495.
[11] D. Lieber, O. Erohin, J. Deuse, Wissensentdeckung im industriellen Kontext, Zeitschrift für wirtschaftlichen Fabrikbetrieb 108 (2013), 388-393.
[12] M. Eigner, J. Ernst, D. Roubanov, J. Deuse, J. Schallow, O. Erohin, Product Assembly Information to Improve Virtual Product Development, Proceedings of the 23rd CIRP Design Conference - Smart Product Engineering (2013), 303-313.
[13] D. Petzelt, J. Schallow, J. Deuse, S. Rulhoff, Anwendungsspezifische Datenmodelle in der Digitalen Fabrik, ProduktDaten Journal 16 (2009), 45-48.
[14] J. Schallow, K. Magenheimer, J. Deuse, G. Reinhart, Application Protocols for Standardising of Processes and Data in Digital Manufacturing, Enabling Manufacturing Competitiveness and Economic Sustainability - Proceedings of the 4th CIRP Conference on Changeable, Agile, Reconfigurable and Virtual Production (CARV2011), Springer, Berlin/Heidelberg/New York, 2011.
[15] I. Mierswa, M. Scholz, R. Klinkenberg, M. Wurst, T. Euler, YALE: Rapid Prototyping for Complex Data Mining Tasks, Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2006), 935-940.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-271
A Computing Resource Selection Approach Based on Genetic Algorithm for Inter-Cloud Workload Migration

Tahereh NODEHI a, Sudeep GHIMIRE b and Ricardo JARDIM-GONCALVES c
a Departamento de Engenharia Electrotecnica, Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa (UNL), Campus de Caparica, Portugal
b CTS, UNINOVA, Departamento de Engenharia Electrotecnica, FCT, UNL, Portugal
c CTS, UNINOVA, Departamento de Engenharia Electrotecnica, FCT, UNL, Portugal
Abstract. Cloud computing has been one of the most important topics in IT, aiming to assure scalable and reliable on-demand services over the Internet. The expansion of the application scope of cloud services requires cooperation between clouds from different providers that have heterogeneous functionalities. However, current cloud systems do not fully support inter-cloud interoperability, and more research work is required to provide sufficient functions to enable seamless collaboration between cloud services. This paper proposes an efficient model for selecting appropriate computing resources from multiple cloud providers, which is required to achieve inter-cloud interoperability in a heterogeneous Infrastructure as a Service (IaaS) cloud environment. The goal of the model is to dispatch the workload to the most effective clouds available at runtime, offering the best performance at the least cost. We consider that each job can have six requirements: CPU, memory, network bandwidth, serving time, maximum possible waiting time, and a priority based on the agreed Service Level Agreement (SLA) contract and service price. Additionally, we assume that an SLA contract with suitable criteria between the cloud subscriber and multiple IaaS cloud providers has been signed beforehand. The computing resource selection model is based on a Genetic Algorithm (GA) and is evaluated using agent-based model simulation.

Keywords. Cloud Computing, Inter-Cloud Interoperability, Workload Migration, Infrastructure as a Service (IaaS), Model Driven Architecture (MDA), Service Oriented Architecture (SOA)
Introduction

Today, a global cloud system includes heterogeneous clouds with finite physical resources. Expanding the application scope of cloud services requires cooperation between clouds from different providers that have heterogeneous functionalities [1][2]. Cooperation between heterogeneous cloud service vendors can provide better QoS (e.g. scalability, reliability, service availability and performance), avoidance of vendor lock-in, and reduced service production costs. It can also support inter-cloud resource sharing and give cloud users the ability to use combined services from different service providers. The required seamless interworking mechanism between clouds is called "inter-cloud interoperability". Most
of the current cloud environments do not support inter-cloud interoperability, and more research is required to provide sufficient functions for globally seamless collaboration between cloud services [3]. Building on our research work on an Inter-cloud Interoperability Framework (IIF), this paper presents a job scheduling model for dispatching the workload from an IaaS Cloud Subscriber to other available IaaS Cloud Providers. The purpose of the job scheduling model is to reduce the queuing time and improve the QoS at the lowest cost. The model uses a Genetic Algorithm (GA) [4] for job scheduling and considers suitable Quality of Service (QoS) and Service Level Agreement (SLA) criteria for cloud systems. Model Driven Architecture (MDA) and Service Oriented Architecture (SOA) are identified as two appropriate approaches for implementing the model in the framework. This paper has four sections: section 1 discusses the state of the art in inter-cloud interoperability, section 2 presents the inter-cloud computing resource selection approach, section 3 gives a short introduction to the evaluation method, and the final section concludes the paper.
1. Inter-cloud Interoperability

The inter-cloud concept is based on the fact that each single cloud has limited computing resources in a restricted geographic area [5][6][7]. Inter-cloud requires interoperability between various cloud computing instantiations, allowing cloud customers to migrate in and out of the cloud and switch between providers based on their needs, without a lock-in that restricts customers from selecting an alternative provider. The inter-cloud network scenario is still at an early stage. Celesti et al. in 2010 [8] proposed a three-phase (discovery, match-making, and authentication) cross-cloud federation model. It has been claimed in [9] that point-to-point protocols are not appropriate as inter-cloud protocols, and accordingly many-to-many mechanisms have been proposed, such as the Extensible Messaging and Presence Protocol (XMPP) for transport, and Semantic Web techniques such as the Resource Description Framework (RDF) as a method to specify resources. To show the distinctive ways of interaction between cloud users and providers, NIST [10] defined the following use cases for cloud computing interoperability:
• Copy data objects between cloud providers.
• Dynamic operation dispatch to IaaS clouds.
• Cloud burst from data center to cloud.
• Migrate a queuing-based application.
• Migrate Virtual Machines (VMs) from one cloud provider to another.
Nagireddi and Mishra [11] proposed an ontology based framework for searching services provided by different cloud service providers. Abouzamazem and Ezhilchelvan [12] studied tolerating outages by inter-cloud replication and proposed an approach to replicate a service on N outage-independent clouds. Pop and colleagues [13] presented a genetic scheduling algorithm for independent tasks in inter-cloud environments where the selection phase is based on reputation evaluation.
Finally, Demchenko and colleagues [14] presented their on-going research on developing the IIF to support on-demand provisioning by heterogeneous cloud service providers.
Nevertheless, in the analyzed state of the art there is not yet a comprehensive proposal that supports the inter-cloud interoperability concerns. We are working on an Inter-cloud Interoperability Framework (IIF) that can support inter-cloud interoperability for dynamic operation dispatch to IaaS Cloud Providers (CP). A fundamental module of the IIF framework is the Computing Resources (CR) selection module, which selects from the available IaaS CPs. This paper discusses the CR selection module, which uses a GA to select the most appropriate CRs from multiple CPs.
2. Inter-cloud Computing Resource Selection Approach

As discussed in the previous section, we are working on an IIF framework that will support inter-cloud interoperability for IaaS clouds. The IIF framework focuses on invoking operations dynamically on the most adequate CPs available, based on the application requirements evaluated at runtime. The conceptual model of the IIF framework for IaaS CPs [15] is shown in Figure 1. The IIF framework is for operations that are independent of unique resources of the IaaS Cloud Subscriber (CS). The IaaS CS attempts to run a job on the CP that is able to provide the best performance at the least cost. The CS opens an account with each discovered IaaS CP based on the CP's SLA, so the CS has the list of charges and QoS promises of each CP. The CS then prepares a test workload with specified CPU power, memory and network performance requirements, and runs it a few times on each CP to rank the CPs by availability, performance and price. Moreover, the CS evaluates the CPs for price and QoS metrics such as availability, and forwards the workloads accordingly. All data and model transformation and mapping tasks between the CS and the CPs happen through the IIF. In addition, the IIF framework requires a module to select the most effective CRs from the IaaS CPs. This section discusses the CR selection approach.
[Figure 1 sketches the Inter-cloud Interoperability Framework (IIF) between an IaaS Cloud Subscriber and IaaS Cloud Providers 1 to n. The IIF contains a Model Manager, a QoS and SLAs Repository, a Process Executor, an IaaS Offering Profiles Repository, Computing Resource Discovery, Computing Resource Selection, a Transformation Engine, a Job Scheduler, a Job-selection Module and a Semantic Module; the providers expose VM provisioning, data-center and edgelet management, object storage, a CloudProxy (I2ND), service management, monitoring, task scheduling and task result collection, connected over the transport infrastructure through an Intercloud Interface.]
Figure 1. Inter-cloud Interoperability Framework (IIF).
2.1. Job Model

Considering Figure 1, the input of the IIF framework from the CS is a finite set J = {j_i | i = 1, …, n} of jobs j_i. Job production is dynamic and each job j_i is based on the specified requirements of applications. Each j_i has a set of requirements R_i = {t_i, c_i, b_i, m_i, d_i, p_i}, where t_i is the serving time, c_i is the computing power requirement, b_i is the bandwidth requirement, m_i is the memory requirement, d_i is the maximum possible waiting time, and p_i is the priority based on the agreed SLA contract and service price. The evaluation section specifies the possible choices of c_i, b_i and m_i for the case considered in this paper. Job arrivals are Markovian (a Poisson process), the serving time t_i follows a general distribution, and there are n CPs that may satisfy the requirements of a job j_i. The Job-selection Module selects from the waiting queue in the CS those jobs whose deadline d_i is longer than the network delay to get service from other CPs. In short, the paper models the inter-cloud environment as an M/G/n queue. Additionally, the paper assumes the IIF framework supports the appropriate functions for IaaS inter-cloud interoperability.

2.2. Genetic Algorithm based Resource Selection (GARS)

In this paper, it is assumed the IIF framework receives the workload from the CS and provides the required object model, operation model, and data model of each job j_i. It is also assumed the IIF identifies the QoS parameters (t_i, c_i, b_i, m_i, d_i, p_i) for the requirements of each job j_i, and that the SLA criteria between the CS and the other IaaS CPs, as well as the user profiles, are identified in the IIF. The Genetic Algorithm based Resource Selection (GARS) approach has the following steps:
• The first step in the GARS approach is identifying the available CPs which meet the current workflow requirements. To provide this functionality, GARS exploits the information offered by the IIF framework.
• The second step, the main focus of this paper, is dispatching the workload on the available CPs effectively. The job allocation method is based on an iterative Genetic Algorithm [4]. Figure 2 shows the GA based model for distributing the jobs received from the IIF framework on the Cloud Providers. Defining an applicable fitness function is essential and has a strong effect on the convergence rate of the GA and on reaching the optimal solution. This paper considers two main factors in defining the fitness function:
1. Each IaaS CP's performance: the framework allocates a performance history variable h_k to each IaaS Cloud Provider CP_k. The IIF framework sends a test workload to each CP_k periodically and updates h_k according to CP_k's resource availability, response time, and CPU throughput.
2. The cost: the IIF framework has the SLA repository based on the agreement between the CS and the CPs, which includes the price lists for the different computing resource offerings. The cost c_{kj} is the cost of the computing resource offering from Cloud Provider CP_k for the requirements of job j_j.
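As an illustration of how the two factors might be combined, the sketch below scores one distribution pattern. The ratio of performance history to cost is our own assumption; the paper only names the two factors without giving the formula.

```python
def fitness(pattern, perf_history, cost):
    """Score one distribution pattern: pattern[j] is the provider chosen for job j.

    perf_history[k] is the performance history variable h_k of provider CP_k;
    cost[k][j] is the price of serving job j on CP_k. Higher performance and
    lower cost both raise the score; the ratio form is an illustrative choice.
    """
    return sum(perf_history[k] / (1.0 + cost[k][j])
               for j, k in enumerate(pattern))
```

A pattern that routes jobs to a provider with a better history and a lower price thus evaluates higher than one that does not.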
[Figure 2 depicts the GA loop: the first x jobs in the queue are randomly distributed over the CPs and the fitness function is evaluated for each distribution pattern; crossover (co times) and mutation (mo times) produce new patterns, each distributed and evaluated in turn; the n patterns with the best fitness evaluation results are kept until the iteration limit is reached; the jobs are then dispatched according to the best pattern, and the loop repeats while the queue is non-empty and the average queuing time exceeds the acceptable queuing time.]
Figure 2. The GA based model for distributing jobs on Cloud Providers.
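The loop in Figure 2 can be sketched roughly as follows. The parameter names co and mo mirror the crossover and mutation counts in the figure; the population size, iteration limit and operator details are illustrative assumptions of ours, not the paper's.

```python
import random

def gars(n_jobs, n_providers, fitness, iters=50, pop=20, co=10, mo=5, seed=1):
    """Sketch of the Figure 2 loop: start from random distribution patterns,
    apply crossover co times and mutation mo times per generation, evaluate
    the fitness of every new pattern, and keep the pop best patterns."""
    rng = random.Random(seed)

    def random_pattern():
        # randomly distribute the jobs over the providers
        return [rng.randrange(n_providers) for _ in range(n_jobs)]

    population = [random_pattern() for _ in range(pop)]
    for _ in range(iters):
        children = []
        for _ in range(co):                     # crossover: splice two parents
            a, b = rng.sample(population, 2)
            cut = rng.randrange(1, n_jobs)
            children.append(a[:cut] + b[cut:])
        for _ in range(mo):                     # mutation: reassign one job
            child = list(rng.choice(population))
            child[rng.randrange(n_jobs)] = rng.randrange(n_providers)
            children.append(child)
        # keep the pop patterns with the best fitness evaluation results
        population = sorted(population + children, key=fitness, reverse=True)[:pop]
    return population[0]                        # best distribution pattern
```

With a toy fitness that rewards assigning jobs to one particular provider, the loop quickly converges to patterns dominated by that provider.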
3. Evaluation

The proposed computing resource selection approach is evaluated through agent-based simulation. The simulation process includes three types of agents:
• One agent for the Cloud Subscriber: the CS agent has a number of computing resources, including SingleCore, DualCore, QuadCore, and OctoCore processors with a variety of attached RAM and different network bandwidth speeds. Different combinations of the available resources can be provided, as specified in the SLA of the Cloud Subscriber.
• A predefined number of agents for Cloud Providers: each CP agent is specified by different service combinations and their prices. In addition, there is a performance history variable h_k for each IaaS Cloud Provider CP_k agent.
• Agents for jobs, produced at the rate of a Poisson distribution.
The workload characterization is based on selected computational tasks of the construction industry. The specification for cloud hosting is based on the infrastructure of the FITMAN1 project. The overall simulation is modeled within the scope of the scenarios being implemented by UNINOVA and CONSULGAL2 for the FITMAN project. According to the simulation results, the following achievements are possible using the IIF framework with the GARS approach:
• reduction in waiting (queuing) time;
• better quality of service;
• cost reduction;
• resource sharing.
In this paper we show only the simulation results for the reduction in waiting time. The job-selection module selects from the waiting queue in the CS the jobs that do not depend on a specific resource of the CS; additionally, the deadline d_i of each selected job j_i is longer than the network delay to get service from other CPs. Figure 3(a) shows that the number of jobs (blue line) waiting for CRs in a single-cloud environment increases over time. With a similar configuration, Figure 3(b) shows that the number of jobs waiting for CRs in a multiple cloud provider environment (one CS and four CPs) stays close to zero.
1. Future Internet Technologies for MANufacturing industries: http://www.fitman-fi.eu/
2. http://www.consulgal.pt/en/
Figure 3. (a) The simulation results for single cloud provider environment that shows the number of jobs waiting to get resources is increasing. (b) The simulation results for multi-cloud provider environment.
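The qualitative effect in Figure 3 can be reproduced with a toy queue simulation: a single overloaded provider accumulates waiting jobs, while the same Poisson workload spread over several providers keeps waits near zero. The arrival and service rates below are illustrative and unrelated to the FITMAN-based workload used in the paper.

```python
import random

def mean_wait(n_providers, arrival_rate=1.0, mean_service=1.2,
              n_jobs=2000, seed=0):
    """Mean waiting time for Poisson arrivals served by n_providers,
    each job going to the provider that frees up first."""
    rng = random.Random(seed)
    t = 0.0
    free_at = [0.0] * n_providers        # time at which each provider is free
    total_wait = 0.0
    for _ in range(n_jobs):
        t += rng.expovariate(arrival_rate)             # Poisson arrivals
        k = min(range(n_providers), key=free_at.__getitem__)
        start = max(t, free_at[k])                     # wait while provider is busy
        total_wait += start - t
        free_at[k] = start + rng.expovariate(1.0 / mean_service)
    return total_wait / n_jobs
```

With a mean service time of 1.2 against an arrival rate of 1.0, one provider is overloaded and the average wait grows, while four providers absorb the same stream with negligible waiting, mirroring Figures 3(a) and 3(b).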
4. Conclusion

Current cloud systems do not fully support inter-cloud interoperability, and an effective computing resource selection method is required to achieve inter-cloud interoperability for IaaS cloud providers. Based on our conceptual model for the Inter-cloud Interoperability Framework (IIF), this paper discusses a genetic algorithm based computing resource selection approach that can be exploited for interoperability in dynamic operation dispatching for IaaS clouds. The resource selection approach has two steps: first, identifying the available CPs which meet the current workflow requirements, and second, dispatching the workload on the available CPs effectively. The job allocation method is based on a GA and considers two main factors in defining the fitness function: the performance of each IaaS CP, measured from the CP's resource availability, response time and CPU throughput, and the minimum cost. The evaluation process used an agent-based model which adds a dynamic workload to the multiple cloud providers' environment and dispatches the waiting workload from the CS to the CPs. The simulation results show that the waiting time for required resources drops to a very small value with four CPs compared to a single-cloud environment. The MDA and SOA approaches are identified as appropriate for developing the IIF framework.
References
[1] R. Jardim-Goncalves, K. Popplewell, and A. Grilo, "Sustainable interoperability: The future of Internet based industrial enterprises," Comput. Ind., vol. 63, no. 8, pp. 731–738, 2012.
[2] C. Coutinho, A. Cretan, and R. Jardim-Goncalves, "Sustainable interoperability on space mission feasibility studies," Comput. Ind., vol. 64, no. 8, pp. 925–937, Oct. 2013.
[3] R. Jardim-Goncalves, C. Agostinho, F. Lamphataki, Y. Chalarabidis, and A. Grilo, "Systematisation of Interoperability Body of Knowledge: The foundation for EI as a science," Enterp. Inf. Syst. J., 2012.
[4] D. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley Professional, 1989.
[5] D. Bernstein, "The Intercloud: Cloud Interoperability at Internet Scale," Sixth IFIP Int. Conf. Netw. Parallel Comput., 2009.
[6] D. Bernstein, E. Ludvigson, K. Sankar, S. Diamond, and M. Morrow, "Blueprint for the Intercloud: Protocols and Formats for Cloud Computing Interoperability," Fourth Int. Conf. Internet Web Appl. Serv., pp. 328–336, 2009.
[7] A. Parameswaran and A. Chaddha, "Cloud interoperability and standardization," SETLabs Briefings, 2009.
[8] A. Celesti, F. Tusa, M. Villari, and A. Puliafito, "How to Enhance Cloud Architectures to Enable Cross-Federation," IEEE 3rd Int. Conf. Cloud Comput., Jul. 2010.
[9] D. Bernstein and D. Vij, "Using XMPP as a transport in Intercloud Protocols," 2nd USENIX Workshop on Hot Topics in Cloud Computing, 2010.
[10] L. Badger, R. Bohn, R. Chandramouli, T. Grance, T. Karygiannis, R. Patt-Corner, and J. Voas, "Cloud Computing Use Cases," National Institute of Standards and Technology, 2010.
[11] V. S. K. Nagireddi and S. Mishra, "An ontology based cloud service generic search engine," 8th Int. Conf. on Computer Science & Education (ICCSE), 2013, pp. 335–340.
[12] A. Abouzamazem and P. Ezhilchelvan, "Efficient Inter-cloud Replication for High-Availability Services," IEEE Int. Conf. on Cloud Engineering (IC2E), 2013, pp. 132–139.
[13] F. Pop, V. Cristea, N. Bessis, and S. Sotiriadis, "Reputation Guided Genetic Scheduling Algorithm for Independent Tasks in Inter-clouds Environments," 27th Int. Conf. on Advanced Information Networking and Applications Workshops, 2013, pp. 772–776.
[14] Y. Demchenko, C. Ngo, C. de Laat, J. Antoni Garcia-Espin, S. Figuerola, J. Rodriguez, L. M. Contreras, G. Landi, and N. Ciulli, "Intercloud Architecture Framework for Heterogeneous Cloud Based Infrastructure Services Provisioning On-Demand," 27th Int. Conf. on Advanced Information Networking and Applications Workshops, 2013.
[15] T. Nodehi, S. Ghimire, and R. Jardim-Goncalves, "Toward a Unified Intercloud Interoperability Conceptual Model for IaaS Cloud Service," Int. Conf. on Model-Driven Engineering and Software Development (MODELSWARD 14), 2014.
[16] "FI-WARE: Future Internet Core Platform," funded by the Seventh Framework Programme (FP7) and the European Commission. [Online]. Available: http://www.fi-ware.eu/.
[17] A. Cretan, C. Coutinho, B. Bratu, and R. Jardim-Goncalves, "NEGOSEIO: A framework for negotiations toward Sustainable Enterprise Interoperability," Annu. Rev. Control, 2012.
[18] OMG, "MDA Guide Version 1.0.1," June 2003.
[19] A. M. Jimenez, "Change propagation in the MDA: A model merging approach," The University of Queensland, 2005.
[20] OMG, "Meta Object Facility (MOF) 2.0 Query/View/Transformation Specification," 2011.
[21] A. Agrawal, A. Vizhanyo, Z. Kalmar, F. Shi, A. Narayanan, and G. Karsai, "Reusable Idioms and Patterns in Graph Transformation Languages," Electron. Notes Theor. Comput. Sci., Mar. 2005.
[22] OMG, "OMG Meta Object Facility (MOF) Core Specification (Version 2.4.1)," 2011.
[23] IBM, "SOA," 2008. [Online]. Available: http://www-01.ibm.com/software/solutions/soa/.
[24] S. Güner, "Architectural Approaches, Concepts and Methodologies of Service Oriented Architecture," Technical University Hamburg-Harburg, 2005.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-278
Research on Software Resource Sharing Management in Collaborative Design Environment Based on Remote Virtual Desktop
Wensheng XU a,1, Nan LI b, Hong TANG a and Jianzhong CHA a
a School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing, China
b School of Material and Mechanical Engineering, Beijing Technology and Business University, Beijing, China
Abstract. For complex product development, different experts from different domains may use various engineering software tools to collaborate in the product development process in a distributed environment. An engineering software resource pool needs to be built to facilitate efficient software resource sharing and management in the collaborative design process. Engineering software tools can be encapsulated either as SOA services through SOA-based technologies, or as interactive desktop services through remote virtual desktop technology, according to their different interfaces and requirements. For interactive software resources, a software resource sharing conceptual model is analyzed in this paper, and a software resource sharing framework based on remote virtual desktop technology is proposed for collaborative design. Based on Ulteo, an open-source remote virtual desktop platform, a software resource sharing platform is designed and implemented for high-speed train axle lightweight design, in which the relevant software tools are managed and provisioned as remote desktop interactive software services and can be shared and accessed in a distributed environment. Through the software resource sharing platform, users can customize their work desktop with different software resources according to their own demands, and various resource sharing modes can be well supported.
Keywords. Collaborative Design, Software Resource Sharing, Interactive Software, Remote Virtual Desktop
Introduction For complex product development, design tasks and multi-functional teams may be organized according to the product structural tree, and in each multi-functional team different experts from different domains may use various engineering software tools to collaborate in the product development process in a distributed environment. A large amount of engineering software tools may be involved in the development process, for example, a 3-D modelling tool – Pro/E, a meshing tool – HyperMesh, a finite element analysis tool – ANSYS, etc. An engineering software resource pool needs to be built to 1
1. Corresponding Author: [email protected]
facilitate efficient software resource sharing and management in the collaborative design process, so that developers in distributed locations can easily and conveniently access and use those engineering tools and transfer intermediate data among team members. With the support of an engineering software resource pool, there is no need to install engineering software tools locally for each developer in the product development process, and the utilization efficiency of all the software tools can be greatly enhanced. A resource cloud pool is the basis for the implementation of cloud manufacturing [1,2], and it can effectively enhance the utilization efficiency of manufacturing software resources. In the resource cloud pool, resources should be encapsulated as discoverable, usable services through servicing technology or virtualization technology [3,4], so that the heterogeneous structures of the resources and their underlying environment are hidden from end users and the resources can be shared effectively in the cloud manufacturing environment. Software resources are located at the bottom level of the whole cloud manufacturing architecture, and they need to be encapsulated and shared according to their different interfaces and user requirements. There are basically three types of interfaces for engineering software tools: command line interfaces, API interfaces, and interactive user interfaces. The software interfaces and user requirements determine how a software resource can be encapsulated, shared and presented in the design process. For command line interfaces and API interfaces, the software resources can be encapsulated as SOA-based services, so that other applications can conveniently invoke these services as needed, as shown in our previous work [5,6]. For interactive user interfaces, software resources can be encapsulated and presented in the form of a remote virtual desktop, so that the complete features of the software tools can be presented to users.
Users can interact with these software services remotely as if the applications were located locally. Currently there are two types of virtual desktop solutions: Virtual Desktop Infrastructure (VDI) and Server-Based Computing (SBC), also called remote desktop services or presentation virtualization [7]. A VDI solution provides full desktops for remote end users, each user with an individual operating system instance. SBC sessions run in a single shared server operating system and can provide connections either to individual applications or to the full desktop as needed. There are currently two main commercial virtual desktop solutions on the market: VMware products and Citrix products. VMware mainly provides a VDI virtual desktop solution, focusing on enterprise level applications based on data centers. Citrix XenDesktop is an SBC virtual desktop solution focusing on virtualization on terminals [8]. These two product families offer only limited open APIs for further development, so small and medium-sized enterprises have difficulties in further development and customization according to their different requirements and resource sharing modes; furthermore, their products are too costly for small and medium-sized enterprises. To implement interactive software resource sharing and on-demand services in a cloud environment in an affordable and flexible way for small and medium-sized enterprises, this paper proposes a software resource sharing conceptual model and a software resource sharing framework based on virtualization technology. An open-source virtual desktop platform, Ulteo [9], is adopted to construct and develop the software resource sharing platform, so interactive software services are encapsulated and presented with virtualization technology. All functions of the software resources can be presented to users, and software resources either in Linux or in
Windows environment can be easily encapsulated, shared and integrated in the platform, and services can be provided for small and medium-sized enterprises in an on-demand fashion.
1. The conceptual model of software resource sharing

Software resources are indispensable in enterprise business processes. They are distributed across different enterprises and regions, or across different departments within an enterprise, and they have some unique characteristics, including the ability to be virtualized, copied and shared. A specific software resource can be denoted as sri, and a software resource set as SR = {sr1, sr2, sr3, …, srn}. Ju et al. propose a software resource sharing model based on a distributed network model, composed of a location set, a software resource domain set, a software location set, a software resource set and a sharing relations set [10]; in that model, however, the relations between the software resources and the users are not clearly defined, so it cannot clearly describe all possible sharing modes for users. To express the sharing relations of software resources effectively and clearly, a software resource sharing conceptual model (SRSCM) is proposed in this paper, as shown in Figure 1. An SRSCM is a quintuple SRSCM = {P, SR, SRZ, S, UV}, in which P, SR, SRZ, S and UV refer to the location set, software resource set, shared resource zone, sharing relation set and user view set respectively:
Figure 1. The conceptual model of software resource sharing.
(1) A location set includes the locations of software resources in an enterprise or in a department. Let pi denote a location; then the location set P = {p1, p2, …, pn}. (2) A software resource set is the set of software resources needed in manufacturing activities. Let sri denote a software resource of some type; then the software resource set SR = {sr1, sr2, sr3, …, srn}. Each sri is located at some location pj and can have one or several software resource instances (SRI), each SRI being a running instance of the software resource. (3) The Shared Resource Zone (SRZ) is a set of resource services which can provide services for users. It includes a number of software resource services (SRS) and software resource composite services (SRCS). Each SRS is formed by encapsulating a corresponding SRI as a service. An SRCS is the result of static integration of multiple SRS. (4) A user view (UV) is the current view of the available shared resources for end users; it is composed of one or more shared software resource entities (SSRE) and/or
shared software resource composite entities (SRCE). It can be presented in the form of a virtual desktop, through which users can access the SSRE or SRCE. Each SSRE is formed by sharing an SRS, while an SRCE is formed by sharing an SRCS. (5) Sharing relations are composed of two types of links: the directed service encapsulation links from an SRI to some SRS, denoted as sei, and the directed software presentation links from an SRS to some SSRE or from an SRCS to some SRCE in a user view, denoted as spi. The sharing relation set is then S = {se1, se2, …, sen, sp1, sp2, …, spm}. Various software resources can be encapsulated as resource services through virtualization technology; all the software resource services form a software resource cloud pool under central scheduling and management, and on-demand sharing and service can be implemented. Based on the above concepts, all possible sharing modes of software resources in enterprises can be expressed. For example, multiple SRS can be utilized by one user view, i.e. a user view can simultaneously utilize multiple software resource services in a unified local environment, forming a "multiple services to one user" relation; or one SRS can be shared by two user views, forming a "one service to multiple users" relation, as shown in Figure 2.
Figure 2. Different sharing modes between software services and users.
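The quintuple and the two example sharing modes can be rendered as a minimal executable sketch. All class and function names below are our shorthand for the paper's sets (P, SR, SRZ, S, UV), not a real API.

```python
from dataclasses import dataclass, field

@dataclass
class SoftwareResource:                  # sr_i, placed at some location p_j in P
    name: str
    location: str

@dataclass
class SRService:                         # SRS: an SRI encapsulated as a service
    resource: SoftwareResource           # (the se link: SRI -> SRS)

@dataclass
class UserView:                          # UV: what one user's virtual desktop shows
    entities: list = field(default_factory=list)   # SSRE / SRCE

def present(service, view):
    """sp link: present an SRS in a user view as a shared entity (SSRE)."""
    view.entities.append(service)

# "one service to multiple users": the same SRS presented in two views;
# "multiple services to one user": several SRS presented in a single view.
ansys = SRService(SoftwareResource("ANSYS", "dept-A"))
proe = SRService(SoftwareResource("Pro/E", "dept-B"))
alice, bob = UserView(), UserView()
present(ansys, alice); present(ansys, bob)   # one to many
present(proe, alice)                          # many to one, for alice's view
```

The sei/spi links of Figure 2 thus become plain object references, which is enough to enumerate every sharing mode the model allows.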
2. Software resource sharing framework based on virtual desktop

Based on the conceptual model of software resource sharing, and in order to better utilize distributed heterogeneous software resources and implement effective on-demand services, a software resource sharing framework based on virtual desktop is proposed. The framework can be divided into four layers: the software resource layer, the virtualization platform layer, the virtual resource cloud pool layer, and the application layer, as shown in Figure 3. The four layers work together so that distributed heterogeneous software resources can be shared and on-demand services can be provided, and users can access these software resources transparently without knowing their underlying environments. The application layer is located at the top of the whole architecture; it provides applications for end users and interacts with them, so it can also be called the human-machine interaction layer. It provides an on-demand user interface in the form of a remote virtual desktop. Different user groups may have different applications; user groups can be used to manage the user roles, so as to manage the sharing requirements of different users in groups. The virtual software resource services form the virtual resource pool. Users can search, select and combine the needed software services, while the software resources hide their distributed and heterogeneous features from users.
On-demand and personalized resource services can be achieved through users' service selections. The virtualization layer provides virtual desktop technology to support desktop virtualization. Desktop virtualization is software technology that separates the desktop environment and associated application software from the physical client device that is used to access it [7]. Interactive software resources are wrapped and virtualized as services through virtual desktop technology, so they can be managed centrally and shared by distributed users. The various software resources located in different distributed places form the software resource layer. The software resource layer holds the actual service providers for end users, and it is the running basis for the whole framework.
Figure 3. Software resource sharing framework based on virtual desktop.
3. Implementation of the software resource sharing platform

3.1. Structure of the software resource sharing platform

According to the framework, and based on the Ulteo virtual desktop technology, the structure of the software resource sharing platform is shown in Figure 4. The virtual desktop platform Ulteo is used to implement the interactive software resource sharing platform. Ulteo (Open Source Enterprise Virtual Desktop and Application Delivery solutions; open-source VDI and SBC) is an open-source virtual desktop that can be tailored and further developed according to users' requirements. Ulteo is based on the Debian and Ubuntu operating systems, and it allows users to run applications on Linux and Microsoft Windows through Web browsers. The session manager server is the core of software resource sharing. Inside the session manager server there are four management modules: the user management module, the service host management module, the virtual service management module, and the system management module. Application servers on which engineering software tools run can register with the service host management module, and the virtual service management module then maintains an interactive software service list in
W. Xu et al. / Research on Software Resource Sharing Management
283
accordance with the application servers. The session manager server also provides 2 management portals: platform management portal and user self-management portal. Through the platform management portal, administrators of the resource sharing platform can manage the platform, including shared software joining, suspending, removing, etc. Through the user self-management portal, end users can register with this platform, and search and select interactive software tools in the platform, so ondemand software services can be achieved. The platform system management module is responsible for monitoring the platform, including status monitoring, log management, system performance measurement, billing management. Based on this platform, software resources can be rent to users, users can apply for software resources according to their own needs and also financial constraints if billing is applied. A user who works on the Web Client (a desktop or laptop computer) can use a web browser to access the web portal server and then login there and select the needed software resources to use.
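The registration and lookup flow of the session manager's management modules can be sketched as a minimal in-memory registry. All class and method names below are illustrative assumptions, not Ulteo's actual API:

```python
# Minimal sketch of the session manager's service-host registration and
# virtual-service list maintenance described above. Names are illustrative.

class SessionManager:
    def __init__(self):
        self.application_servers = {}   # host -> set of software tool names
        self.service_list = {}          # software tool name -> list of hosts

    def register_application_server(self, host, software_tools):
        """Service host management: an application server announces its tools."""
        self.application_servers[host] = set(software_tools)
        for tool in software_tools:
            self.service_list.setdefault(tool, []).append(host)

    def suspend_application_server(self, host):
        """Platform management portal: remove a host and its services."""
        for tool in self.application_servers.pop(host, set()):
            self.service_list[tool].remove(host)

    def find_service(self, tool):
        """User self-management portal: on-demand lookup of a software service."""
        return self.service_list.get(tool, [])

sm = SessionManager()
sm.register_application_server("app-srv-1", ["ANSYS", "Pro/E"])
sm.register_application_server("app-srv-2", ["MATLAB"])
print(sm.find_service("ANSYS"))   # -> ['app-srv-1']
```

The two management portals would then be thin front-ends over these operations: the platform portal calling register/suspend, the user portal calling the lookup.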
Figure 4. Structure of the software resource sharing platform.
3.2. Communication processes in the software resource sharing platform

In the Ulteo-based software resource sharing platform, a web client accesses the platform web portal over HTTP. After login, a Remote Desktop Protocol (RDP) Java applet, developed with properJavaRDP [11], is downloaded from the portal and runs in the web browser on the web client. A session is then established between the web client and the session manager, identified by a token. With the information provided by the session manager, the RDP Java applet then tries to establish an application connection between the web client and the intended application server, using the RDP protocol on top of the SSL secure layer. After SSL handshaking, client-server authentication, data communication channel establishment and graphical data connection verification, the web client receives RDP data packages from the application server and presents a remote virtual desktop to the user. The user can then interact remotely with the selected software tools on the application servers through this remote virtual desktop. The communication processes in the platform are shown in Figure 5.
Figure 5. Communication processes in the software resource sharing platform.
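The token-based session establishment can be sketched as follows. The message fields and class names are assumptions for illustration, not the actual Ulteo/properJavaRDP wire format; the real applet would additionally perform the SSL handshake and channel setup against the returned server:

```python
# Sketch of the session-establishment handshake described above: the web
# client asks the session manager for a session, receives a token plus the
# target application server, then would open an RDP-over-SSL connection.

import secrets

class SessionManagerStub:
    def __init__(self, service_list):
        self.service_list = service_list   # tool -> application server host
        self.sessions = {}                 # token -> (user, server)

    def open_session(self, user, tool):
        server = self.service_list[tool]
        token = secrets.token_hex(16)      # the token identifies the session
        self.sessions[token] = (user, server)
        return {"token": token, "server": server, "protocol": "rdp+ssl"}

def connect(client_user, tool, manager):
    # 1. Login and session request (over HTTP in the real platform;
    #    simplified to a direct method call here).
    info = manager.open_session(client_user, tool)
    # 2. The RDP applet would now do SSL handshaking, client-server
    #    authentication and graphical channel setup against info["server"];
    #    here we only return the parameters it would use.
    return info

mgr = SessionManagerStub({"ANSYS": "app-srv-1"})
session = connect("developer3", "ANSYS", mgr)
print(session["server"], session["protocol"])   # -> app-srv-1 rdp+ssl
```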
3.3. Example of software resource sharing for the lightweight design of axles

Take the lightweight design of high-speed train axles as an example; the development process and the needed engineering software tools are shown in Figure 6. Once a software resource pool including all the needed engineering software tools is set up, developers can register at the platform and select the needed software services. The Pro/E, HyperMesh and ANSYS tools run in a Windows environment, while the MATLAB tool runs in a Linux environment. Developers can access the software services remotely in the virtual desktop and interact with the tools as needed, without needing to know their underlying running environments, as shown in Figure 7. For Developer 3 from Figure 7 in the axle lightweight design project, the remote desktop is shown in Figure 8, in which MATLAB in the Linux environment and ANSYS in the Windows environment can both be accessed through the web browser in the local environment.
Figure 6. Process of lightweight design of axles.
Figure 7. Developers and their needed software resources.
Figure 8. The work desktop for Developer 3 with the MATLAB and ANSYS software service.
4. Conclusions

Software resources can be encapsulated and provisioned as remote-desktop interactive software services, and can be shared and accessed in a distributed environment. The distributed and heterogeneous nature of the underlying software resources can be hidden from end users through virtual desktop technology. Users can select their needed software resources, whether from a Linux or a Windows environment, in an on-demand fashion, and can interact with the software tools in the remote virtual desktop in a unified environment through a web browser. With the virtual desktop sharing framework, interactive software resources can be encapsulated and shared conveniently and effectively within and among enterprises, and the product development process can be well supported.
Acknowledgement This work is supported by the National Natural Science Foundation of China (51175033) and National High Technology Research and Development Program of China (2013AA041302).
References
[1] Chunquan Li, Chunyang Hu, Yanwei Wang, Research of resource virtualization technology based on cloud manufacturing, Advanced Materials Research, 201-203 (2011), 681-684.
[2] Lin Zhang, Yongliang Luo, Fei Tao, et al., Study on the key technologies for the construction of manufacturing cloud, Chinese Journal of Computer Integrated Manufacturing Systems, 16(11) (2010), 2510-2520 (in Chinese).
[3] Lei Wu, Xiangxu Meng, Shijun Liu, Service-oriented encapsulation of manufacturing resources, Proceedings of the International Conference on Services Computing (SCC 2007), July 2007, Salt Lake City, USA, pp. 727-728.
[4] Jiri Vorisek, Business drivers for application servicing and a Software-as-a-Service model, Proceedings of the Fourth International Conference on Electronic Business, 2004, Beijing, pp. 511-516.
[5] L.J. Kong, W.S. Xu, N. Li, J.Z. Cha, Research on service encapsulation of manufacturing resources based on SOOA, Advances in Information Sciences and Service Sciences, 5(1) (2013), 158-166.
[6] Wensheng Xu, Lingjun Kong, Nan Li, Jianzhong Cha, A service encapsulation method in cloud simulation platform, Communications in Computer and Information Science, 10 (2012), 431-439.
[7] http://en.wikipedia.org/wiki/Desktop_virtualization
[8] http://www.citrix.com/products/xenapp/overview.html
[9] http://www.ulteo.com/home
[10] Wenjun Ju, Linfu Sun, Huijuan Zhao, et al., Research on software resource sharing based on server-based computing, Chinese Journal of Computer Integrated Manufacturing Systems, 11(10) (2005), 1486-1490 (in Chinese).
[11] http://properjavardp.sourceforge.net/
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-287
A Lean Manufacturing Implementation Strategy and its Model for Numerical Control Job Shop under Single-piece and Small-batch Production Environment Ao BAI 1, Ping XIA, Liang ZENG Institute of Mechanical Manufacturing Technology, China Academy of Engineering Physics, P. R. China
Abstract. Lean Manufacturing (LM) is an advanced manufacturing paradigm which aims at improving production efficiency and product quality while eliminating various kinds of waste and cost on the shop floor. Clearly, different manufacturing environments or modes need different lean manufacturing implementation methods. In this study, an LM implementation model for the Numerical Control (NC) job shop under a single-piece and small-batch production environment is proposed and constructed. Based on a detailed analysis of the characteristics and problems of the modern NC job shop, a new and specific structure of the LM system is presented. On this basis, an LM implementation model is established, made up of a problem analysis layer, an information platform layer and a system application layer. To support the model's implementation, an operational procedure is further put forward. Finally, a case study in a typical NC job shop of a part fabrication manufacturer is presented to validate the model's validity and feasibility. Practical application shows that with this LM adoption and implementation model, the overall efficiency of the NC job shop under a single-piece and small-batch production environment can be profoundly improved.

Keywords. lean manufacturing, lean enterprise, NC job shop, discrete shop floor, implementation model, implementation framework, digital manufacturing, single-piece and small-batch production
Introduction

Lean Manufacturing (LM) or Lean Production (LP) is an advanced manufacturing paradigm first practiced by the Toyota Motor Company in its Japanese factories since the middle of the 20th century. More than half a century later, LM still has the vitality to be used by all kinds of manufacturing enterprises around the world to achieve their organizational goals and remain competitive in the global market. In general, LM means doing more work with fewer resources, for production organizations and service sectors alike. The standard concept of LM can thus be described as: LM is an adaptation of mass production in which work is accomplished in less time, in
Corresponding Author: Ao Bai, Postal Mail Box 919/621, Mianyang City, Sichuan Province, P. R. China; E-mail:
[email protected],
[email protected].
A. Bai et al. / A Lean Manufacturing Implementation Strategy and Its Model
a smaller space, with fewer workers, and with less equipment, and yet achieves higher quality levels in the final product [1]. Based on Toyota's successful practice, much work has been done to adopt and implement LM in different manufacturing enterprises. For example: in [2], an LM implementation method considering cellular layout was proposed, with a case study to validate its feasibility; in [3], an LM process case study demonstrated production flow analysis through Value Stream Mapping (VSM); in [4], a study integrated LM and group technology to shorten the production cycle, improve product quality and cut product cost, bringing great benefits to the whole company; in [5], LM principles in the remanufacturing industry were given, with a case study at a toner cartridge remanufacturer; in [6], an LM strategy for hot charge operation of a steel mill was shown, where substantial cost savings could be made once the full benefits of a lean production strategy are considered; in [7], a continuous improvement model of LM based on multi-type and small-batch production was established to help enterprises guarantee their survival and development; in [8], a five-stage method was proposed for Chinese enterprises to carry out LM step by step, including the tools configured for each stage. From this research it can be concluded that there is no uniform LM implementation standard: different enterprises need different methods to adopt or realize LM based on their real production modes and conditions. In this study, an LM implementation reference model specifically suitable for the NC job shop under a single-piece and small-batch production environment is proposed.
Single-piece and small-batch production is known for its high flexibility and is now widely used in the equipment manufacturing industry, for example in ship, boiler, steam turbine, power plant and petrochemical equipment manufacturing. Unlike common LM applications in mass production or multi-variety and small-batch production environments, the core of LM in single-piece and small-batch production is the full use of different types of digital technologies and information systems to collect data, integrate information and share knowledge, helping NC job shops realize and pursue exact, real-time, quantitative and highly effective manufacturing to achieve their enterprises' lean goals. The rest of this paper is organized as follows: Section 1 analyzes the characteristics of the NC job shop and then proposes a new LM structure as the LM implementation strategy. In Section 2, the model of the specific LM system is established, and its main components are described in detail. Section 3 contains an operational procedure to help implement LM, together with a simple case study from a part fabrication manufacturer. Finally, Section 4 contains the conclusion.
1. Requirement analysis of the NC job shop and the new structure of the LM system

As an advanced manufacturing paradigm, LM was first used by the Toyota Motor Company and is generally regarded as being best suited to the mass production mode. The successful implementation of LM is greatly influenced by many factors, such as product features, production mode, management style, organizational structure and even the enterprise's culture. So different methods should be taken to implement LM based on the enterprise's real conditions. In the NC job shop under single-piece
and small-batch production environment, numerical machine tools are generally installed and deployed following the so-called process concentration principle, the production batch is usually very small (often just one), most customers' orders are completely new and non-standard, and different types of manufacturing resources need to be prepared before work starts. Therefore, the overall efficiency of the NC job shop under a single-piece and small-batch production environment is generally low, and it becomes very difficult to manage and control the execution of production tasks with less time and fewer resources, as LM requires. According to our recent surveys of several typical equipment manufacturing enterprises, at least three main problems are now widespread in their NC job shops under single-piece and small-batch production environments:

• First, from the perspective of equipment, due to the lack of suitable or optimal cutting parameters for the variety of products or parts, the efficiency of a single numerical machine tool (the general equipment in NC job shops) is usually low, so some numerical control machines easily become bottleneck nodes that prevent tasks or jobs from finishing in the planned time period. Moreover, some complex numerical control programs may not be examined and simulated before execution, so operation accidents can occur, damaging the machine tools and bringing unnecessary loss.

• Second, from the perspective of resources, the waiting time for different manufacturing resources is generally too long. If preparation is poor, it is common that a production task or job has arrived at a work station while the needed manufacturing resources are still in the warehouse or elsewhere; furthermore, the management of manufacturing resources is often in disarray, and resources may be damaged, overdue or even missing.

• Third, from the perspective of process, production tasks are not well arranged and scheduled to balance the workload among different workstations or work centers. Due to the lack of production scheduling systems, production scheduling is mainly based on the production managers' skills and experience, and is generally complex and time-consuming. So, facing changeable and uncertain customer requirements, it is usually very hard to make feasible production plans, even for a skillful production manager or expert.
To deal with the three main problems above, many actions need to be taken. Based on our previous research and literature reviews, we select lean manufacturing as the main way to promote the efficiency of the NC job shop under a single-piece and small-batch production environment. The practical approach and implementation model of LM in our study differ from the original approach first practiced by the Toyota Motor Company. Given the main problems of the NC job shop mentioned previously, we use information and digital technologies and tools to realize exact, real-time and effective management of the manufacturing execution process in a single-piece and small-batch production environment. Based on this consideration, we change the structure of the traditional LM system from Figure 1(a) to Figure 1(b), as depicted in Figure 1.
Figure 1. The change of a lean manufacturing system’s structure.
From the top part of Figure 1 (Figure 1(a)), we can see that the foundation of a general or traditional lean manufacturing system is the elimination of waste, the core is worker involvement, the two supporting pillars are just-in-time production and automation, and the goal is to satisfy the customer's focus or needs. In contrast, from the bottom part of Figure 1 (Figure 1(b)), we can see that the foundation of the specific lean manufacturing system for the NC job shop in a single-piece and small-batch production environment is data collection and information integration, the core is real-time and visible management, and the two supporting pillars are standardization and digitalization; the goal remains the same as in the general lean manufacturing system: to satisfy customers' focuses or needs.

We now discuss the two supporting pillars of this specific structure. Standardization means that every operation in the NC job shop should be strictly in accordance with the quality management architecture and standard procedures. Due to the complexity of the NC job shop and its changeable tasks, some workers are inclined to do their jobs in an unconstrained, free manner, which can easily introduce errors or interrupt the whole production. So, training workers before they begin working, and setting incentive or penalty policies, are indeed necessary to make them follow the standard procedures. On the other hand, digitalization is also needed to help workers standardize their behavior. With digitalization technologies and tools, many procedures and business processes can be executed in a standard and uniform manner. For example, the correctness of production data and parameters can be guaranteed, indispensable steps cannot be omitted, and it becomes very convenient to track and trace tasks because key components are tagged and their historical status and activities are recorded in a backend database. Moreover, production managers obtain more useful information to help them make quick and correct decisions to manage and control the manufacturing field in real time.
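The track-and-trace idea can be illustrated with a tiny record store: tagged components get every status change written to a backend database, and the history can be queried later. The schema, table and function names are assumptions for illustration, not a description of any specific shop-floor system:

```python
# Illustrative sketch: tagged components record every status change in a
# backend database (in-memory SQLite here) so their history can be traced.

import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE component_history (
                  tag_id TEXT, status TEXT, station TEXT, recorded_at TEXT)""")

def record_status(tag_id, status, station):
    """Store one status event for a tagged component."""
    db.execute("INSERT INTO component_history VALUES (?, ?, ?, ?)",
               (tag_id, status, station, datetime.now(timezone.utc).isoformat()))

def trace(tag_id):
    """Return the full recorded history of a component, in insertion order."""
    rows = db.execute("SELECT status, station FROM component_history "
                      "WHERE tag_id = ? ORDER BY rowid", (tag_id,))
    return rows.fetchall()

record_status("AXLE-001", "machining started", "WS-3")
record_status("AXLE-001", "machining finished", "WS-3")
record_status("AXLE-001", "inspection passed", "QC-1")
print(trace("AXLE-001"))
```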
2. Implementation model of the lean manufacturing system

Based on the specific structure of the lean manufacturing system in a single-piece and small-batch production environment, the implementation model is now presented and illustrated. The implementation model of the lean manufacturing system for the NC job shop contains three layers: the Problem Analysis Layer (PAL), the Information Platform Layer (IPL) and the System Application Layer (SAL). Each layer has its own role and function, described in detail as follows:

• In the Problem Analysis Layer, two important strategies support LM: a management innovation strategy and a technology innovation strategy. With the management innovation strategy, business processes in the NC job shop are diagnosed, optimized and re-engineered, and the organization's structure is rebuilt to satisfy the standardization requirement of LM. With the technology innovation strategy, information and digital systems are constructed and different kinds of production processes are improved to support LM's digitalization needs.

• In the Information Platform Layer, four information platforms are planned and built to support the implementation of the LM system under a single-piece and small-batch production environment: the Numerical Processes Optimization and Machine Monitoring Platform (NPOMM-Platform), the Manufacturing Resource Quick Preparing and Management Platform (MRQPM-Platform), the Production Planning and Scheduling Platform (PPS-Platform) and the System Integration Platform (SI-Platform). The goals of the platforms are: a) the NPOMM-Platform enhances the efficiency of single numerical machine tools; b) the MRQPM-Platform shortens the waiting time for various manufacturing resources in the job shop; c) the PPS-Platform enhances the efficiency of the overall production process; d) the SI-Platform integrates the different platforms so they can exchange data and information to achieve maximum system performance. Through these four platforms, the LM model in the NC job shop can be turned from theory into practice.

• In the System Application Layer, specific information and digital systems are deployed to enhance the efficiency of machine running, resource delivery and process execution. The details of each information system are listed in Table 1. From Table 1, we can see that different information systems play different roles in the LM system to achieve their unique effects. It should be noted that the information systems must not work separately from each other; they must be well integrated and work as a whole.

Besides these three layers, it should be emphasized that the foundation of the LM model in the NC job shop under a single-piece and small-batch production environment is data collection, information integration and knowledge sharing. On this foundation, real-time, exact, quantitative and highly effective manufacturing can be conveniently achieved to manage and control the NC job shop in a lean way. Obviously, the leanness of a manufacturing enterprise largely depends on its workshops, including fabrication job shops and assembly lines; if its NC job shop is lean, this is a great motivation for the leanness of the overall enterprise.
Figure 2. New implementation model of lean manufacturing system for NC job shop.
Table 1. Summaries of the information systems supporting lean manufacturing implementation in the NC job shop.

Code | Full name | Short name | Managed objects | Functions and roles in the lean manufacturing system
A1 | Numerical Process Optimization System | NPOS | Equipment/machines | Optimize the cutting parameters (and store them) to gain high efficiency of machine running
A2 | Machine Process Capability Management System | MPCMS | Equipment/machines | Manage the machines and repair them within a controlled time period to keep them in an available status
A3 | Numerical Program Verification and Simulation System | NPVSS | Equipment/machines | Verify the correctness of numerical programs to prevent program errors and avoid damage to machines
B1 | Material Delivery System | MDS | Manufacturing resources | Prepare materials and deliver them in a JIT mode
B2 | Tool Life-Cycle Management System | TLCMS | Manufacturing resources | Prepare tools and manage their whole life cycle exactly to avoid wasting tools
B3 | Process Equipment Management System | PEMS | Manufacturing resources | Prepare and manage process equipment and deliver it in a JIT mode when a process needs it
C1 | Computer-Aided Scheduling System | CASS | Manufacturing processes | Generate feasible production orders or tasks and re-schedule them if any abnormal event occurs
C2 | E-Kanban System | EKS | Manufacturing processes | Display information from the manufacturing field in real time to reflect the status of task execution
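The platform-to-system relationships in Table 1 can also be captured as a small data structure, for example as a configuration that the System Integration Platform could consume. The contents mirror the table; the dictionary layout itself is an illustrative choice:

```python
# The platform/system mapping from Table 1 as a plain data structure.

PLATFORMS = {
    "A": {"name": "Numerical processes optimization and machine monitoring",
          "systems": {"A1": "NPOS", "A2": "MPCMS", "A3": "NPVSS"},
          "goal": "Enhance the efficiency of single machine tools"},
    "B": {"name": "Manufacturing resource quick preparing and management",
          "systems": {"B1": "MDS", "B2": "TLCMS", "B3": "PEMS"},
          "goal": "Shorten the waiting time of manufacturing resources"},
    "C": {"name": "Production planning and scheduling",
          "systems": {"C1": "CASS", "C2": "EKS"},
          "goal": "Enhance the efficiency of the overall process"},
}

def systems_for(platform_code):
    """List the short names of the systems belonging to one platform."""
    return sorted(PLATFORMS[platform_code]["systems"].values())

print(systems_for("B"))   # -> ['MDS', 'PEMS', 'TLCMS']
```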
3. Operational procedure of the lean manufacturing system and a case study

To implement lean manufacturing in a modern NC job shop under a single-piece and small-batch production environment, we further propose an operational procedure based on the implementation model above. The operational procedure integrates two parts, general information systems and lean manufacturing-related information systems, as depicted in Figure 3. The general manufacturing information systems are the Production Planning System (PPS), the Computer Aided Process Planning (CAPP) system and the Manufacturing Execution System (MES). These three general information systems provide the various types of important, basic production-related data needed to support the running and execution of production orders and tasks in the job shop. The lean manufacturing-related information systems are the systems proposed in Figure 2 (see details in Table 1) to enhance the efficiency of single machine tools, shorten the waiting time for manufacturing resources and enhance the efficiency of the overall process. With the lean manufacturing-related information systems, the execution of production orders and tasks in the job shop is managed, controlled and optimized in a more exact and quantitative manner to realize the lean goal. Combining the general and lean manufacturing-related information systems, the operational procedure of lean manufacturing in the job shop under a single-piece and small-batch production environment contains four main steps:
• The first step is to accept two important production instructions from the PPS and CAPP simultaneously: the production task/job information and the process rule information.

• In the second step, with the MDS, TLCMS and PEMS, the availability of the main manufacturing resources needed in the production processes, such as materials, tools and process equipment, is carefully checked; if the current manufacturing resources cannot satisfy the requirements of the production tasks, manufacturing resource purchase orders or self-production orders are generated instantly.

• In the third step, the different production tasks are scheduled and planned in the CASS based on data from the PPS and MES. The PPS provides the CASS with production planning data (such as material code, material name, required amount, delivery time and special requirements), and the MES provides the CASS with production execution status data (such as progress, finished amount, workers' workload and equipment capacity). In this step, production tasks/jobs are arranged in a more reasonable and feasible way to achieve a shorter production cycle time and a well-balanced workload at every workstation.

• In the fourth step, the production tasks are executed sequentially at each workstation or work center, while the MDS, TLCMS and PEMS deliver the manufacturing resources in a Just-In-Time (JIT) mode without long waiting. To promote the performance of the numerical machine tools, the NPOS, MPCMS and NPVSS also work together to generate optimal cutting parameters and verify the validity of the NC programs. The EKS, a production information display system deployed in the manufacturing field, shows real-time production information and production indices to reflect the status of the production process. With the information displayed by the EKS, production managers and team leaders can take immediate action to eliminate errors or deviations between the planned process and the real execution.
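The four steps above can be sketched as a single orchestration function. The information systems are reduced to plain Python stand-ins, and every name below is an illustrative assumption, not the real systems' interfaces:

```python
# Self-contained sketch of the four-step operational procedure above.

class ResourceSystem:
    """Stand-in for MDS / TLCMS / PEMS: stock check plus order generation."""
    def __init__(self, name, stock):
        self.name, self.stock = name, set(stock)
    def missing(self, required):
        return sorted(set(required) - self.stock)

def run_lean_procedure(task, process_rule, resource_systems, stations):
    # Step 1: the production task and process rule arrive from PPS and CAPP.
    required = process_rule["resources"]
    # Step 2: check resource availability; raise purchase orders for any gaps.
    purchase_orders = [(rs.name, item)
                       for rs in resource_systems
                       for item in rs.missing(required.get(rs.name, []))]
    # Step 3: CASS schedules the task's steps onto stations (round-robin here
    # as a trivial stand-in for real scheduling with PPS/MES data).
    plan = [(step, stations[i % len(stations)])
            for i, step in enumerate(process_rule["steps"])]
    # Step 4: execute sequentially; the EKS would display this status live.
    log = [f"{step} @ {station}" for step, station in plan]
    return purchase_orders, log

mds = ResourceSystem("MDS", stock={"steel blank"})
tlcms = ResourceSystem("TLCMS", stock={"end mill"})
orders, log = run_lean_procedure(
    task="axle-lightweight-01",
    process_rule={"resources": {"MDS": ["steel blank"],
                                "TLCMS": ["end mill", "drill"]},
                  "steps": ["rough turn", "finish turn", "inspect"]},
    resource_systems=[mds, tlcms],
    stations=["WS-1", "WS-2"])
print(orders)   # -> [('TLCMS', 'drill')]
print(log)
```

The point of the sketch is the data flow: resource gaps surface before execution (step 2), and the schedule drives the execution log that a kanban display would show.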
Through this operational procedure (see Figure 3 for details), the general information systems and the lean manufacturing-related information systems are integrated into a whole lean manufacturing system. With this system, the physical item flow can be driven by the virtual information flow, and production execution can work in a more exact and effective way, so the management and control level of the NC job shop in a single-piece and small-batch production environment can be greatly enhanced.

Following this lean manufacturing implementation strategy, its model and the corresponding operational procedure, a lean manufacturing implementation has been carried out for a modern NC job shop under a single-piece and small-batch production environment at a typical part fabrication manufacturer. To push forward this complex system engineering effort, a project team was built, mainly consisting of the enterprise's top leaders, lean manufacturing experts, ordinary business staff, system analysts and system developers. Judging from the current progress of the lean manufacturing system implementation, the work is well controlled and managed, some accomplishments have been made, and exact, effective and quantitative manufacturing has been preliminarily realized. With the advancement of the lean manufacturing project, it is estimated that the work potential and efficiency of the NC job shop have been enhanced by nearly 10 percent over the past two years.
Figure 3. Operational procedure of lean manufacturing system for NC shop floor.
4. Conclusion

With the lean manufacturing model described above, we have started to construct a lean manufacturing prototype system for a typical NC job shop under a single-piece and small-batch production environment to achieve exact, effective and quantitative manufacturing, which will be a necessary step toward realizing the lean enterprise for modern discrete manufacturers. Our proposed lean manufacturing system largely adopts information and digital technology to provide managers and leaders with real-time decision-supporting information for quick decisions, and the information flow is used to drive the physical item flow at the right time so that everything is ready before work starts. With our lean manufacturing system, the efficiency of a single machine tool can be greatly promoted, the waiting time for different manufacturing resources can be sharply reduced and the performance of the overall manufacturing business processes can be remarkably improved, thus realizing exact, high-efficiency manufacturing to achieve the lean goal and finally better satisfy customers' focuses and requirements. It should be noted that the proposed lean manufacturing model is only suitable for the single-piece and small-batch production mode in the NC job shop; for other production modes in other manufacturing fields, new and specific lean manufacturing models still need to be carefully explored and practiced based on their real conditions. We still firmly believe that there is no common LM implementation approach suitable for all types of manufacturers.
Acknowledgement

The authors gratefully acknowledge the support of the S&T Special Project of the China Academy of Engineering Physics (contract No. 9120601) and the National Natural Science Foundation of China (contract No. 50675201), which gave us the opportunity to implement a specific lean manufacturing system for an NC job shop in single-piece, small-batch production mode. The authors also express their deep appreciation to their colleagues for their hard and fruitful work.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-297
Uncertainties in Cloud Manufacturing

Yaser YADEKAR a, Essam SHEHAB a,1 and Jorn MEHNEN a
a Manufacturing and Materials Department, Cranfield University, UK
Abstract. The use of new technologies in information systems and advanced networks has allowed the manufacturing industry to deploy new, complex manufacturing systems based on advanced networks and new computing technologies. Cloud Manufacturing, as a new paradigm, is continuously gaining attention from academia and industry. However, there is still a lack of understanding of the concept of Cloud Manufacturing, its implementation and, in particular, the uncertainties that come with this new technology. This paper focuses on defining and evaluating uncertainties in Cloud Manufacturing, provides a list of identified uncertainties and proposes an uncertainty management approach for Cloud Manufacturing.
Keywords. Cloud Manufacturing, Cloud Computing, Uncertainties
Introduction

The use of new technologies and networks is becoming a critical success factor in any business enterprise. Enterprises try to gain competitive advantage in global markets by using the latest technologies, along with networks, to enable collaboration. Currently, enterprises rely on many advanced network technologies, such as Agile Manufacturing (AM), Network Manufacturing (NM) and the Manufacturing Grid (MG), to operate a single manufacturing task through the integration of widely distributed resources [1]. These manufacturing networks enable collaboration and the sharing of manufacturing resources between manufacturing units. Today, however, the manufacturing industry faces problems with these existing network technologies that affect production. These include the sharing of manufacturing resources, where resources are centralized in the network but cannot be distributed through it owing to a lack of manufacturing service management, and the inability to access hard manufacturing resources (equipment) in the network owing to the complications of transferring hard resources into it [1,2,3]. Another problem is the difficulty of knowledge sharing between manufacturing units, suppliers, customers and partners, caused by geographical distance, national regulations, different operating systems, and the volume of data and complexity of processes in manufacturing [4]. Knowledge sharing can inform development strategies on how both to enhance competitive advantage and to understand manufacturing practice within the industry [5].
1 Essam Shehab, Manufacturing and Materials Department, Cranfield University, UK; E-mail: e.shehab@cranfield.ac.uk
Y. Yadekar et al. / Uncertainties in Cloud Manufacturing
To address these problems affecting the manufacturing industry, a new manufacturing model called Cloud Manufacturing has emerged. The concept is to integrate existing manufacturing technologies with new computing technologies so as to distribute manufacturing resources and capabilities between manufacturing units and divisions. Cloud Manufacturing transforms and encapsulates physical manufacturing resources and manufacturing capabilities into a Cloud using technologies such as Cloud Computing, the Internet of Things (IoT) and virtualization, and then provides those resources and capabilities as services to users through an existing manufacturing network.
1. Research Motivation

The manufacturing industry is changing quickly because of the rapid growth of advanced information systems and networks, which allow collaboration around the world. There is also ever-increasing demand to provide service-oriented manufacturing [1], to distribute manufacturing resources and capabilities, and to increase productivity. According to a European Commission survey conducted in 2012 [6], 80% of organisations adopting Cloud Computing technology reduced their costs by 10-20%; adoption also enhanced mobile working (46%), productivity (41%) and standardisation (35%), and opened up new business opportunities (33%) and markets (32%).
The transformation of existing manufacturing systems into new, advanced and complex systems such as Cloud Manufacturing, which incorporate many state-of-the-art technologies, can be a major challenge for any enterprise. This transformation creates uncertainties in the new system that can affect the design, implementation and operation of the manufacturing model. Any chosen system must be able to perform in an uncertain environment [7], in which technical, political, economic and other factors all play a part. There is therefore a need to understand and define Cloud Manufacturing, and to identify and manage the uncertainties within it.
2. Cloud Manufacturing

Cloud Manufacturing is a new and emerging area of research within the field of Information Technology, and the number of studies discussing it in the literature is continuously increasing. Cloud Manufacturing can be defined as "a new service-oriented networked manufacturing model, and is an intersectional and mixed product of advanced information technology, manufacturing technology, Cloud computing and internet of things" [2].
Cloud Manufacturing offers four deployment models [8]: public Cloud, private Cloud, community Cloud and hybrid Cloud. A public Cloud offers services and infrastructure from an off-site, third-party service provider via the Internet; a private Cloud provides an enterprise with the same services and infrastructure as a public Cloud, but is managed internally, with only that one business using the Cloud services; a community Cloud is used and supported by several organizations that have mutual interests and concerns; and a hybrid Cloud combines a public Cloud and a private Cloud.
In addition, there are two delivery models [9,10]. The first depends on information technology resources (storage, software, servers and networks), whereas the second depends on manufacturing resources and capabilities (design, production, simulation and experimentation). The information technology model has three service delivery forms: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS), whereas the manufacturing resources and capabilities model delivers all the manufacturing resources and capabilities involved in the various aspects of manufacturing.
There are also three main categories of stakeholder in Cloud Manufacturing [1,9,10]: Cloud users, the consumers or organizations subscribed to a service; Cloud resource providers, who are responsible for delivering manufacturing resources and capabilities to Cloud users and the Cloud operator; and Cloud operators, who own and manage the Cloud Manufacturing system.
Cloud Manufacturing is supported by four main information technologies: Cloud Computing, which provides the computing and networking platform; the Internet of Things (IoT), which connects physical objects and automatically exchanges data over the Internet using supporting technologies [11]; virtualization, which creates a virtual version of a physical resource or capability; and service-oriented technologies, which enable communication between different types of software application. Finally, manufacturing networks and models allow manufacturing enterprises to communicate with suppliers and customers and to exchange detailed data with each other. These networks include Manufacturing Networks, Agile Networks and Manufacturing Grids.
3. Current implementation of Cloud Manufacturing

Globalization, advanced communication networks and new technologies have allowed a small number of newly established companies to implement some form of Cloud Manufacturing system in their business. 3D Creation Lab and Shapeways are examples of companies that use a Cloud Manufacturing system to provide 3D printing services online [12,13]. The idea is to allow individuals to become members of their platform, where they can share ideas, create customized products and gain access to 3D printing technology. The first step in the process is to design the product using any design software tool. Next, the design file is uploaded to the company's platform; the system then calculates the total cost of the product, and the member orders and pays for the service. The printing facility prepares and prints the product, which is finally shipped to the member.
PhotoBox specializes in digital photo services. Its online services include photo printing and the creation of photo books, cards, printed t-shirts, wall decor, photo mugs, personalised mobile phone cases and more [14]. First, the customer uploads their photos to PhotoBox's platform and selects the type of service required. Next, the platform allows the customer to take part in the design process by choosing the type, shape and color of the product. Finally, the customer pays and then receives the product through the mail.
CreateSpace is an on-demand publishing company, part of the Amazon group of companies. Its services include the publishing of books, music and video through Internet retail outlets, private websites, bookstores, retailers, libraries and academic institutions [15]. After joining the platform, a member can access a dashboard and choose tools to build and publish their book in different formats (print book, e-book, audio book). The platform guides the member through a series of steps covering preparation of the writing material, setup of the book (cover design, page color, ISBN number), proofing and distribution. Once all steps are complete, CreateSpace publishes the book and makes it available in one or more book stores.
MFG.com is a marketplace both for buyers who are looking for resources or capabilities for their product and for suppliers that provide materials or services [16]. The idea of MFG.com is to provide a platform linking enterprises to manufacturing resources and capabilities. Uploading CAD files, finding the right supplier, sending a quote, rating the supply service and tracking order delivery are all activities conducted through the MFG.com platform. 3Sourceful is another online marketplace that connects enterprises to a network of manufacturing resources and capabilities [17].
Quirky is a further example of Cloud Manufacturing [18]. Its business model is as follows: an individual submits an idea to Quirky; Quirky presents the idea to a group of industry experts, friends and community members, who decide whether or not to manufacture it; if Quirky agrees to manufacture the idea, the individual and community members become part of the design process; finally, Quirky manufactures the product and sells it through its website and other retailers.
Implementing some form of Cloud Manufacturing system has allowed these companies to reduce the time needed to manufacture a product or deliver a service, produce new inventions, reduce the cost of production and services, and create collaboration.
Complete Cloud Manufacturing implementation is needed for enterprises to receive its full benefits, but Cloud Manufacturing is a new concept in manufacturing and needs time to gain acceptance among enterprises.
4. Uncertainty

The world is rapidly becoming a more complex environment as a result of new technologies, advanced communication, new innovations and globalization. These changes lead to situations that are unknown and unpredictable, and they produce doubt through a lack of assurance and confidence. Such situations give rise to uncertainties and risks that need to be understood and dealt with in the real world. Uncertainties can influence the decision-making process [19]; the ability to understand and manage uncertainty can enhance decision making and allow enterprises to gain competitive advantage.
Although the term 'uncertainty' has existed since the time of the Ancient Greeks, there is still controversy among scholars about its actual meaning. According to [20], the various definitions of risk and uncertainty in the literature depend on the problem itself, and every discipline has its own definitions. Although many scholars treat uncertainty and risk as one concept, some researchers and decision makers prefer to distinguish between them. The following definitions of uncertainty and risk are considered the most appropriate for this research:
Uncertainty is "a state of having limited knowledge where it is impossible to exactly describe existing state or future outcome, more than one possible outcome" [21].
Risk is "a state of uncertainty where some possible outcomes have an undesired effect or significant loss" [21].
5. Uncertainty Management

According to Ward and Chapman [22], replacing the term 'risk' with 'uncertainty' in risk management processes can improve the identification and management of uncertainties in a project, since 'risk' is an ambiguous term often treated as a synonym for 'threat'. Moreover, uncertainty management focuses on the sources of uncertainty, the areas it affects and the options for responding to it. This research therefore uses the term "uncertainty management" instead of "risk management".
Uncertainty management processes have evolved, and continue to evolve, because of their importance to organizations. A number of institutions have published uncertainty management processes, including the Project Management Institute (PMBOK) [23], the British Standards Institution, the UK Association for Project Management and the International Organization for Standardization (ISO) [24]. In the following, a new approach for managing the uncertainties in Cloud Manufacturing is presented. It consists of four phases: identification, assessment, response and control.
• Identification: Identifying the types and sources of uncertainty in the project or system is the first stage of uncertainty management; documenting uncertainties early in the project is an essential step in building knowledge about them. Uncertainty can be identified by observation, measurement and recording of poorly understood initial conditions, random effects, uncontrollable effects and unknown effects. Other sources of uncertainty, such as incomplete information, lack of knowledge, vagueness and ambiguity, exist in different models and experiments. The Delphi technique, surveys, brainstorming, documentation reviews (academic literature and published industrial reports), SWOT analysis, diagramming techniques and checklists [23] are methods and techniques used to identify uncertainties. The result of this process is an uncertainty list containing a detailed description of the uncertainties of a project.
• Assessment: In this stage, each identified uncertainty is assessed by qualitative and quantitative analysis to determine its priority in the project, where prioritization reflects the impact and likelihood of the uncertainty and allows project members to concentrate on high-priority items. Qualitative analysis depends on the project team's judgement: a rating is assigned to each uncertainty based on the probability of it occurring and its impact on the project. In quantitative analysis, a numerical priority rating is assigned to each uncertainty, providing information on how to deal with it. Quantitative methods include sensitivity analysis, which examines how uncertainty in input parameter values propagates to the system output of interest [25]; Monte Carlo simulation, which relies on repeated random sampling of the uncertainties to obtain numerical results; and expected monetary value (EMV), which tests a range of outcomes under different scenarios [26]. The outcome of this process is a classification of the project's uncertainties as low, medium or high.
• Response: The purpose of this stage is to develop strategies for dealing with the uncertainties in the project. This helps decision makers handle both opportunities and threats by reducing threats and enhancing opportunities [23,26]. Avoidance, transference and mitigation are response strategies for negative uncertainties, whereas acceptance, exploitation, enhancement and sharing are response strategies for positive uncertainties.
• Control: Uncertainty management is an ongoing process and needs to be controlled throughout the project. The control process comprises several activities, such as applying response strategies, monitoring remaining uncertainties and identifying new ones.
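The quantitative side of the assessment stage can be illustrated with a small sketch. The following Python fragment estimates the expected monetary value (EMV) of a single uncertainty by Monte Carlo simulation, one of the methods named above; the probability and cost figures are purely hypothetical and not taken from the paper.

```python
import random

def monte_carlo_emv(probability, cost_low, cost_high, trials=10_000):
    """Estimate the expected monetary value (EMV) of one uncertainty
    by repeated random sampling: in each trial the uncertainty either
    occurs (with the given probability) or it does not, and when it
    occurs its cost is drawn uniformly from an assumed range."""
    total_cost = 0.0
    for _ in range(trials):
        if random.random() < probability:
            total_cost += random.uniform(cost_low, cost_high)
    return total_cost / trials

# Hypothetical figures: a Cloud outage with a 5% chance of occurring,
# costing between 10,000 and 50,000 (any currency unit) if it does.
random.seed(1)
emv = monte_carlo_emv(0.05, 10_000, 50_000)
# The estimate converges on the analytic EMV, 0.05 * 30,000 = 1,500.
```

A full assessment would repeat this for every entry in the uncertainty list and rank the results; sensitivity analysis can then vary the assumed probability and cost range to see which input drives the EMV most.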
6. Uncertainty List

After a comprehensive review of previous studies and published industrial reports, a summary of 12 uncertainty types was identified and grouped into three categories. To determine the uncertainties in Cloud Manufacturing, the focus was on the Cloud Computing and Cloud Manufacturing literature and on industrial reports. The selected industrial reports came from well-known organisations concerned with Cloud Computing technology, including the Cloud Security Alliance (CSA) [27], the National Institute of Standards and Technology (NIST) [28], the European Network and Information Security Agency (ENISA) [29], the European Commission [30] and the IBM Centre for the Business of Government [31]. The resulting list can be seen in Table 1.

6.1. Technical and security

Security uncertainties, including hackers, insecure interfaces and the integrity of the system, are the major security concerns in the Cloud environment, and many enterprises are unwilling to adopt the technology because of them. A survey conducted in 2012 by the Intel IT Centre [32] showed that 87 percent of IT professionals from different countries (US, UK, Germany, China) were concerned about security issues in public Clouds, and 69 percent in private Clouds. The complexity of Cloud Manufacturing can also create a fertile environment for security breaches, with loss of control over data and applications that are critical to the enterprise. Security weaknesses include anonymous access, reusable tokens or passwords, clear-text authentication or transmission of content, inflexible access controls or improper authorizations, limited monitoring and logging capabilities, and unknown service or API dependencies.
Table 1. Uncertainty list (category, uncertainty type, short description)

Technical and Security
- Security: Password and key cracking, launching dynamic attack points, hosting malicious data, botnet command and control, building rainbow tables, Trojan horses, back doors, viruses and worms, sniffing.
- Availability: Network outage and system failures, or inability to access Cloud services due to lack of network connectivity.
- Manufacturing resources and capabilities: Transforming manufacturing resources and capabilities into the Cloud.
- Interoperability: Ability to work together with different information systems, more than one Cloud, and different software applications.

Legal, Ethics and Regulations
- Privacy: Data control, data location, data disclosure, data transition.
- Quality of Service: Providing a guarantee of performance, availability, security, reliability and dependability.
- Transparency: Lack of transparency into the provider's infrastructure, or not revealing how the provider grants employees access to physical and virtual assets.
- Vendor lock-in: Inability of a customer to move their data and/or programs away from a Cloud computing service provider.

Economics
- Setting prices: Changing Cloud monthly service fees due to access to new technology and the need to consume more Cloud resources.
- Bandwidth: Increased cost of using network communication.
- Migration into the Cloud: Cost of moving data and workloads into the Cloud.
- Consumption: User consumption-based billing and metering.
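To make the qualitative assessment stage of Section 5 concrete for the uncertainty types of Table 1, a probability-impact rating can be sketched as follows. The 1-5 scales, the score thresholds and the example ratings are illustrative assumptions, not values given in this paper.

```python
def classify(probability, impact):
    """Map a probability rating and an impact rating (each on an
    assumed 1-5 scale) to a low/medium/high priority class via
    their product, as in a simple probability-impact matrix."""
    score = probability * impact
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Hypothetical (probability, impact) ratings for a few uncertainty
# types from Table 1, as a project team might assign them.
ratings = {
    "Security": (4, 5),        # likely and severe
    "Availability": (3, 4),
    "Vendor lock-in": (2, 3),
    "Bandwidth": (2, 2),
}

priorities = {name: classify(p, i) for name, (p, i) in ratings.items()}
# Security scores 20 ("high"); Bandwidth scores 4 ("low").
```

The resulting classes would then feed the response stage, where high-priority items receive avoidance or mitigation strategies first.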
Because Cloud Manufacturing systems are complex and require numerous advanced technologies and networks to be integrated efficiently, many technical uncertainties exist in the Cloud. Among them are transferring manufacturing resources and capabilities into the Cloud; network outages and system failures (availability); and the ability to work with different information systems, more than one Cloud, and different software applications (interoperability). Although many Cloud providers guarantee the availability of their services [1], incidents do occur, such as the three-hour Gmail outage in 2009 and the six-hour Salesforce shutdown in 2008 [33]. Such incidents create doubts about the Cloud's ability to hold critical data and applications for enterprises. Cloud providers guarantee to deliver services to customers under any circumstances, but sometimes enterprises cannot access their data and Cloud resources because of network outages and system failures. An outage may be permanent, when a provider goes out of business, or temporary, resulting from a failure in the provider's systems [34]. Either way, failure to provide data and Cloud resources can be a disaster for an enterprise that cannot function without them.
Both manufacturing resources and manufacturing capabilities are core components of a Cloud Manufacturing system, and many technologies (such as the Internet of Things and wireless sensors) are needed to coordinate between the Cloud Manufacturing system and the manufacturing process. The amount of data collected from different equipment and tools can overload the network, making data exchange in the Cloud Manufacturing system very slow. More storage space may also be needed in the Cloud for real-time manufacturing resource data, requiring more processing resources to handle it. All of these issues can result in Cloud Manufacturing system failure.
requiring more process resources from the Cloud to handle this data. All those issues can result in Cloud Manufacturing system failure. The aim of Cloud Manufacturing is to share manufacturing resources and capabilities between different parties (manufacturing units, suppliers, other enterprises and customers). However, managing different information systems and different manufacturing systems under a Cloud Manufacturing umbrella can be a difficult task for both enterprises and Cloud providers. For example, legacy systems are substantial and irreplaceable in many enterprises and it is costly and time consuming to put them into the Cloud. Moreover, many Cloud systems’ architectures are designed as closed, which prohibits interaction with other Cloud systems. 6.2. Legal, ethics and regulations Differences in terms of privacy, lack in control of data and the problem that the location of the data in the Cloud might not be known may create conflicts with regulations and laws in an enterprise’s country [35]. In addition, private enterprise data that exist in enterprise’s premises might be accessible through Cloud service [36]. An example of privacy concerns the European Union [37] and the US is that these counties have strict laws which prohibit moving certain types of data outside the enterprise’s country. Availability, performance, and quality are the major concerns when enterprises use Cloud services. The relationship between Cloud providers and their customers needs to be more efficient and effective by using standards, agreements and regulations to clarify the responsibilities and duties of each party in a Cloud Manufacturing system. The Cloud providers need to reassure their customers about their services by using Service Level Agreements (SLA) [1]. Also, SLA can allow more transparency into the Cloud by providing standards between Cloud provider and their clients to uncover what is happening in the Cloud [38]. 
However, there is as yet no official standard for Cloud Computing technology, although in 2011 the Cloud Security Alliance (CSA) announced the ongoing development of Cloud security and privacy standards in collaboration with the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The standard is expected to take the form of a guideline or code of practice for Cloud Computing technology.
Moreover, Cloud providers can create a vendor lock-in situation: each provider has its own way of running the Cloud, which makes it difficult for enterprises to switch to other providers or to transfer data back to their own premises [39]. This limits the choices available to enterprises when selecting between Cloud providers in the market or moving data and services between providers.

6.3. Economics

From an economic perspective, the purpose of using Cloud Manufacturing is to reduce the cost of manufacturing services over the whole manufacturing lifecycle [8]. Cloud technology allows enterprises, especially SMEs, to use computing resources and capabilities at low cost. Research conducted by Khajeh-Hosseini et al. [40] indicates that implementing Cloud technology in an enterprise over five years can cost 37% less than traditional systems.
However, implementing Cloud Manufacturing can raise the cost of the network communication (bandwidth) used to send and receive data from the Cloud. Moreover, using Cloud Manufacturing can be costly for large enterprises, whose large projects need more Cloud resources [39]. There is also a need for consumption management to trace all activities and calculate the consumption of each user in the Cloud [1].
7. Conclusions

Cloud Manufacturing is a combination of the latest technologies and advanced manufacturing networks that can provide manufacturing resources and capabilities as services to enterprises in global markets. Uncertainties can be a major obstacle to the implementation of Cloud Manufacturing in the manufacturing industry. Moreover, there is as yet no full real-world implementation of Cloud Manufacturing from which its full benefits, and the role of its uncertainties, could be observed. Companies that implement some form of Cloud Manufacturing as a business model and depend on the web to provide products and services to consumers include CreateSpace, Quirky, MFG.com, PhotoBox, 3Sourceful, 3D Creation Lab and Shapeways. This paper has focused on security, technical, organizational and economic uncertainties, and has proposed an uncertainty management approach for Cloud Manufacturing. Suggested future research is to identify the uncertainties for each deployment model of Cloud Manufacturing, and to investigate how to reduce uncertainties in Cloud Manufacturing.
References

[1] X. Xu, From cloud computing to cloud manufacturing, Robotics and Computer-Integrated Manufacturing 28 (1) (2012), 75-86.
[2] X. Gao, M. Yang, Y. Liu, and X. Hou, Conceptual model of multi-agent business collaboration based on cloud workflow, Journal of Theoretical and Applied Information Technology 48 (1) (2013), 108-112.
[3] Y. Laili, F. Tao, L. Zhang, and B. R. Sarker, A study of optimal allocation of computing resources in cloud manufacturing systems, The International Journal of Advanced Manufacturing Technology 63 (2012), 671-690.
[4] O. F. Valilai and M. Houshmand, A collaborative and integrated platform to support distributed manufacturing system using a service-oriented approach based on cloud computing paradigm, Robotics and Computer-Integrated Manufacturing 29 (1) (2013), 110-127.
[5] Y. Zhang and Y. Jin, Research on knowledge management for group enterprise in cloud manufacturing, in: Proceedings - 2012 International Conference on Computer Science and Service System (2012), 1946-1950.
[6] European Commission, Small and medium-sized enterprises (SMEs), (Accessed on June 26, 2013), http://ec.europa.eu/enterprise/policies/sme/index_en.htm.
[7] S. C. L. Koh and S. M. Saad, Managing uncertainty in ERP-controlled manufacturing environments in SMEs, International Journal of Production Economics 101 (1) (2006), 109-127.
[8] F. Tao, L. Zhang, V. C. Venkatesh, Y. Luo, and Y. Cheng, Cloud manufacturing: a computing and service-oriented manufacturing model, Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 225 (10) (2011), 1969-1976.
[9] X. V. Wang and X. W. Xu, An interoperable solution for Cloud manufacturing, Robotics and Computer-Integrated Manufacturing 29 (4) (2013), 232-247.
[10] D. Wu, M. J. Greer, D. W. Rosen, and D. Schaefer, Cloud manufacturing: Strategic vision and state-of-the-art, Journal of Manufacturing Systems 32 (4) (2013), 564-579.
[11] L. Atzori, A. Iera, and G. Morabito, The Internet of Things: A survey, Computer Networks 54 (15) (2010), 2787-2805.
[12] 3D Creation Lab, About 3D Creation Lab, (Accessed on September 19, 2013), http://www.3dcreationlab.co.uk/about_3d_printing_services.php.
[13] Shapeways, About Us, (Accessed on June 22, 2013), http://www.shapeways.com/about?li=footer.
[14] PhotoBox, About Us, (Accessed on June 18, 2013), http://www.photobox.co.uk/content/about-us.
[15] CreateSpace, About Us, (Accessed on September 10, 2013), https://www.createspace.com/AboutUs.jsp.
[16] MFG.com, About MFG.com, (Accessed on June 18, 2013), http://www.mfg.com/about-mfgcom.
[17] 3Sourceful, About Us, (Accessed on June 18, 2013), http://rbhax.com/about.
[18] Quirky, How It Works, (Accessed on June 18, 2013), http://www.quirky.com/how-it-works.
[19] J. A. Erkoyuncu, C. Durugbo, and R. Roy, Identifying uncertainties for industrial service delivery: a systems approach, International Journal of Production Research 51 (21) (2013), 6295-6315.
[20] S. Samson, J. A. Reneke, and M. M. Wiecek, A review of different perspectives on uncertainty and risk and an alternative modeling paradigm, Reliability Engineering & System Safety 94 (2) (2009), 558-567.
[21] D. W. Hubbard, How to Measure Anything: Finding the Value of "Intangibles" in Business, 2nd ed., John Wiley & Sons, Inc., New Jersey, USA, 2010.
[22] S. Ward and C. Chapman, Transforming project risk management into project uncertainty management, International Journal of Project Management 21 (2) (2003), 97-105.
[23] Project Management Institute, A Guide to the Project Management Body of Knowledge (PMBOK Guide), 5th ed., Project Management Institute, USA, 2013.
[24] ISO/DIS 31000, Risk management - Principles and guidelines on implementation, International Organization for Standardization, 2009.
[25] D. H. Oughton, A. Agüero, R. Avila, J. E. Brown, D. Copplestone, and M. Gilek, Addressing uncertainties in the ERICA Integrated Approach, Journal of Environmental Radioactivity 99 (9) (2008), 1384-1392.
[26] J. Raftery, Risk Analysis in Project Management, 2nd ed., Taylor & Francis, London, 2003.
[27] Cloud Security Alliance (CSA), Top Threats to Cloud Computing, Cloud Security Alliance (CSA), 2010.
[28] M. Hogan, F. Liu, A. Sokol, and J. Tong, NIST Cloud Computing Standards Roadmap, NIST Special Publication 500-291, 2011.
[29] T. Haeberlen and L. Dupré, Cloud Computing: Benefits, risks and recommendations for information security, European Union Agency for Network and Information Security (ENISA) Publication, Greece, 2012.
[30] European Commission, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, European Commission, Brussels, Belgium, 2012.
[31] D. C. Wyld and R. Maurin, Moving to the Cloud: An Introduction to Cloud Computing in Government, E-Government Series, IBM Centre for the Business of Government, 2009.
[32] Intel IT Centre, What's Holding Back the Cloud?, Intel IT Centre, 2012.
[33] N. A. Ogunde and J. Mehnen, Factors Affecting Cloud Technology Adoption: Potential User's Perspective, in: W. Li and J. Mehnen (eds.), Cloud Manufacturing, Springer, London, 2013, 77-89.
[34] W. Kim, S. Kim, E. Lee, and S. Lee, Adoption issues for cloud computing, in: Proceedings of the 7th International Conference on Advances in Mobile Computing and Multimedia (2009), 2-5.
[35] S. Marston, Z. Li, S. Bandyopadhyay, J. Zhang, and A. Ghalsasi, Cloud computing - The business perspective, Decision Support Systems 51 (1) (2011), 176-189.
[36] S. Sudha and V. M. Viswanatham, Addressing security and privacy issues in Cloud Computing, Journal of Theoretical and Applied Information Technology 48 (2) (2013), 708-719.
[37] Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, Official Journal 281 (1995), 31-50.
[38] S. Ramgovind, M. Eloff, and E. Smith, The management of security in cloud computing, in: Proceedings of the ISSA Conference on Information Security for South Africa (2010), 1-7.
[39] N. A. Sultan, Reaching for the "cloud": How SMEs can manage, International Journal of Information Management 31 (3) (2011), 272-278.
[40] A. Khajeh-Hosseini, D. Greenwood, J. Smith, and I. Sommerville, The Cloud Adoption Toolkit: Addressing the challenges of cloud adoption in enterprise, Cornell University Library (2010), http://arxiv.org/abs/1003.3866.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-307
Service-Oriented Architecture for Cloud Application Development

Hind BENFENATKI a, Gavin KEMP a,1, Catarina FERREIRA DA SILVA a, Aïcha-Nabila BENHARKAT a and Parisa GHODOUS a
a University of Lyon 1, INSA-Lyon, LIRIS, CNRS, UMR5205, F-69621, France
Abstract. Software engineering has used several approaches for application development, such as service-oriented approaches. Nowadays, with the advent of cloud computing and the convergence toward "Everything as a Service", application development is moving to a new paradigm that abstracts the underlying architecture and infrastructure. The literature provides some work describing frameworks and architectures for cloud software development, but none that covers the whole application development lifecycle. Furthermore, these papers are mainly dedicated to developers and do not provide business stakeholders with a method or an easy-to-use service to deploy their business applications without the help of an IT professional. Our work fits into the perspective of defining a Service-Oriented Architecture for Cloud Application Development. The architecture we propose is designed for non-IT-professional users. It avoids the huge technical background needed for cloud application development by automating the development process, avoids PaaS dependency, and supports implicit collaboration by reusing and composing services. This article presents a proposed architecture for this objective as well as an example of its implementation. Keywords. Cloud computing; Business application development; Requirement expression; Linked services; Service reuse
Introduction

The increasing complexity of software systems and the constant expansion of new requirements require the cooperation of many professional skills. With Web 2.0 and cloud computing, collaboration through the use of third-party services has emerged; indeed, the composition of services allows implicit collaboration between different software entities and thus between different partners. Web and cloud services are a popular medium for application development and deployment on the cloud. Modern enterprises are moving towards cloud service-oriented architectures to promote reuse and interoperability of services, and to benefit from the advantages of cloud computing, such as small initial investment, no license acquisition, accessibility from anywhere at any time, high availability, and so on. Cloud applications are nowadays developed on Platforms as a Service (PaaS) and deployed on virtual infrastructures. Cloud applications are referred to as Software as a Service (SaaS), which is service-oriented and distributed. Application development differs from one PaaS to another. In fact, each PaaS offers several Application
[email protected]
H. Benfenatki et al. / Service-Oriented Architecture for Cloud Application Development
Programming Interfaces (APIs) and has its own architecture for storing data and deploying instances. The underlying infrastructure is abstracted from the user. Several research works describing cloud application development are mainly dedicated to developers and do not allow a non-IT professional to develop an application. In this paper, we describe an architecture for business application development in cloud environments that allows business stakeholders to proceed to automatic development and promotes service reuse. The service discovery and composition processes are driven by the user's request, which describes the functional and non-functional requirements of the business application. Functional requirements describe service features. Non-functional requirements describe user preferences and Quality of Service (QoS) parameters. We also describe the implementation of our approach. The rest of this paper is organized as follows. Section 1 describes work related to existing cloud software development approaches. Section 2 presents the architecture of our approach and Section 3 its implementation. Section 4 draws final conclusions and describes our future work.
1. Related Work

In the cloud computing paradigm, there is a lack of complete application development methodologies. However, several partial approaches for the development of applications exist in the literature. In [1], the authors propose an approach that uses Domain Specific Languages (DSLs) within the process of development and deployment of software on the cloud. The main drawback of this approach is the considerable time consumed by DSL development in the early phase of their approach. In [2], the authors describe a methodology for cloud-native application design, which considers the CAP (Consistency, Availability and network Partitioning tolerance) properties, and present a framework instantiating this methodology. The main limitation of this methodology is that it focuses only on the CAP properties, to the detriment of QoS parameters (such as response time and security indicators), when designing an application and choosing cloud services, and it does not describe how the development and deployment of the application are done. Giove and colleagues [3] propose a library called CPIM (Cloud Provider Independent Model) offering PaaS-level services such as message queues, NoSQL services and caching services, abstracting from the details that are specific to the underlying PaaS provider and allowing an application developer to implement his application in a PaaS-independent way. At deployment time, the developer specifies the PaaS to be used. At runtime, the CPIM library acts as a mediator between the application code and the services offered by the PaaS. Our work will reuse and integrate this interesting approach for developing undiscovered services. In [4], the authors propose the MODACLOUDS system, a European project [5] that uses the principle of MDD (Model Driven Development) for the development of applications on the cloud. Applications are designed at a high level of abstraction from the target cloud, making them capable of operating on multiple cloud platforms.
The main limitation of this work is that the cloud service selection does not take the platforms' APIs and services into consideration when choosing the best PaaS cloud provider, but only their QoS parameters.
The work proposed in [6] describes a SaaS Development Life Cycle (SaaSDLC). The authors present an approach that promotes evaluation of the cloud provider based on the capabilities of a platform. The SaaSDLC does not consider reuse of cloud services. It promotes development for a specific platform, making application portability more difficult. In [7], the authors advocate the intervention of the cloud provider in the Agile eXtreme Programming software development process, especially in the planning, designing, building, testing and deployment phases, to mitigate the challenges associated with cloud software development and make it more advantageous. The authors integrate the notion of roles for the various stakeholders in the agile development process for cloud applications, but do not consider the other characteristics of cloud applications that can influence the development process. In [8], the authors describe the Service-Oriented Software Development Cloud (SOSDC), a cloud platform for developing service-oriented software and a dynamic hosting environment. The SOSDC adopts an architecture covering the three levels of cloud services. The IaaS level is primarily responsible for providing infrastructure resources. The PaaS level provides an App Engine for testing, implementing and monitoring the deployed application without having to consider the technical details. The SaaS level aims to provide an "Online Service-Oriented Software Development Environment" and includes the two following modules: Xchange, a service supporting shared web services, and MyCloud, a personal development environment for each developer. Once an application is built, the developer may request an App Engine hosting environment by specifying the deployment requirements. This approach aims to supply a dynamic development environment by providing on-demand appliances for developers, but it is dedicated to a specific platform and does not exploit public cloud platforms.
In summary, the state-of-the-art analysis shows that most approaches in the area of cloud application development are dedicated to developers. The approach we propose in the next section (i) obeys the SOA principles and techniques that promote reusability, loose coupling and composability of the underlying Everything as a Service (XaaS); (ii) maintains interoperability through the use of cloud services and the modelling of the functionalities to be developed; (iii) meets the requirements of the distributed nature of the cloud; (iv) aims to make software development more accessible to non-IT professionals; and (v) is independent of any specific platform.
2. The Proposed Architecture

This section describes our architecture for cloud application development (Figure 1). This architecture follows the MADONA methodology (Methodology for semi-Automatic Development of clOud-based busiNess Applications) [9], which is based on SOA principles and covers the whole application development lifecycle, from requirements expression to the test and validation phases. It combines service discovery and composition with service development using cloud platforms when the discovery process does not return a service meeting the user's requirements. We use a cloud service orchestration tool which allows easy deployment and dependency management of the deployed services. The approach we propose reduces cloud provider dependency by reusing cloud business services, and
developing undiscovered services with MDD, abstracting from cloud platform constraints. The primary goal of our approach is to allow a business stakeholder to automatically develop a cloud business application simply by describing his/her business requirements via a web form.

2.1. Project Management

The business stakeholder enters his requirements in a web form to generate a file, based on Linked USDL [10], [11], [12], describing functional and non-functional information on the needed cloud business application; we call this file a .rival file.
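To make the idea concrete, a .rival requirements file could take roughly the following Turtle shape. This is purely an illustrative sketch: the `rival:` namespace and every property name below are hypothetical placeholders of ours, not actual Linked USDL terms, since the paper does not publish the .rival vocabulary.

```turtle
@prefix : <http://example.org/request#> .
@prefix rival: <http://example.org/rival#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

:myRequest a rival:ApplicationRequest ;
    rival:desiredFunction "blog engine" ;       # functional requirement
    rival:preferredLocation "Europe" ;          # non-functional: user preference
    rival:currency "EUR" ;
    rival:maxMonthlyPrice "50"^^xsd:decimal ;
    rival:qosCoefficient [ rival:attribute "availability" ; rival:weight 2 ] .
```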
Figure 1. Architecture for cloud application development
2.2. Discovery as a Service

Service discovery consists of matching the user's requirements with the cloud marketplace's services. The user's requirements are expressed via .rival files, and the marketplace's services are described using .usdl files based on Linked USDL principles. The marketplace's service descriptions include the following information: (i) the service name; (ii) the service description; (iii) the service classification (SaaS, PaaS, IaaS); (iv) the hard composition constraints, i.e. the specific services that must be composed with the one described, e.g. WordPress requires MySQL as its database in order to function; (v) the soft composition constraints, i.e. the family of services that have to be composed with the one described, e.g. SugarCRM has to be composed with a database but has no imposed database, so it can be composed with, for example, MySQL or Oracle; and (vi) the composition possibilities, i.e. the services that can be composed with the one described, e.g. a CRM can be composed with a mailing service.

2.2.1. SaaS discovery
SaaS discovery consists of discovering a SaaS from the marketplace that meets the user's requirements. As illustrated in Figure 2, SaaS discovery follows these steps: first, we check whether the stakeholder has a preferred provider for a given service; if so, we
select the desired function supplied by this same provider; otherwise, we check the matching between requirements and services according to (i) user preferences, namely service location and purchase details, (ii) supplied functions and composition constraints, and (iii) QoS requirements. If the deployment of a matched service requires the deployment of another service, the SaaS discovery process restarts for the service that has to be composed with the one that matched the user's desired function; e.g. in the context of a CRM SaaS, when matching several CRM services, we note (from the .usdl file) that we have to compose a database with the CRM service, so database service discovery is then performed. Our SaaS marketplace is represented by the services of a cloud services orchestration tool, because such a tool simplifies dependency management and service deployment. For our preliminary tests, we use Juju [13], a cloud services orchestration tool, which uses charms for the deployment and dependency management of the supplied services. We describe the Juju marketplace services using .usdl files, which are then used for matching the user's requirements.
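The recursive shape of this discovery loop can be sketched as follows. This is a deliberately simplified stand-in (the real system matches .rival requirements against .usdl descriptions via SPARQL and also checks preferences and QoS); the service names come from the paper's examples, while the data structures are our own illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceDescription:
    """Simplified stand-in for one marketplace .usdl entry."""
    name: str
    category: str                                        # supplied function
    hard_constraints: list = field(default_factory=list)  # exact services required
    soft_constraints: list = field(default_factory=list)  # service families required

MARKETPLACE = [
    ServiceDescription("WordPress", "blog", hard_constraints=["MySQL"]),
    ServiceDescription("SugarCRM", "crm", soft_constraints=["database"]),
    ServiceDescription("MySQL", "database"),
    ServiceDescription("Oracle", "database"),
]

def discover(function, marketplace):
    """Return (service, dependencies) pairs for a desired function,
    pulling in the services that each candidate's constraints require."""
    results = []
    for svc in marketplace:
        if svc.category != function:
            continue
        # Hard constraints name a specific service; soft ones name a family.
        deps = [s for s in marketplace if s.name in svc.hard_constraints]
        for family in svc.soft_constraints:
            deps += [s for s in marketplace if s.category == family]
        results.append((svc, deps))
    return results

candidates = discover("blog", MARKETPLACE)
# WordPress is found, with MySQL pulled in through its hard constraint.
```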
Figure 2. Service Discovery process
2.2.2. PaaS discovery
PaaS discovery consists of discovering a PaaS for the deployment of the resulting cloud business application and/or for the development and deployment of undiscovered services, i.e. when no matched service is found for a desired function. PaaS discovery (Figure 2) for the development and deployment of undiscovered services is done by matching the user requirements (.rival files) with several PaaS descriptions (.usdl files) according to user preferences, the APIs offered by the PaaS, and QoS requirements. PaaS discovery for the deployment of the resulting cloud business application is done by matching the user requirements (.rival files) with several PaaS descriptions (.usdl files) according to user preferences, the service orchestration tool supported by the PaaS (allowing easy service composition), and QoS requirements.

2.2.3. IaaS discovery
A cloud infrastructure is selected if the PaaS discovery process for the deployment of the business application does not return a matched platform supplying the needed
services. The automatic IaaS selection is performed according to the user's preferences and QoS requirements. We consider all matched marketplace services for a given desired service as equivalent. We rank matched services according to QoS indicators (response time, availability, accessibility, security, etc.) and the assigned QoS coefficients. In fact, the user assigns coefficients to the QoS parameters based on his own priorities. We use a "history of service invocations" that provides service QoS parameters according to previous service invocations. Service ranking and selection is described in the next section.

2.3. Service selection

The service ranking is calculated based on the coefficients associated with the QoS attributes, assigned by the user based on his priorities such that the sum of all the coefficients equals 10. The service with the highest rank will be selected. Two cases are distinguished. Let $S_i$ be a service and $Q_j$ a QoS indicator:

$$R(S_i, Q_j) = \begin{cases} R_{up} & \text{(case 1)} \\ R_{down} & \text{(case 2)} \end{cases} \quad (1)$$

Case 1: the higher the value of the attribute, the better the service, for instance service availability. In this case the rank associated with this attribute for a given provider is calculated as follows:

$$R_{up} = \frac{Value}{Max} \times Coefficient \quad (2)$$

where $Value$ is the value of the attribute for the given provider, $Max$ is the maximum value of the attribute among all providers, and $Coefficient$ is the coefficient previously assigned to the attribute by the stakeholder.

Case 2: the smaller the value of the attribute, the better the service, for instance the response time. In this case the rank associated with this attribute for a given provider is calculated as follows:

$$R_{down} = \left(1 - \frac{Value}{Max}\right) \times Coefficient \quad (3)$$

Let $R(S_i)$ be the global ranking over all indicators for a service $S_i$:

$$R(S_i) = \sum_{j=1}^{n} R(S_i, Q_j) \quad (4)$$

For equivalent services, the service with the highest rank is selected. Service discovery continues even after deployment, to allow generating new compositions that may be better than the one deployed.

2.4. Service Development as a Service

Undiscovered services, i.e. services for which no match has been found for a desired function, are developed using an MDD approach following four key steps:
• Modelling: Undiscovered services are modelled with UML notation, abstracting from the deployment PaaS and making the model reusable and PaaS-independent.
• Code generation: A PaaS platform is selected for business service deployment according to the PaaS selection method described in section 2.2.2. PaaS-dedicated code is generated from the UML diagrams.
• Coding: The generated classes have to be completed in order to achieve the desired functionality. This has to be done with the intervention of a developer.
• Deployment: At execution time, the developed service is deployed on a preselected PaaS so that it can be invoked.
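The "Code generation" step can be pictured as a template expansion from a platform-independent model. The sketch below is our own minimal illustration, not the authors' generator; the model contents (`InvoiceService`, `createInvoice`) are hypothetical examples.

```python
# Minimal sketch: render a class skeleton from a platform-independent
# model of the undiscovered service. A real generator would emit
# PaaS-dedicated code from UML diagrams.
CLASS_TEMPLATE = "public class {name} {{\n{methods}\n}}"
METHOD_TEMPLATE = ("    public {rtype} {mname}() {{\n"
                   "        // to be completed by a developer (Coding step)\n"
                   "    }}")

def generate_skeleton(model):
    methods = "\n".join(
        METHOD_TEMPLATE.format(rtype=op["returns"], mname=op["name"])
        for op in model["operations"])
    return CLASS_TEMPLATE.format(name=model["class"], methods=methods)

invoice_model = {"class": "InvoiceService",
                 "operations": [{"name": "createInvoice", "returns": "String"}]}
skeleton = generate_skeleton(invoice_model)
```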
2.5. Composition as a Service

We compose the selected and developed cloud services. We use a cloud service orchestration tool which allows easy deployment and dependency management of services. Before the composition is done, we analyse the composability of the selected services, starting with those having the highest rank. The composability study takes into account the services that have to be composed and the ones that can be composed with a given service. The composition constraints and possibilities are described for every marketplace service through its .usdl file. With our approach, these constraints are captured in the service description (.usdl file) and not in the user requirements, in order to automate service dependency management and to spare the user from detailing his requirements. The composability study helps us to generate the composition workflows and their corresponding scripts by considering affinities and constraints between services, as well as global criteria such as the maximum cost of the application deployment. The role of the generated script is explained in section 2.6. Several versions of the composition workflow are stored in a VCS (Version Control System). The composition with the highest rank is deployed on a preselected PaaS or IaaS. We keep the other versions in case the stakeholder does not validate the deployed business application after the tests. For the selected workflow, several web interfaces allowing the configuration of the generated application are displayed to the user, so that he can personalize the application by integrating information related to his business, such as choosing a logo or a name for his service.

2.6. Automatic deployment

The deployment process concerns the deployment of the resulting business application composing the discovered and developed services.
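Since a soft constraint admits several interchangeable services, the composability study fans out into one candidate workflow per admissible combination. A minimal sketch of that expansion, under our own simplified representation of constraints, might look like this:

```python
from itertools import product

def expand_workflows(root, hard_deps, soft_options):
    """Generate one candidate composition workflow per combination of
    admissible soft-constraint choices; hard dependencies are fixed.
    `soft_options` is a list of alternative-service lists, one per
    soft constraint."""
    return [[root] + hard_deps + list(combo) for combo in product(*soft_options)]

# WordPress: one hard constraint, so a single workflow results.
wp = expand_workflows("WordPress", ["MySQL"], [])
# SugarCRM: a soft constraint on the database family, so two workflows.
crm = expand_workflows("SugarCRM", [], [["MySQL", "Oracle"]])
```

Each resulting workflow is then ranked (section 2.3) and kept in the VCS as a fallback.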
Two types of deployment are considered: 1) on a preselected PaaS, the deployment is done by injecting a script deploying the composed services involved in the selected workflow; 2) on a preselected IaaS, the deployment is done by injecting a script installing the orchestration services environment and deploying the composed services involved in the selected workflow. In both cases, the script consists of command lines for a dedicated cloud services orchestration tool (Juju in our case) allowing service manipulation (deployment, dependency management). Juju environments can be bootstrapped on many clouds: Amazon [14], OpenStack [15], and so on. A script specific to the platform can be generated. Redeployment can occur after the test and validation phase if the stakeholder does not validate the resulting business application after the tests (performance and conformity) have occurred. In this case, the resources allocated for the previous deployment of the business application are freed and another composition is deployed.
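The generated script can be imagined as a plain sequence of orchestration commands derived from one workflow. The sketch below is illustrative: the charm names are placeholders, though `juju deploy` and `juju add-relation` are the classic Juju commands for deploying a charm and wiring a dependency.

```python
def juju_script(workflow, relations):
    """Emit orchestration command lines for one composition workflow.
    `workflow` lists the services to deploy; `relations` lists the
    (service, dependency) pairs to connect."""
    lines = ["juju deploy %s" % s.lower() for s in workflow]
    lines += ["juju add-relation %s %s" % (a.lower(), b.lower())
              for a, b in relations]
    return "\n".join(lines)

script = juju_script(["WordPress", "MySQL"], [("WordPress", "MySQL")])
# juju deploy wordpress
# juju deploy mysql
# juju add-relation wordpress mysql
```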
2.7. Tests and Validation of the deployed business application

Validation is done by the business stakeholder after testing the deployed business application. Two types of test are considered: performance tests and conformity tests. Performance tests are done automatically using the Gatling tool [16], an efficient open-source load testing tool. Conformity tests are done by the business stakeholder, who tests the correspondence between his/her requirements and the resulting business application. After the tests, the stakeholder notifies the system of his/her positive or negative validation result. If the validation result is negative, another composition from the VCS is deployed, the tests are performed again, and the stakeholder again notifies his/her validation result. This cycle is repeated until stakeholder satisfaction is achieved or no other composition is possible.
3. Implementation of our architecture

The implementation of our approach is done using the Grails framework and an MVC (Model View Controller) architecture, coded in Java, GSP and Groovy.

3.1. Project management
The business stakeholder enters, on a web form, the description of the needed service and the requirements on location, currency, price, payment, provider and QoS. The .rival file is generated using the Jena API. This file is then stored for future use or is immediately read for service discovery.

3.2. Discovery as a Service
Service discovery consists of extracting the requirement information from the .rival file using a SPARQL query, a service provided by the jena-arq API; then adding the outputs to a new SPARQL query and applying it to all the known .usdl service description files; and then selecting only those that return a value for our request. This second SPARQL request also returns the hard constraints and soft constraints needed for service composition.

3.3. Composition as a Service
Services often do not work alone and have to use other services. Thus, WordPress must work with a MySQL database; this is defined as a hard constraint because it is an imposition from the WordPress service. On the other hand, SugarCRM must have a database; this is defined as a soft constraint because there is a choice of database, e.g. MySQL or Oracle. If the stakeholder needs a blog engine (Figure 3), the .rival file is generated and matched against all known .usdl files. This returns WordPress and BlogEngine.NET, which both use hard constraints; thus the information extracted from the constraints goes through a strict comparison with the usdl:name of the .usdl files, returning MySQL for WordPress and Oracle for BlogEngine.NET. This process is repeated as long as constraints remain. If the stakeholder needs a CRM service (Figure 4), the .rival file is generated and matched against all known .usdl files.
This returns SugarCRM and VTigerCRM, which both use soft constraints; thus the information extracted from the constraints goes through a
flexible comparison with the usdl:hasDescription of the .usdl files, and thus both return MySQL and Oracle. This process is also repeated as long as constraints remain.
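The strict comparison for a hard constraint could correspond to a SPARQL pattern of roughly the following shape. This is a sketch of ours, assuming a `usdl:` prefix bound to the Linked USDL core vocabulary; the authors' actual queries are not published, and only the `usdl:name` property is taken from the paper's text.

```sparql
# Hypothetical hard-constraint match: find the marketplace service
# whose usdl:name equals the constraint value extracted for WordPress.
PREFIX usdl: <http://www.linked-usdl.org/ns/usdl-core#>
SELECT ?service WHERE {
  ?service usdl:name ?name .
  FILTER (str(?name) = "MySQL")
}
```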
Figure 3. Composition tree for a blog engine
Figure 4. Composition Tree for a CRM service
From the service composition, several workflows can be generated. These workflows need to be ranked to select the best one depending on the user's QoS requirements.

3.4. Service Ranking
Figure 5. Ranking the workflows for a blogging engine
Once the workflows are generated, the QoS files of the individual services in a workflow are read and added to the total score of each workflow. Each QoS parameter is scored according to the equations defined in section 2.3. In the case of the blogging engine, the QoS coefficients are defined as: availability = 2, response time = 1, data loss = 3, data privacy = 4. Availability and data privacy use equation (2), since the higher the value the better, while response time and data loss use equation (3).
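Using these coefficients and equations (2)-(4), the workflow scoring could be implemented along the following lines. The coefficients come from the paper's blogging-engine example; the QoS values for the two workflows are invented for illustration (Figure 5's actual numbers are not reproduced here).

```python
# Coefficients from the blogging-engine example (they sum to 10).
COEFFS = {"availability": 2, "response_time": 1, "data_loss": 3, "data_privacy": 4}
HIGHER_IS_BETTER = {"availability", "data_privacy"}  # Eq. (2); others use Eq. (3)

def rank(qos, maxima):
    """Global rank R(S_i) = sum over indicators of R(S_i, Q_j), Eq. (4)."""
    total = 0.0
    for attr, value in qos.items():
        share = value / maxima[attr]
        if attr in HIGHER_IS_BETTER:
            total += share * COEFFS[attr]          # Eq. (2): (Value/Max) * Coefficient
        else:
            total += (1 - share) * COEFFS[attr]    # Eq. (3): (1 - Value/Max) * Coefficient
    return total

# Invented QoS values for two candidate blog-engine workflows.
wf1 = {"availability": 99.0, "response_time": 200, "data_loss": 2, "data_privacy": 8}
wf2 = {"availability": 99.5, "response_time": 150, "data_loss": 1, "data_privacy": 9}
maxima = {k: max(wf1[k], wf2[k]) for k in wf1}
best = max([wf1, wf2], key=lambda w: rank(w, maxima))
# wf2 scores better on every indicator, so it is selected.
```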
From Figure 5, we observe a better quality for the second workflow, which uses BlogEngine.NET. The workflows are sorted according to their rank and stored in case of a negative validation result. A script deploying and connecting all needed services with Juju is generated for the best workflow.
4. Conclusion and Future Work

In this paper we presented the architecture and implementation of our approach for cloud application development, an agile approach to service discovery and deployment designed for those with little background knowledge of cloud-based services. This means that the people closest to the business project can deploy the cloud services they need with little intervention from an external IT professional. Our approach uses services, described using Linked USDL, that are available to a service orchestrator such as Juju for automatic deployment. Future developments mainly include the implementation of a charm [17] creation tool and the generation of performance test scenarios according to the Gatling tool template.
References
[1] K. Sledziewski, B. Bordbar and R. Anane, A DSL-based Approach to Software Development and Deployment, 24th IEEE International Conference on Advanced Information Networking and Applications, 2010.
[2] V. Andrikopoulos, C. Fehling and F. Leymann, Designing for CAP: The Effect of Design Decisions on the CAP Properties of Cloud-native Applications, CLOSER 2012, 2nd International Conference on Cloud Computing and Services Science, Proceedings, 2012.
[3] F. Giove, D. Longoni, M. Shokrolahi Yancheshmeh, D. Ardagna and E. Di Nitto, An Approach for the Development of Portable Applications on PaaS Clouds, CLOSER 2013, 3rd International Conference on Cloud Computing and Services Science, 2013.
[4] D. Ardagna, E. Di Nitto, G. Casale, D. Petcu, P. Mohagheghi, S. Mosser, P. Matthews, A. Gericke, C. Ballagny, F. D'Andria, C.-S. Nechifor and C. Sheridan, MODACLOUDS: A Model-Driven Approach for the Design and Execution of Applications on Multiple Clouds, MiSE 2012, 2012.
[5] MODACLOUDS, Available: www.modaClouds.eu/
[6] H. Kommalapati and W. H. Zack, The SaaS Development Lifecycle, Available: www.infoq.com/articles/SaaS-Lifecycle, 2011.
[7] R. Guha and D. Al-Dabass, Impact of Web 2.0 and Cloud Computing Platform on Software Engineering, IEEE International Symposium on Electronic System Design, 2010.
[8] H. Sun, X. Wang, C. Zhou, Z. Huang and X. Liu, Early Experience of Building a Cloud Platform for Service Oriented Software Development, 2010 IEEE International Conference on Cluster Computing Workshops and Posters (CLUSTER WORKSHOPS), 2010.
[9] H. Benfenatki, C. Ferreira Da Silva, N. Benharkat, and P. Ghodous, Cloud Application Development Methodology, IEEE/WIC/ACM International Conference on Web Intelligence, 2014.
[10] Linked USDL, Available: www.linked-usdl.org/
[11] Linked Data, Available: linkeddata.org/
[12] C. Pedrinaci, J. Cardoso, and T. Leidig, Linked USDL: A Vocabulary for Web-scale Service Trading, 11th Extended Semantic Web Conference (ESWC), 2014.
[13] Juju, Available: juju.ubuntu.com/
[14] Amazon EC2, Available: aws.amazon.com/ec2/
[15] OpenStack, Available: www.openstack.org/
[16] Gatling Tool, Available: gatlingtool.org/
[17] Juju charms, Available: https://juju.ubuntu.com/docs/charms.html
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-317
Extending BPMN for Configurable Process Modeling

Hongyan ZHANG a,1, Weilun HAN a and Chun OUYANG b
a School of Software Engineering, Beijing Jiaotong University, Beijing 100044, China
b Faculty of Science and Technology, Queensland University of Technology, Australia
Abstract. Configurable process modeling provides a key approach to capture possible process variations in one (reference) model on the one hand, and to retrieve individual process variants through configuration of that model on the other. BPMN, a standard for business process modeling and a mainstream language widely adopted in practice, lacks configurable modeling capability. In this paper, we propose an extension to BPMN to support configurable process modeling with a focus on the control-flow perspective. Using configurable workflow nets as the theoretical foundation, we formally define the semantics of the proposed extension to BPMN, its correctness-preserving conditions, and its configuration semantics. We name the resulting language Configurable BPMN, i.e. C-BPMN, and provide a running example to illustrate how C-BPMN supports configurable process modeling as well as process configuration. Keywords. Configurable process modeling; Configuration semantics; Configurable reference model; BPMN
Introduction
With the rising level of computational abstraction, the business process has become both an organizational asset and a computational object: it bridges the business system and the information system and helps integrate them tightly. In the contemporary era of "big programs", programming technology is shifting its focus from algorithmic complexity to business-logic complexity, aiming to build a machine that can directly understand business logic and processes in a visual way. However, existing business process modeling languages are normally weak in multi-scenario description and seldom support configurable process modeling. Even where such support exists, the problem of retrieving individual process variants through configuration of a reference model remains unsolved. These problems heavily affect process reuse in the design phase; in addition, rigid process logic imposes a heavy maintenance burden on the corresponding information systems during execution. The root cause is the lack of techniques supporting context-aware information systems and adaptable programming or modeling languages [1, 2]. In these cases, configurable and adaptable process modeling
1 Corresponding Author.
H. Zhang et al. / Extending BPMN for Configurable Process Modeling
techniques are needed to enhance process reuse and the flexibility of information systems. Although the reference model technique [3, 4] can improve process reuse to some extent by supporting multi-scenario description, it does not change the modeling languages themselves. Process modeling languages still cannot capture possible process variations in one reference model and retrieve the individual process variant that best meets the specific requirements of a context. As a result, a reference model is normally larger in size, more complex in logic and less efficient in execution than its individualized model [5]. It is therefore necessary to build a computation environment that supports configurable process modeling. Such an environment has three fundamental elements: 1) a configurable process modeling language; 2) a modeling tool supporting configurable process modeling; 3) a machine that can understand a configurable process model and customize it into a configured model according to specific requirements. To build this environment, the theoretical problems addressed below must be solved. BPMN, the standard for business process modeling and a mainstream language widely adopted by industry, has so far lacked a configurable counterpart. It is therefore meaningful and valuable to solve the theoretical problems that are critical to building a configurable BPMN modeling environment. In this paper, related work is first introduced to help readers better understand the paper's contributions, and the research approach is then discussed so that a clear problem-solving path for this technical field can be determined.
Based on the formal definition of BPMN and of process configuration semantics, configurable BPMN (C-BPMN) is defined in terms of syntax and semantics. Since model correctness verification and validation are essential when a language is created or improved, the paper provides the correctness-preserving conditions and constraints for C-BPMN. Finally, a running example demonstrates how C-BPMN supports configurable process modeling as well as process configuration.
1. Related Work
Process modeling languages are either formal or non-formal. YAWL [6] and Petri nets [7] are typical formal languages; UML, EPC and BPMN belong to the non-formal type [8-10]. Compared with other process modeling languages, BPMN is richer in process modeling expression, in both semantics and syntax. Configurable modeling and its individualization are mainly used in the design phase, before model execution [5]. By adding a configuration session to the process engineering life cycle, a reference model can be automatically customized into an individual process according to specific requirements. The configured model is slimmer than its reference model in both size and logic complexity; consequently, the information system corresponding to the configured model executes more efficiently than one based on the original reference model. Configurable modeling techniques have become a popular field of academic research. The papers [5, 11-14] introduced configurable extensions to EPC and YAWL, including C-EPC, C-iEPC, ADOM-EPC and C-YAWL; the authors
of this paper also proposed a configurable extension to BPMN, called C-BPMN [15], from the control-flow perspective. According to [16, 17], two fundamental approaches to extending process modeling languages can be identified. The first extends a language by introducing configurable nodes. C-EPC [5] is a configurable counterpart of EPC that provides configurable statements by turning EPC's Function and Connector nodes into configurable ones. Based on the definition of C-EPC, ADOM-EPC [13] builds a configurable extension by adding configurable attributes to the Event entity and building configurable event nodes in a model. C-iEPC [11, 12], the successor of iEPC [18], provides a resource- and object-oriented configurable solution by introducing configurable Role and Object nodes into EPC models. The configurable extension to BPMN recently proposed in [15] gave a formal definition and appropriate syntactic correctness-preserving conditions for configurable BPMN; regrettably, however, that formal definition is only a syntax-oriented static description without dynamic semantics. The second approach extends a language based on hiding and blocking. C-YAWL [14] is a configurable extension to YAWL that adds input- and output-port attributes to the Action entity and sets the port status to hiding or blocking. C-YAWL thus supports retrieving an individual process variant from its reference model by configuring an Action node with one of three statuses: normal execution, blocking then execution, or skip then execution. Even though many mainstream languages now have configurable modeling solutions based on language extension, research on process configuration semantics and individualization computation is still lacking.
In this paper, we develop a renewed version of configurable BPMN based on configuration semantics and provide C-BPMN models with correctness-preserving conditions as well as an individualization algorithm. Finally, a test case covering seven types of configuration patterns demonstrates that the algorithm supports complex process logic.
2. Approach of Research
A programming or modeling language supporting a kind of meta object is usually developed along three dimensions: time, space and context. Research along the space dimension helps create or improve the language's entities and their syntax, based on insight into the ontology of its computational object. Research along the time dimension mainly focuses on the behavioral semantics of the meta object, and its results enhance the language's capability for dynamic semantics description; for process modeling languages, such change normally happens in their syntax definitions. Research along the context dimension focuses on the communication between a language and its context; its results help build adaptable languages for constructing context-aware information systems. The space and context dimensions are out of the scope of this paper; we focus only on the extension of BPMN based on configuration semantics along the time dimension. That means we transform BPMN into C-BPMN through the implementation of hiding and blocking, the fundamental operations of process configuration semantics; define the correctness-preserving
conditions of C-BPMN's syntax and the correctness-preserving constraints of the C-BPMN model's execution semantics; and finally provide an appropriate individualization algorithm based on the definitions of the configured C-BPMN model and of configuration semantics. The details are explained below:
Step 1: Extend BPMN into C-BPMN based on process configuration semantics. A precise definition of process configuration semantics can be achieved through the study of the configurable workflow net (C-WF net); hiding and blocking are the fundamental operations of the semantics. By mapping a BPMN model onto an appropriate C-WF net, C-BPMN, the configurable solution for BPMN, can be obtained through the implementation of these two fundamental operations.
Step 2: Validate the C-BPMN model's syntactic correctness in terms of the correctness-preserving conditions of BPMN syntax. Verify the C-BPMN model's semantic correctness by verifying that its equivalent induced Petri net is a C-WF net, and that the C-WF net's semantics is correct according to the correctness-preserving constraints of execution semantics [19].
Step 3: Develop the individualization algorithm for C-BPMN based on process configuration semantics. The syntactic correctness of its result, the configured model, can be validated following the correctness-preserving conditions of C-BPMN. The semantic correctness verification of the configured process remains a problem to be solved in future work.
Step 4: A process covering seven types of configuration patterns [20] is chosen as a running example to demonstrate how C-BPMN supports configurable process modeling as well as configuration semantics.
3. Configurable BPMN (C-BPMN)
Although a formal definition of C-BPMN already exists in previous research [15], it is only a static definition without dynamic semantics. In this section, a renewed formal definition of C-BPMN is discussed based on hiding and blocking semantics [16] as well as the original formal definition of BPMN [15].
3.1. Syntax of C-BPMN
Before introducing the C-BPMN syntax, BPMN itself must be introduced. As a mainstream process modeling language in industry, BPMN provides a set of graphical notations for business process modeling. The Business Process Diagram (BPD), a kind of flowchart in the graph-theoretic sense, provides the formal description of BPMN. The elements of a BPD form a subset of BPMN's elements, consisting only of the core elements of BPMN: the Event set with the two special instances Start and End, the Activity set, and the Gateway set. A Start event marks the beginning of a process; an End event marks its end. Other events are out of the scope of this paper. The Activity set has two types of elements: Task and Sub-process. A task is an atomic activity and represents a piece of work to be performed within a process; sub-processes are out of the scope of this paper. A gateway is a routing construct used to control the divergence and convergence of sequence flow. There are four main types of gateways: the parallel fork gateway (AND-split), the parallel join gateway (AND-join), the data-based XOR decision gateway
(XOR-split), and the XOR merge gateway (XOR-join). Other types of gateways are out of the scope of this paper. A core BPMN process using the core subset of BPMN elements can be completely formalized as a BPD. First we define the syntax of a core BPMN process.
Definition 1 (Core BPMN Process). A core BPMN process is a tuple BPMN = (O, T, E, G, C, F) where:
- O is a set of objects which can be partitioned into disjoint sets of tasks T, events E, and gateways G, i.e., O = T ∪ E ∪ G,
- T is a finite (non-empty) set of tasks,
- E is a finite (non-empty) set of events, which can be partitioned into disjoint sets of start events E_S, end events E_E and intermediate events E_I,
- G is a finite set of gateways, which can be partitioned into disjoint sets of parallel fork gateways G_F, parallel join gateways G_J, data-based XOR decision gateways G_D, and XOR merge gateways G_M, i.e., G_F, G_J, G_D and G_M are pairwise disjoint and G_F ∪ G_J ∪ G_D ∪ G_M = G,
- C: G → {∧, XOR, ∨} is a function which maps each gateway onto a control logic, with G_∧ = {g ∈ G | C(g) = ∧}, G_XOR = {g ∈ G | C(g) = XOR} and G_∨ = {g ∈ G | C(g) = ∨},
- G_S = {g ∈ G | |output(g)| ≥ 2} is the set of split gateways and {g ∈ G | |input(g)| ≥ 2} is the set of join gateways,
- G_F = G_S ∩ G_∧ and G_D = G_S ∩ G_XOR are the split gateways with AND and XOR logic, while G_J and G_M are the corresponding join gateways, i.e., G_J ∪ G_M is the set of join gateways,
- F ⊆ O × O is the control flow relation.
The paper [15] gave C-BPMN a formal definition with configurable tasks and configurable gateways by extending the core BPMN entities into configurable ones. A configurable task may be set to ON, OFF or OPT. A configurable gateway can be mapped onto a concrete gateway representing a split or join logic construct, and can even be configured to a sequence.
Definition 2 (Configurable BPMN Process). A configurable BPMN process is a tuple C-BPMN = (O, T, E, G, G_F, G_J, G_D, G_M, F, T_C, G_C, R_C) where:
- O, T, E, G, G_F, G_J, G_D, G_M and F are specified in Definition 1,
- T_C ⊆ T is the set of configurable tasks, each of which may be configured to a value in {ON, OFF, OPT},
- G_C ⊆ G is the set of configurable gateways, each of which may be configured to a value in CT = {∧, XOR, ∨} ∪ CTS, where CTS = {SEQ_n | n ∈ T ∪ E ∪ G},
- R_C is a set of configuration requirements.
3.2. Semantics of C-BPMN
The operations of hiding and blocking in C-WF nets represent the semantics of process configuration and can be used as a foundational framework to extend BPMN into C-BPMN. Figure 1 shows the relationship between C-BPMN and C-WF nets. The first column gives the type of configuration operation (configurable task switched OFF; configurable task switched OPT; configurable gateway restricted to a sequence; configurable gateway left as before); the second column contains a C-BPMN model fragment with the configurable node drawn as a double-line rectangle; the third column shows the configured BPMN resulting from the model in the second column; the fourth column displays a configurable WF-net semantically equivalent to the C-BPMN model in the second column; and the fifth column shows the configured result of that WF-net. The configured WF-net is semantically equivalent to the configured BPMN; it is the semantic net of the corresponding configurable BPMN.
Figure 1. Relationship between C-BPMN and C-WF nets
The first row in Figure 1 depicts the mapping of a configurable task onto C-WF nets: the task within the C-BPMN process fragment is switched OFF, which corresponds to a hidden transition within the corresponding C-WF net. The second row also depicts a configurable task mapping: the task is switched OPT, which corresponds to an optionally hidden transition in the corresponding C-WF net. The third row depicts a configurable gateway mapping: a configurable XOR-split gateway is restricted to a sequence, which corresponds to a blocked transition in the corresponding C-WF net.
The fourth row also depicts a configurable gateway mapping: a configurable XOR-split can be configured to an XOR gateway, which corresponds to an optionally blocked transition within the corresponding C-WF net.
4. Correctness-Preserving Properties of C-BPMN
C-BPMN, as a new language, differs from its original language BPMN, so both correctness validation of its syntax and correctness verification of its execution semantics are needed. In this section, the correctness-preserving properties of C-BPMN are discussed with respect to the conditions for syntax validation and the constraints for semantics verification.
4.1. Syntactical Correctness
The papers [21, 22] described the correctness-preserving conditions of BPMN syntax as follows:
Definition 3 (Well-formed BPMN Process). A core BPMN process as given in Definition 1 is well formed if the relation F satisfies the following requirements:
- ∀s ∈ E_S: input(s) = ∅ ∧ |output(s)| = 1, i.e., start events have an indegree of zero and an outdegree of one,
- ∀e ∈ E_E: output(e) = ∅ ∧ |input(e)| = 1, i.e., end events have an outdegree of zero and an indegree of one,
- ∀g ∈ G_F ∪ G_D: |input(g)| = 1 ∧ |output(g)| > 1, i.e., fork and decision gateways have an indegree of one and an outdegree of more than one,
- ∀g ∈ G_J ∪ G_M: |output(g)| = 1 ∧ |input(g)| > 1, i.e., join and merge gateways have an outdegree of one and an indegree of more than one,
- ∀x ∈ O, ∃(s, e) ∈ E_S × E_E: s F* x ∧ x F* e, i.e., every object is on a path from a start event to an end event.
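As an illustration of Definition 3, the well-formedness conditions can be checked mechanically on a process given as a flow relation. The following Python sketch is our own minimal illustration under invented node names; it is not an implementation from the paper.

```python
def inputs(flows, node):
    """Objects with an arc into node (its indegree set)."""
    return {a for (a, b) in flows if b == node}

def outputs(flows, node):
    """Objects with an arc out of node (its outdegree set)."""
    return {b for (a, b) in flows if a == node}

def reachable(flows, node):
    """All objects reachable from node via F* (node itself included)."""
    seen, stack = {node}, [node]
    while stack:
        for nxt in outputs(flows, stack.pop()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def well_formed(starts, ends, tasks, splits, joins, flows):
    objects = starts | ends | tasks | splits | joins
    ok = all(not inputs(flows, s) and len(outputs(flows, s)) == 1 for s in starts)
    ok = ok and all(not outputs(flows, e) and len(inputs(flows, e)) == 1 for e in ends)
    ok = ok and all(len(inputs(flows, g)) == 1 and len(outputs(flows, g)) > 1 for g in splits)
    ok = ok and all(len(outputs(flows, g)) == 1 and len(inputs(flows, g)) > 1 for g in joins)
    # every object must lie on a path from a start event to an end event
    on_path = {x for s in starts for x in reachable(flows, s) if ends & reachable(flows, x)}
    return ok and objects <= on_path

# start -> A -> split -> (B | C) -> join -> end
FLOWS = {("start", "A"), ("A", "s1"), ("s1", "B"), ("s1", "C"),
         ("B", "j1"), ("C", "j1"), ("j1", "end")}
print(well_formed({"start"}, {"end"}, {"A", "B", "C"}, {"s1"}, {"j1"}, FLOWS))  # True
```

Removing the arc from the join to the end event, for example, violates the second condition and makes the check fail.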
Since C-BPMN changes BPMN only in entity attributes, not in the entities themselves, C-BPMN does not change the rules of BPMN syntax, and the syntactic correctness of a C-BPMN model can be validated directly against the correctness-preserving conditions of BPMN. However, since the semantics of process configuration has been added to BPMN while the C-BPMN model itself has no behavioral semantics, it is impossible to verify semantic correctness directly on the C-BPMN model.
4.2. Behavioral Correctness
Without behavioral semantics, C-BPMN is ambiguous, and it is impossible to check a model for semantic consistency and completeness. WF-nets have formal semantics, so it is sufficient to map BPMN onto WF-nets to specify the behavior unambiguously. The correctness-preserving results established for C-WF nets can be exploited for C-BPMN models, where each configuration step is soundness-preserving. The induced Petri net is a semantically equivalent net of a C-BPMN model, obtained by mapping the C-BPMN onto a Petri net. The verification of a C-BPMN model's semantic correctness is equivalent to the correctness verification of its induced
Petri net. Since there already exists a result on the correctness-preserving of C-WF nets [19], the semantic correctness verification of a C-BPMN model can be realized by verifying that its induced net is a configurable workflow net. Table 1 shows the basic strategy used to map BPMN onto WF-nets: a start or end event corresponds to a small module with a place and a silent transition; a task corresponds to a transition named after the task, with one input place and one output place. The translation of gateways is more complex than that of events and tasks: gateways are mapped onto small Petri-net modules whose silent transitions capture their routing behavior. For example, the parallel fork (AND-split) is mapped onto a silent transition with one input place and more than one output place.
Table 1. Mapping BPMN objects to WF-net modules [the table pairs each BPMN object, namely Parallel Fork (AND-Split), Parallel Join (AND-Join), Data-based XOR (XOR-Split) and XOR Merge (XOR-Join), with its graphical WF-net module]
The paper [19] provided correctness-preserving semantics for C-WF nets. To construct a WF-net from a C-BPMN model, we simply use the mapping from BPMN objects to WF-net modules. We can formally define the induced Petri net as follows:
Definition 4 (Induced Petri net). Let C-BPMN = (O, T, E, G, E_S, E_E, F, T_C, G_C, R_C) be a syntactically correct C-BPMN. PN(C-BPMN) = (P, T', F') is the Petri net induced by the C-BPMN such that:
- P = E ∪ ⋃_{c ∈ G_C} P_c,
- T' = T ∪ ⋃_{c ∈ G_C} T_c,
- F' = (F ∩ ((E × T) ∪ (T × E))) ∪ ⋃_{c ∈ G_C} F_c,
where P_c, T_c and F_c are defined in Table 2.
Table 2. Petri-net fragments (P_c, T_c, F_c) induced by a configurable gateway g:
- AND-join (g ∈ G_J): P_c = {p_c,x | x ∈ input(g)}, T_c = {t_c}, F_c = {(x, p_c,x) | x ∈ input(g)} ∪ {(p_c,x, t_c) | x ∈ input(g)} ∪ {(t_c, x) | x ∈ output(g)}
- AND-split (g ∈ G_F): P_c = {p_c}, T_c = {t_c}, F_c = {(x, p_c) | x ∈ input(g)} ∪ {(p_c, t_c)} ∪ {(t_c, x) | x ∈ output(g)}
- XOR-join (g ∈ G_M): P_c = {p_c}, T_c = {t_c,x | x ∈ input(g)}, F_c = {(x, t_c,x) | x ∈ input(g)} ∪ {(t_c,x, p_c) | x ∈ input(g)} ∪ {(p_c, x) | x ∈ output(g)}
- XOR-split (g ∈ G_D): P_c = {p_c}, T_c = {t_c,x | x ∈ output(g)}, F_c = {(x, p_c) | x ∈ input(g)} ∪ {(p_c, t_c,x) | x ∈ output(g)} ∪ {(t_c,x, x) | x ∈ output(g)}
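The per-gateway fragments can be sketched in code. The following Python function is an illustrative reconstruction of the standard BPMN-to-Petri-net gateway translation; the p_/t_ naming scheme and the string encoding of nodes are our own assumptions, not the paper's notation.

```python
def gateway_fragment(g, ins, outs, kind):
    """Places, silent transitions and arcs induced by one gateway."""
    if kind == "AND-split":      # one silent transition fans out to every branch
        return ({f"p_{g}"} | {f"p_{g}_{o}" for o in outs},
                {f"t_{g}"},
                {(f"p_{g}", f"t_{g}")} | {(f"t_{g}", f"p_{g}_{o}") for o in outs})
    if kind == "AND-join":       # one silent transition waits on every branch
        return ({f"p_{g}_{i}" for i in ins} | {f"p_{g}"},
                {f"t_{g}"},
                {(f"p_{g}_{i}", f"t_{g}") for i in ins} | {(f"t_{g}", f"p_{g}")})
    if kind == "XOR-split":      # one silent transition per outgoing branch
        return ({f"p_{g}"} | {f"p_{g}_{o}" for o in outs},
                {f"t_{g}_{o}" for o in outs},
                {(f"p_{g}", f"t_{g}_{o}") for o in outs}
                | {(f"t_{g}_{o}", f"p_{g}_{o}") for o in outs})
    if kind == "XOR-join":       # one silent transition per incoming branch
        return ({f"p_{g}_{i}" for i in ins} | {f"p_{g}"},
                {f"t_{g}_{i}" for i in ins},
                {(f"p_{g}_{i}", f"t_{g}_{i}") for i in ins}
                | {(f"t_{g}_{i}", f"p_{g}") for i in ins})
    raise ValueError(kind)

places, transitions, arcs = gateway_fragment("g1", [], ["b", "c"], "XOR-split")
print(sorted(transitions))  # ['t_g1_b', 't_g1_c']
```

Blocking an XOR-split branch then amounts to removing one of these silent transitions, which is exactly the blocking operation used in Section 3.2.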
Lemma 1. Let C-BPMN = (O, T, E, G, E_S, E_E, F, T_C, G_C, R_C) be a syntactically correct C-BPMN and PN(C-BPMN) be its induced Petri net. Then PN(C-BPMN) is a WF-net.
Proof. Follows directly from the construction of PN(C-BPMN).
Figure 2 depicts how a C-BPMN model is translated into an induced Petri net. The induced Petri net specifies the execution semantics of the C-BPMN model, which allows us to identify those C-BPMN models that can be correctly executed.
Definition 5 (Sound C-BPMN). Let C-BPMN = (O, T, E, G, E_S, E_E, F, T_C, G_C, R_C) be a syntactically correct C-BPMN and PN(C-BPMN) be its induced Petri net. The C-BPMN is sound iff PN(C-BPMN) is sound.
Figure 2. C-BPMN model and its Induced Petri net
5. Semantics of C-BPMN Process Configuration
In this section we discuss how a C-BPMN model is mapped to a concrete model and how process correctness can be preserved during the configuration of a C-BPMN. The paper [5] defined the configuration of a C-BPMN as follows:
Definition 6 (Configuration of C-BPMN Process). Let C-BPMN = (O, T, E, G, G_F, G_J, G_D, G_M, F, T_C, G_C, R_C) be a C-BPMN. A mapping l_C ∈ (T_C → {ON, OFF, OPT}) ∪ (G_C → CT) is a configuration of the C-BPMN if for each g ∈ G_C:
- l_C(g) ≤_C C(g), where ≤_C = {(n, n) | n ∈ CT} ∪ {(XOR, ∨), (∧, ∨)} ∪ {(SEQ_n, c) | SEQ_n ∈ CTS ∧ c ∈ {XOR, ∨}},
- if l_C(g) ∈ CTS and g ∈ G_J ∪ G_M, then there exists an n ∈ input(g) such that l_C(g) = SEQ_n,
- if l_C(g) ∈ CTS and g ∈ G_S, then there exists an n ∈ output(g) such that l_C(g) = SEQ_n.
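The ordering relation ≤_C can be sketched as a small predicate. This is a hedged illustration assuming the C-EPC-style refinement rules (a value may stay as it is, an OR may be restricted to XOR, AND or a sequence, and an XOR may be restricted to a sequence); the string encoding of sequence values is an assumption.

```python
def refines(configured, original):
    """True iff configured <=_C original (hedged C-EPC-style rules)."""
    if configured == original:
        return True
    if original == "OR":                   # OR may become XOR, AND or a sequence
        return configured in {"XOR", "AND"} or configured.startswith("SEQ:")
    if original == "XOR":                  # XOR may only be restricted to a sequence
        return configured.startswith("SEQ:")
    return False                           # AND admits no restriction

print(refines("SEQ:taskB", "XOR"))  # True: an XOR-split restricted to one branch
print(refines("OR", "XOR"))         # False: a restriction cannot be relaxed
```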
The paper [5] also depicts some examples of configurations and valid configurations. The previous section, in addition, gives the execution semantics of C-BPMN. Now we provide an algorithm to construct a concrete BPMN model from a C-BPMN. Note that a C-BPMN defines a number of concrete BPMN models, and each valid configuration maps the C-BPMN to one concrete BPMN. The function β maps a C-BPMN and its configuration onto a concrete BPMN β(C-BPMN, l_C).
Definition 7 (Semantics of Configurations). Let C-BPMN = (O, T, E, G, E_S, E_E, F, T_C, G_C, R_C) be a C-BPMN, and l_C a configuration of the C-BPMN. The corresponding BPMN β(C-BPMN, l_C) is constructed as follows:
- Step 1: Replace the label of each configurable gateway by its configured value l_C(g); for a gateway configured to a sequence SEQ_n, remove every arc of the gateway that does not connect it to the node n.
- Step 2: Replace each task t with l_C(t) = OFF by a silent "skip" task.
- Step 3: For each task t with l_C(t) = OPT, insert an XOR-split gateway before t and an XOR-join gateway after t, connected by a "skip" branch that bypasses t.
- Step 4: Remove all gateways with just one input arc and one output arc.
- Step 5: Re-apply Step 2, i.e., try to remove the remaining tasks labeled "skip", stitching their predecessors to their successors.
- Step 6: Remove all nodes not on some path from a start event to an end event, then re-apply Step 4 to remove the remaining trivial gateways.
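The OFF and OPT steps of this construction can be sketched over a plain flow relation. The following Python function is a hedged illustration, not the authors' implementation: the node names (xs_/xj_) are invented, OFF tasks are removed and stitched over directly instead of inserting and later deleting a skip task, the skip branch is represented by a direct arc between the two added gateways, and gateway relabeling and pruning (Steps 1, 4 and 6) are omitted.

```python
def individualize(flows, task_config):
    """Apply OFF/OPT task configurations to a flow relation (set of arcs)."""
    flows = set(flows)
    for task, setting in task_config.items():
        ins = {a for (a, b) in flows if b == task}
        outs = {b for (a, b) in flows if a == task}
        if setting == "OFF":
            # Step 2 (simplified): drop the task, stitch predecessors to successors
            flows = {(a, b) for (a, b) in flows if task not in (a, b)}
            flows |= {(a, b) for a in ins for b in outs}
        elif setting == "OPT":
            # Step 3: wrap the task in an XOR-split/XOR-join pair with a skip arc
            split, join = f"xs_{task}", f"xj_{task}"
            flows = {(a, b) for (a, b) in flows if task not in (a, b)}
            flows |= {(a, split) for a in ins} | {(join, b) for b in outs}
            flows |= {(split, task), (task, join), (split, join)}
    return flows

FLOWS = {("start", "A"), ("A", "B"), ("B", "end")}
configured = individualize(FLOWS, {"A": "OFF", "B": "OPT"})
print(("start", "xs_B") in configured)  # True: A is gone and B is now optional
```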
The following theorem shows that the resulting β(C-BPMN, l_C) is syntactically correct, provided the initial C-BPMN is syntactically correct:
Theorem 1 (β(C-BPMN, l_C) is a BPMN). Let C-BPMN = (O, T, E, G, E_S, E_E, F, T_C, G_C, R_C) be a C-BPMN, and l_C a configuration of the C-BPMN. Then β(C-BPMN, l_C) is a BPMN satisfying all requirements stated in Definition 1.
Proof. The resulting BPMN = (O, T, E, G, C, E_S, E_E, F) satisfies all requirements by construction:
- the sets E, T, G are disjoint; although not always stated explicitly, we assume no name clashes,
- there is at least one event e ∈ E_S such that input(e) = ∅; start events are not removed,
- there is at least one event e ∈ E_E such that output(e) = ∅; end events are not removed,
- for each t ∈ T: |input(t)| = 1 and |output(t)| = 1,
- for each g ∈ G: |input(g)| ≥ 1 and |output(g)| ≥ 1; existing gateways and newly added gateways satisfy this requirement.
In the previous section, we discussed that a C-BPMN configuration can be represented using the hiding and blocking operations defined on C-WF nets [19]. Therefore, we can project a C-BPMN configuration onto the induced WF-net. For example, if a configurable task in a C-BPMN is switched to OPT, the corresponding transition in the WF-net is optionally hidden; if a configurable XOR gateway is restricted, the corresponding transition in the WF-net is blocked.
Definition 8 (Induced WF-net Configuration). Let C-BPMN = (O, T, E, G, E_S, E_E, F, T_C, G_C, R_C) be a syntactically correct C-BPMN, l_C one of its configurations, and PN(C-BPMN) its induced Petri net. Then l_C^PN: T' → {allow, hide, block, option hide, option block} is the configuration of PN(C-BPMN) induced by l_C.
If we start from a C-BPMN that has been checked for soundness and apply a configuration step, we can check the correctness of the resulting model by reasoning on the induced WF-net before and after the configuration.
Proposition 1 (Soundness-preserving C-BPMN configuration). Let C-BPMN be a sound C-BPMN, l_C one of its configurations, and PN(C-BPMN) the induced WF-net. Let also l_C^PN be the configuration of PN(C-BPMN) induced by l_C, and β*(PN(C-BPMN), l_C^PN) the configured net from which all nodes not on a directed path from the input place to the output place have been removed. If PN(β(C-BPMN, l_C)) is equal to β*(PN(C-BPMN), l_C^PN), then β(C-BPMN, l_C) is sound.
Proof. It is obvious that: 1) since the C-BPMN is sound, its configured BPMN β(C-BPMN, l_C) is syntactically correct (Theorem 1) and PN(C-BPMN) is sound (Definition 5); 2) since β(C-BPMN, l_C) is syntactically correct, its induced Petri net PN(β(C-BPMN, l_C)) is a WF-net, and β*(PN(C-BPMN), l_C^PN) is sound, since it is the configured WF-net of the sound net PN(C-BPMN). If PN(β(C-BPMN, l_C)) is equal to β*(PN(C-BPMN), l_C^PN), then PN(β(C-BPMN, l_C)) is sound, and hence β(C-BPMN, l_C) is sound.
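Definition 8 can be read as a simple mapping from C-BPMN configuration values to WF-net operations. The following Python sketch is illustrative only: the task cases and the SEQ/XOR gateway cases follow the examples given in the text, while treating any other gateway value as "allow" is our assumption.

```python
def induced_operation(kind, value):
    """Map a C-BPMN configuration value to the induced WF-net operation."""
    if kind == "task":
        return {"ON": "allow", "OFF": "hide", "OPT": "option hide"}[value]
    if kind == "gateway":
        # restricting a gateway to a sequence blocks the other branches;
        # keeping an XOR choice leaves the blocking optional at run time
        if value.startswith("SEQ:"):
            return "block"
        return "option block" if value == "XOR" else "allow"
    raise ValueError(kind)

print(induced_operation("task", "OPT"))       # option hide
print(induced_operation("gateway", "SEQ:B"))  # block
```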
6. Case Study
To show the configuration steps of the configuration semantics, a C-BPMN model is introduced and analyzed. Table 3 lists nine configuration patterns [20], seven of which are covered by the extension of BPMN: a configurable task of C-BPMN corresponds to the Optionality pattern and can be configured as ON, OFF or OPT, while a configurable gateway in C-BPMN corresponds to six patterns, covering both split and join gateways. Figure 3 depicts a configurable model. The configurable aspects are denoted by double-line borders, and we ignore the meaning of each element within the model. In the example, we have three kinds of configurable nodes: a configurable task, a configurable XOR gateway, and two configurable gateways (XOR or AND) connected by arcs. Tasks A, E and F are configurable tasks, and gateways G1,
G2 and G3 are configurable gateways. We also ignore the configuration requirements (R_C) in the model. The configurable nodes in the figure are configured as follows:
- the configurable task A has been switched to OPT,
- the configurable task E remains ON,
- the configurable task F has been switched to OFF,
- the configurable gateway G1 has been configured to XOR,
- the configurable gateway G2 has been configured to a sequence,
- the configurable gateway G3 has been configured to AND.
Figure 3. Configuration semantics of a C-BPMN model
Table 3. Configuration patterns in the C-BPMN
- Optionality (configurable task): supported
- Parallel Split (configurable split gateway): supported
- Exclusive Choice (configurable split gateway): supported
- Multi Choice (configurable split gateway): supported
- Synchronization (configurable join gateway): supported
- Simple Merge (configurable join gateway): supported
- Synchronizing Merge (configurable join gateway): supported
- Interleaved Parallel Routing (others): not supported
- Sequence Inter-relationships (others): not supported
Now we follow the configuration semantics. In Step 1, we map all configurable gateways to concrete gateways: G1 and G3 remain as before, while G2 is restricted to a sequence, so we delete the arcs from G2 to the branches that were not chosen. In Step 2, we retain task E and replace task F by a skip task. Next we handle task A: in Step 3 we add an XOR-split gateway, a skip task and an XOR-join gateway around it. In Step 4 we remove G2, because the gateway now has only one input and one output. In Step 5 we delete the skip task that came from task F. Finally, in Step 6, we remove all nodes not on some path from a start event to an end event, and re-apply Step 4 to eliminate gateways with one input and one output.
7. Conclusion and Outlook
The paper provides a complete solution for the configurable extension of BPMN, which focuses on extending its core entities and giving them executable semantics. Moreover, the correctness-preserving properties of C-BPMN are discussed according to its formal definition: the correctness-preserving conditions of C-BPMN for syntax validation and the correctness-preserving constraints of C-BPMN models for semantics verification are proposed separately. Additionally, the paper introduces an individualization algorithm for C-BPMN based on process configuration semantics and provides an efficient and effective method to automatically customize a C-BPMN model in accordance with specific requirements. Finally, a running example covering seven configuration patterns demonstrates how well the algorithm supports complex C-BPMN models and process configuration semantics. Building on these results, several problems remain to be solved in the future:
1. Developing further configurable extensions of BPMN that cover not only the core entities of BPMN but also the other control-flow entities, for example intermediate events, complex gateways, etc.
2. Up to now, there exists no scientific approach for identifying which entities are suitable to be made configurable, nor a method for implementing the corresponding configurable modeling technique.
3. Building a C-BPMN modeling environment based on the existing results above.
References
[1] H. Klaus, M. Rosemann, G.G. Gable, What is ERP?, Information Systems Frontiers, 2(2) (2000) 141-162.
[2] M. Rosemann, ERP software: characteristics and consequences, 7th European Conference on Information Systems, 1999.
[3] P. Fettke, P. Loos, Classification of reference models: a methodology and its application, Information Systems and e-Business Management, 1(1) (2003) 35-53.
[4] M. Rosemann, Application Reference Models and Building Blocks for Management and Control, in Bernus et al. (eds.), Handbook on Enterprise Architecture, Springer, Berlin Heidelberg, 2003, 595-615.
[5] M. Rosemann, W.M.P. van der Aalst, A configurable reference modelling language, Information Systems, 32(1) (2007) 1-23.
[6] W.M.P. van der Aalst, A.H.M. ter Hofstede, YAWL: Yet another workflow language, Information Systems, 30(4) (2005) 245-275.
[7] T. Murata, Petri nets: Properties, analysis and applications, Proceedings of the IEEE, 77(4) (1989) 541-580.
[8] G. Engels, A. Förster, R. Heckel et al., Process modeling using UML, Process-Aware Information Systems, (2005) 85-117.
[9] Object Management Group, Business Process Model and Notation (BPMN) Version 2.0, OMG Final Adopted Specification, Object Management Group, 2011.
[10] G. Keller, A.W. Scheer, M. Nüttgens, Semantische Prozeßmodellierung auf der Grundlage Ereignisgesteuerter Prozeßketten (EPK), Inst. für Wirtschaftsinformatik, 1992.
[11] M. La Rosa, M. Dumas, A.H.M. ter Hofstede et al., Beyond control-flow: Extending business process configuration to resources and objects, Queensland University of Technology, 2007.
[12] M. La Rosa, M. Dumas, A.H.M. ter Hofstede et al., Configurable multi-perspective business process models, Information Systems, 36(2) (2011) 313-340.
[13] I. Reinhartz-Berger, P. Soffer, A. Sturm, Extending the adaptability of reference models, IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 40(5) (2010) 1045-1056.
[14] F. Gottschalk, W.M.P. van der Aalst, M.H. Jansen-Vullers et al., Configurable workflow models, International Journal of Cooperative Information Systems, 17(02) (2008) 177-221.
[15] W. Han, H. Zhang, Configurable Process Modeling Techniques for BPMN, Computer Integrated Manufacturing Systems, 19(8) (2013) 1928-1934.
[16] F. Gottschalk, W.M.P. van der Aalst, M.H. Jansen-Vullers, Configurable process models - a foundational approach, Reference Modeling, Physica-Verlag HD, 2007, 59-77.
[17] M. La Rosa, M. Dumas, A.H.M. ter Hofstede, Modelling business process variability for design-time configuration, in J. Cardoso, W.M.P. van der Aalst (eds.), Handbook of Research on Business Process Modeling, IDEA Group - Information Science Reference, 2009.
[18] A.W. Scheer, ARIS - Business Process Frameworks, Springer, Berlin, 3rd edition, 1999.
[19] W.M.P. van der Aalst, M. Dumas, F. Gottschalk et al., Preserving correctness during business process model configuration, Formal Aspects of Computing, 22(3-4) (2010) 459-482.
[20] A. Dreiling, M. Rosemann, W.M.P.
van der Aalst et al., Model-driven process configuration of enterprise systems, Wirtschaftsinformatik 2005, Physica-Verlag HD, (2005) 687-706. [21] C. Ouyang, W.M.P. van der Aalst, M. Dumas et al., From business process models to process-oriented software systems: The BPMN to BPEL way, http://bpmcenter.org/wpcontent/uploads/reports/2006/BPM-06-27.pdf, 2006. [22] R.M. Dijkman, M. Dumas, C. Ouyang, Formal semantics and analysis of BPMN process models, Technical Report Preprint 7115, Queensland University of Technology, 2007. https://eprints.qut.edu.au/archive/00007115. .
Part V 3D Printing
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-333
3D Printing, a New Digital Manufacturing Mode
Chagen HU 1 and Guofu YIN
School of Manufacturing Science & Engineering, Sichuan University, Chengdu, China

Abstract. This paper introduces the concept of 3D printing technology and reviews its development history. The status quo of 3D printing, both abroad and at home, is examined in depth, and its applications in different fields are elaborated. By analyzing the advantages and the technical barriers of 3D printing, its development tendency is put forward. Finally, the paper points out that 3D printing technology, regarded as a milestone of a future industrial revolution, will bring about far-reaching effects and reforms in modern manufacturing industry, social production modes and the way of human life.

Keywords. 3D printing, additive manufacturing, manufacturing mode, status quo, applications, development tendency
Introduction

The digital age in manufacturing is giving rise to devices that allow us to rapidly customize and change what we design, develop, fabricate and consume. 3D printing, also known as additive manufacturing (AM), has attracted extensive attention around the world and has been called a disruptive technology by The Economist. In recent years, great progress has been made in this field, with new methods and processes appearing constantly. Some 3D printing devices can now realize high-resolution, full-color and multi-material printing [1-2]. Affordable 3D printers have become available to ordinary consumers at home, while professional 3D printing services let everybody manufacture customized products with just a few clicks. Of course, what 3D printing brings us is far more than that: some economists have predicted that it will completely change the current manufacturing mode and ultimately trigger a third industrial revolution [3-7].

1. Concept of 3D printing

Different from conventional manufacturing methods, 3D printing is an emerging manufacturing technology that involves multidisciplinary knowledge, including materials, computer software, mechanical engineering, automation and network information. It is a process of manufacturing arbitrarily shaped objects by depositing material layer by layer on the basis of a 3D digital model file [8]. Just as an ink-jet printer spits out droplets of ink on a piece of paper and creates a picture, a 3D printer can spit out droplets of material and gradually build up a 3D object, as shown in Figure 1.
1 Corresponding author: Chagen Hu, doctoral candidate. Email: [email protected], Tel: +86-02815202864927.
C. Hu and G. Yin / 3D Printing, a New Digital Manufacturing Mode
Figure 1. Working principle of a 3D printer (showing UV lamp and support material).
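The layer-by-layer principle just described can be made concrete with a small code sketch. The following minimal slicer (an illustration written for this text, not part of the paper; the tetrahedron mesh and layer height are invented) intersects each triangle of a mesh with a series of horizontal planes, which is the core geometric step a printer's software performs before depositing material:

```python
# Illustrative sketch: slicing a triangle mesh into horizontal layers,
# the geometric core of layer-by-layer additive manufacturing.

def slice_triangle(tri, z):
    """Intersect one triangle with the plane Z = z.
    Returns a line segment ((x1, y1), (x2, y2)) or None."""
    pts = []
    for i in range(3):
        (x0, y0, z0), (x1, y1, z1) = tri[i], tri[(i + 1) % 3]
        if (z0 - z) * (z1 - z) < 0:       # edge crosses the plane
            t = (z - z0) / (z1 - z0)
            pts.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return tuple(pts) if len(pts) == 2 else None

def slice_mesh(triangles, layer_height):
    zs = [v[2] for tri in triangles for v in tri]
    z, contours = min(zs) + layer_height / 2, []
    while z < max(zs):
        segs = [s for tri in triangles if (s := slice_triangle(tri, z))]
        contours.append((round(z, 6), segs))
        z += layer_height
    return contours

# A single tetrahedron, 1 unit tall, sliced at 0.25 units per layer:
tet = [
    ((0, 0, 0), (1, 0, 0), (0, 1, 0)),   # base
    ((0, 0, 0), (1, 0, 0), (0, 0, 1)),
    ((1, 0, 0), (0, 1, 0), (0, 0, 1)),
    ((0, 1, 0), (0, 0, 0), (0, 0, 1)),
]
layers = slice_mesh(tet, 0.25)
print(len(layers), "layers")   # 4 layers
```

Real slicing software additionally chains these segments into closed contours and generates infill and support paths, but the plane-triangle intersection above is the heart of converting a 3D model file into printable layers.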
Without traditional tools, fixtures and multiple procedures, 3D printing can quickly and accurately fabricate arbitrarily complicated parts, realizing true "free-form fabrication". It can solve many previously intractable technical puzzles in manufacturing history, greatly simplify the manufacturing process, shorten the development cycle and improve production efficiency. At present, the mainstream 3D printing techniques include SLA (Stereolithography Apparatus), SLS (Selective Laser Sintering), FDM (Fused Deposition Modeling), 3DP (Three-Dimensional Printing), LOM (Laminated Object Manufacturing), PolyJet and EBM (Electron Beam Melting) [9-11].
2. History of 3D printing

The history of 3D printing can be traced back to the 19th century, when Americans carried out research on sculpture and landform reproduction techniques, but it did not really begin to develop and come into use until the late 1980s. In 1986, Charles Hull set up the first 3D printing company, 3D Systems, and developed a standard file format, STL. In 1988, the first industrial-grade 3D printer, the SLA-250, based on the stereolithography technique, was released by 3D Systems. In the same year, Scott Crump invented a new, cheaper printing technique, FDM, and established another company, Stratasys. In 1989, Carl R. Deckard, a researcher at the University of Texas at Austin, invented a new process, SLS, in which multiple materials, such as nylon, ceramics and metal, could be used. In 1991, Helisys introduced the first LOM system; however, it was not widely used because of extremely limited material choices. In 1992, an industrial-grade 3D printer based on FDM appeared, marking FDM's entry into the commercial stage, and DTM Corp. released the first SLS printer. In 1993, Emanuel Sachs, a professor at MIT, invented the 3DP technique, similar to 2D ink-jet printing; on the basis of 3DP, Z Corp. began to develop printers. In 1998, Optomec successfully developed LENS (Laser Engineered Net Shaping). In 2001, Solido developed the first-generation desktop 3D printer. In 2003, DMLS (Direct Metal Laser Sintering), based on SLS, came out, using metal binders in place of plastic ones. In 2005, Z Corp. launched the high-precision full-color printer Spectrum Z510, marking 3D printing's entry into the color age. In 2008, the first open-source desktop printer was released, setting off a worldwide wave of interest in 3D printing. In the same year, Objet Geometries released the Connex500™, which could print different materials at the same time.
In the following years, other companies, such as MakerBot, Organovo, Kor Ecologic, Objet, Formlabs and Solid Concepts, brought out their own 3D printers on the basis of the techniques mentioned above.
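The STL format that Charles Hull introduced, mentioned above, is simple enough to illustrate directly. A minimal sketch (written for this text; the solid name and geometry are invented) that emits a one-facet ASCII STL file:

```python
# Minimal sketch of the ASCII STL format: each facet lists a unit
# normal and three vertices between "solid"/"endsolid" markers.

def facet(normal, v1, v2, v3):
    lines = ["  facet normal %g %g %g" % normal, "    outer loop"]
    for v in (v1, v2, v3):
        lines.append("      vertex %g %g %g" % v)
    lines += ["    endloop", "  endfacet"]
    return "\n".join(lines)

def ascii_stl(name, facets):
    body = "\n".join(facet(*f) for f in facets)
    return "solid %s\n%s\nendsolid %s\n" % (name, body, name)

# One triangular facet lying in the XY plane (normal points up, +Z):
stl = ascii_stl("demo", [((0, 0, 1), (0, 0, 0), (1, 0, 0), (0, 1, 0))])
print(stl.splitlines()[0])   # solid demo
```

Production models use the more compact binary STL variant, but the ASCII form above shows why the format became a de facto interchange standard: it reduces any shape to a bare list of triangles.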
3. Status quo of 3D printing technology

3D printing has drawn attention all over the world since its rise; news and reports appear constantly on the internet, in newspapers and in other media. In 2012, TIME ranked 3D printing as one of the fastest growing industries in the USA, while The Economist predicted that 3D printing, together with other digital production modes, would lead to a new industrial revolution [7]. After years of rapid development, 3D printing has achieved remarkable progress in print precision, print resolution, print speed, and the range of materials and colors.

3.1. Overseas status quo of 3D printing

Research on 3D printing in the US and Europe began at an early stage, and after years of exploration and development great progress has been made; the US has become the clear leader in this field. Some devices can now print layers 0.01 mm thick at a resolution of 600 dpi, some can print at a speed of 25 mm/h, and full-color printing has become a reality. In the 3D printing industry, 3D Systems and Stratasys occupy the vast majority of market share and are gradually becoming giants through mergers and technology integration. Recently, some developed countries have treated 3D printing as a new strategic industry, formulating relevant strategies and investing substantial funds to accelerate industrialization. On March 9, 2012, Barack Obama put forward a plan for revitalizing American manufacturing and proposed to Congress the National Network for Manufacturing Innovation (NNMI), intended to help the US regain its dominant role in manufacturing and halve future product development time and cost, with the ultimate goals of bringing more Americans back to work and promoting sustainable economic development.
3.2.
Domestic status quo of 3D printing

In China, the study of 3D printing technology began in the early 1990s. Research institutions mainly included Tsinghua University, Xi'an Jiaotong University, Huazhong University of Science and Technology, Beihang University and Beijing Longyuan, among others. Their research areas varied but mainly focused on basic processes and materials: for example, Xi'an Jiaotong University focused on SLS equipment and materials, Beihang University on SLS equipment, South China University of Technology on SLM, Tsinghua University on EBM, and Huazhong University of Science and Technology and Nanjing University of Aeronautics and Astronautics on SLS. A rich store of basic theory and critical techniques has been accumulated after years of research in China. Industrialization achievements have also been made, some of which have even reached the leading level in the world. For example, Beihang University, together with Northwestern Polytechnical University, successfully solved the puzzles of laser forming large crucial components made of ultra-high-strength steel or titanium alloy by adopting metal deposition forming; this technique has been applied to transport airplanes, commercial airliners, etc. As another example, a high-precision desktop 3D printer developed by the State Key Laboratory of Management and Control for Complex Systems of CASIA realized the shaping of photosensitive resin by digital light processing, with a minimum layer thickness of just 25 µm [9].
The Chinese government has also attached great importance to 3D printing. It has been listed both in the manufacturing 2014 annual project guide of the National Science and Technology Support Plan and in the National High-tech Research and Development Program. To propel industrialization and marketization, speed up international communication and accelerate integration with existing manufacturing technology, the China 3D Printing Technology Industry Alliance, hosted by the Asian Manufacturing Association, was founded in October 2012, marking 3D printing's entry into a stage of rapid development in China. Compared with developed countries, however, many aspects, such as print precision, print speed, print size and software support, still cannot meet commercial demands, and some pivotal techniques need further improvement.
4. Applications in Different Manufacturing Fields

3D printing has penetrated into many aspects of daily life and product production, involving fields such as industrial manufacturing, toy design, aerospace, biomedicine, military weapons, education, food and archaeological research [12-13]. The main areas of application are as follows.

4.1. Industrial manufacturing

The emergence of 3D printing has brought traditional manufacturing into a new age. It will drastically change product design and production procedures: product prototypes no longer need to be made by hand, being replaced by more efficient, accurate and cheaper digital fabrication. It was reported, as shown in Figure 2, that some parts of the One:1, a luxury supercar exhibited in Beijing in 2014, were produced by 3D printers, which shortened the development cycle and reduced cost.
Figure 2. Super roadster One:1 (picture from zol.com.cn).
4.2. Aerospace

Aerospace is one of the main targets of industrial application. Laser-based metal additive processes can now fabricate some metal parts directly. For example, the State Key Laboratory of Solidification Processing at Northwestern Polytechnical University has successfully solved the puzzles of fabricating large titanium alloy components and produced the central flange of the C919 aircraft. Another company, Morris Technologies, adopted SLS to fabricate aircraft engine parts, shown in Figure 3, greatly reducing material waste.
Figure 3. 3D model of an engine (picture from snecma.com).
4.3. Biomedicine

3D printing also has a profound influence on biomedicine, opening up a new situation in the biomedical field. According to media reports, some research institutes have printed products such as drugs, embryonic stem cells, organs and bones for application in related fields [14-15]. In early 2013, European doctors and engineers used a printed artificial jaw to replace damaged bone, helping the patient recover successfully. At the same time, German researchers used the technique to fabricate biologically compatible artificial vessels [13]. Figure 4 shows a lower jawbone model produced by a 3D printer.
Figure 4. Lower jawbone model (picture from xilongtoy.com).
4.4. Architectural engineering

The significance of 3D printing is obvious for urban planners and architects. It will completely change the way models have been made: instead of building solid architectural models from all kinds of foam materials, we only need to print out the ideal digital model with a 3D printer. For architects, however, 3D printing is not only a model-making tool. With breakthroughs in build dimensions and materials, printing a house in the future is no fable. Enrico Dini, an Italian inventor, has developed a huge 3D printer that can print with sand, though its application prospects are limited by build-size restrictions. Now, Dutch architects have set out to print the biggest 3D-printed house in the world, as shown in Figure 5, using a printer named the KamerMaker ("room maker") over three years.
Figure 5. The world's first 3D-printed house (picture from yokamen.cn).
4.5. Archaeology

3D printing can not only help us create the future but also help us reshape the past, and it has become a most effective tool for archaeological researchers. As shown in Figure 6, a skeleton model of a tyrannosaurus was assembled from hundreds of bones printed on a desktop 3D printer produced by MakerBot. 3D printing lets us know more about the past, what happened and what species existed on the earth in ancient times; things never seen before will no longer be mysterious.
Figure 6. Skeleton model of a tyrannosaurus (picture from bbs.cnliti.com).
4.6. Food processing

With the continuous emergence of new materials and process innovations, it is now possible to produce food with 3D printers. Some foreign manufacturers can print foods such as chocolate, cookies and cakes, as shown in Figure 7, ready to be enjoyed directly. Imagine coming home from work and enjoying a delicious meal printed by a 3D printer in a few minutes.
Figure 7. Food printed by a 3D printer (picture from zol.com.cn).
Beyond the applications mentioned above, many more products, including shoes, toothbrushes, jewelry, cell phones, instruments, toys and other consumer goods, as well as education, testify to the significance of 3D printing technology.
5. Advantages and Existing Problems

5.1. Advantages

Compared with the traditional manufacturing mode, 3D printing has many advantages.

(1) Lower threshold

Compared with other manufacturing techniques, anybody can manufacture what he wants using a 3D printer. Without mastering complicated processes and operating skills, one only needs to design the model on a computer, convert it into a digital file (STL format), and then send the file to the 3D printer. Just as low barriers popularized the personal computer, 3D printing will likewise spread among ordinary people.

(2) Individual customization

With its layer-by-layer, cumulative fabrication, manufacturing is no longer confined to simple products. Individual customization is easy to realize, which means that people can get objects of any shape they want. What's more, it is much easier for designers to exert their imagination and devise various products through personal customization. Imagine how exciting it would be to walk down the street with a truly unique cellphone.

(3) Cost advantage
In traditional manufacturing, the more complicated the product is, the higher its cost. With 3D printing, however, complexity adds almost no production cost: the cost of printing one part equals the average cost of a batch. Experts have predicted that 3D printing will ultimately break the conventional pricing model and change the way fabrication cost is calculated [12]. Meanwhile, one 3D printer can print products of different shapes, greatly saving the cost of training and of purchasing new equipment, and breaking the restriction of one device producing only a limited range of products. Besides, 3D printing does not rely on expensive special-purpose equipment (e.g., machine tools or moulds), which also saves development cost.

(4) Timeliness

A 3D printer can realize on-demand printing according to customers' requirements, which means that anyone can get the product they want at any time and in any place, as long as a network and a 3D printing device are available, reaching zero stock in a real sense. At the same time, it can effectively reduce inventory and transportation costs and shorten the development cycle.

(5) Lower material waste and cleaner production

In traditional manufacturing, excess material must be removed (e.g., by lathing, milling, grinding or drilling), so the material utilization ratio is very low. What's more, the large amounts of solid and liquid waste generated during the procedure pollute the environment seriously. 3D printing basically avoids this: its material utilization ratio is very high, usually above 95% and sometimes reaching 100%.

(6) No assembly

In traditional manufacturing, products are assembled from components by workers or robots; the more components, the more time and cost are needed.
3D printing can directly fabricate components, or even an entire product with its assembly relationships, which eliminates the assembly step and saves labor cost and transportation fees.

5.2. Technological barriers

3D printing, however, does not proceed smoothly; it faces many difficulties and challenges, and further study is necessary.

(1) Material

Material is a major bottleneck in the development of 3D printing. Of all the materials in the world, the types of consumables currently usable in 3D printing are very limited, including only plastics, metal powders, plaster, nylon and photosensitive resin, which constrains the wide application and popularization of 3D printing; material type thus plays a decisive role in its application and development. Although development cost has dropped substantially, the prices of consumables remain high: some materials (e.g., ABS, PLA and PVA) cost no less than a few hundred RMB per kilogram, and some, photosensitive resin for example, cost several thousand or even ten thousand RMB per kilogram. Developing new materials, expanding the range of material applications and reducing material cost have inevitably become main goals of 3D printing.

(2) Print quality
The quality of a part directly affects the function of the product, so print quality is a fundamental prerequisite for wide application. Due to the restrictions of materials and craft, the precision, strength and toughness of printed products can at present hardly meet high requirements; applications mainly focus on fields such as models, toys and experimental verification, far from advanced fields like automotive and aerospace. Bettering print quality is therefore another important goal for the future.

(3) Print speed

Print speed is one of the most important factors in efficiency. It is affected by the size, solidity, precision, material and structure of the product. For the same product, print time varies with precision: high precision costs more time, low precision less. What's more, for metal parts print speed directly affects the internal crystal structure, and thus the strength of the part. Hence, improving print speed is becoming one of the crucial technical difficulties.
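The precision-speed trade-off described above can be illustrated with back-of-envelope arithmetic (all numbers invented for illustration): since a part is built layer by layer, halving the layer height roughly doubles the layer count, and with it the print time:

```python
# Rough illustration (invented figures): print time grows inversely
# with layer height, one reason precision and speed trade off.

def print_time_hours(part_height_mm, layer_height_mm, secs_per_layer):
    layers = round(part_height_mm / layer_height_mm)
    return layers * secs_per_layer / 3600.0

coarse = print_time_hours(50, 0.10, 20)   # 500 layers
fine = print_time_hours(50, 0.05, 20)     # 1000 layers
print(round(coarse, 2), round(fine, 2))   # 2.78 5.56
```

Real machines complicate this picture (travel moves, per-layer overhead, variable extrusion rates), but the inverse relationship between layer height and build time holds in practice.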
6. Development tendency of 3D printing

Although 3D printing is booming now, many problems beyond the factors mentioned above remain to be solved. How will it develop? This paper holds that 3D printing will develop as follows.

6.1. Multi-material and multi-head printing

Current 3D printers can only print with one material, or with a few materials of similar attributes, and carry no more than eight spray heads. Under these circumstances, developing a printer compatible with multiple materials becomes necessary, which can greatly reduce the number of device types and save space. Of course, an increase in material types will require a corresponding increase in spray heads, so developing multi-head devices is also essential.

6.2. Intellectualization

At present, most 3D printing techniques run as a "blind", open-loop process, yet the process is very complicated: when an abnormality occurs, the running system cannot identify it immediately and adjust automatically, and without manual intervention forming can hardly be completed. This paper regards intelligent control as imperative: it can adjust process parameters in time when abnormalities occur, making the complicated process simpler and guaranteeing the popularization of 3D printing.

6.3. Popularization

With increasingly mature techniques and a continuous drop in cost, especially the appearance of desktop 3D printers, 3D printing is no longer exclusive to certain fields. It will become more and more popular in daily life and other application fields, with 3D printers seen everywhere. Given the popularization of the technique and the particularity of its production mode, 3D printing will ultimately change into a "social
fabricating" mode [13], meaning that consumers can directly participate in the fabricating procedure, and personalized, real-time and economical patterns of production and consumption will become reality.
7. Conclusion

In brief, the application of 3D printing has begun to revolutionize many aspects of our life, from rapid manufacturing and product customization to biomedical devices, as discussed and demonstrated above. There is no doubt that 3D printing will change the existing industrial structure and generate new commercial modes, with far-reaching influence on the world economy. We should be aware that 3D printing not only has an impact on traditional industries but can also produce numerous new industries and opportunities. This paper therefore holds that 3D printing will ultimately bring people into a more wonderful world. We look forward to that day.
Acknowledgements

This work was supported by the Technology R&D Program of Sichuan Province, China (No. 2014GZX0001).
References

[1] http://www.3dsystems.com/.
[2] http://www.stratasys.com.cn/.
[3] A. McLay, Re-reengineering the dream: agility as competitive adaptability, Int. J. Agile Systems and Management, 7(2) (2014), 101-115.
[4] C. Barnatt, 3D Printing: The Next Industrial Revolution, ExplainingTheFuture.com, 2013.
[5] P. Marsh, The New Industrial Revolution, China CITIC Press, Beijing, 2013.
[6] J. Rifkin, The Third Industrial Revolution, China CITIC Press, Beijing, 2013.
[7] C. Anderson, Makers: The New Industrial Revolution, China CITIC Press, Beijing, 2013.
[8] L. Mertz, New world of 3-D printing offers "completely new ways of thinking", IEEE Pulse, (2013), 12-14.
[9] H. Wu, 3D Printing: Three-Dimensional Creation via Intelligent Digitization, Publishing House of Electronics Industry, Beijing, 2014.
[10] I. Budmen, A. Rotolo, The Book on 3D Printing, CreateSpace, 2013.
[11] G. Wang, X. Wang, 3D Printing Technology, Huazhong University of Science & Technology Press, Wuhan, 2013.
[12] H. Lipson, M. Kurman, Fabricated: The New World of 3D Printing, John Wiley & Sons, 2013.
[13] S. Guo, Z. Lv, 3D Printing: A New Wave Changing the World, Tsinghua University Press, Beijing, 2013.
[14] M. Qin, Y. Liu, J. He, et al., Application of digital design and three-dimensional printing technique on individualized medical treatment, Chinese Journal of Reparative and Reconstructive Surgery, 28 (2014), 377-382.
[15] W. Wu, Q. Zheng, X. Guo, The controlled-releasing drug implant based on the three-dimensional printing technology: fabrication and properties of drug releasing in vivo, Journal of Wuhan University of Technology-Mater. Sci. Ed., (2009), 977-981.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-343
Combining 3D Printing and Electrospinning for the Fabrication of a Bioabsorbable Poly-p-dioxanone Stent
Yuanyuan LIU a,1, Ke XIANG a, Yu LI a,b, Haiping CHEN a, Qingxi HU a
a Rapid Manufacturing Engineering Center, Shanghai University, Shanghai 200444, China
b School of Mechanical and Power Engineering, Henan Polytechnic University, Jiaozuo 454000, Henan, China

Abstract. Because of their good radial strength and small volume, metal stents are widely applied in treating vascular disease. However, because a metal stent fixes the diameter of the treated vessel, it is not appropriate for growing children; a bioabsorbable vascular stent (BVS) is the ideal choice for them. Aiming at overcoming the defects in mechanical properties of existing vascular stents, a sliding-lock bioabsorbable poly-p-dioxanone (PPDO) vascular stent, fabricated by combining 3D printing and electrospinning technology, is proposed herein. Experimental results show that, for a constant stent thickness, the stent combined with electrospinning is clearly better than one fabricated by 3D printing alone, and it is also very favorable for the growth and proliferation of cells. This fabrication process lays the foundation for further application of the stent in treating vascular disease in children.

Keywords. 3D printing, absorbable vascular stent, electrospinning, mechanical property, composite forming
Introduction

Cardiovascular disease is one of the leading causes of death worldwide [1]. Because it is minimally invasive and efficient, interventional therapy has become a primary method for treating coronary heart disease [2]. Metal stents have good radial strength and have been widely used in clinics [3]. Although these materials have good mechanical properties and biocompatibility, they usually are not biodegradable. Implantation fixes the diameter of the blood vessel, which is not suitable for children's vascular growth and may affect the further development of the vessel [4,5]. This situation often requires reoperation, which limits the application of metal stents in pediatric patients. In addition, the stenosis and thrombosis rates of metal stents are also high [6]. Because it degrades, a BVS disappears gradually and can therefore avoid the above phenomena [7]. BVS is thus ideal for pediatric patients.
1 Corresponding author. E-mail: [email protected].
Y. Liu et al. / Combining 3D Printing and Electrospinning
At present, preparation methods for bioabsorbable polymer vascular stents include 3D printing, weaving, laser engraving, coating and so on. 3D printing is a process for producing 3D solid objects of virtually any shape from a digital model using a layer-by-layer (LBL) process [8]. It has received much attention in the biomedical field in recent years for its many potential applications [8], and it provides new theoretical and technical support for BVS fabrication. Bioabsorbable polymeric materials, however, often cannot achieve the toughness and elasticity of metallic materials. BVS currently remains at an initial development stage, mostly prepared from bioabsorbable polymers, whose strength is far less than that of metal stents. SUN Kun et al. [9] designed a sliding-lock PPDO stent for children with congenital stenosis, produced by 3D printing (fused deposition modeling, FDM). B. Stepak et al. [6] adopted laser engraving to prepare vascular stents. Ligang et al. [10] reported a home-made PPDO monofilament weft-woven tubular scaffold for treating intestinal stenosis lesions. All the stents mentioned above were prepared from bioabsorbable polymers; because their strength is relatively poor compared with metal stents, clinical application is limited. If the strength were to be increased by thickening the stent, the result would be a large initial diameter and poor expansibility; unable to reach a satisfactory balance between mechanical properties and deliverability, such a stent cannot meet the clinical needs of children with congenital vascular stenosis. The electrospinning technique has important applications in biomedicine.
Nanofibers have received considerable attention in the tissue engineering field because of their distinctive properties, including a high surface-area-to-volume ratio and biomimicry of the structure and functions of the extracellular matrix of human body tissues [11]. In this study, in order to offset the defects of a bioabsorbable polymer vascular stent prepared by 3D printing alone, nanofibers were added to its surface by electrospinning to increase toughness and to inhibit abnormal proliferation of vascular endothelium in the stent.

1. Test platform combining 3D printing with electrospinning for the fabrication of BVS

In order to combine the two processes of 3D printing and electrospinning effectively, the proposed complex forming system includes a motion platform subsystem, a multi-temperature-field controlling subsystem, a high-voltage electric field controlling subsystem, a Taylor cone monitoring subsystem, a 3D printing feeding subsystem and an electrospinning feeding subsystem, as shown in Figure 1.

The motion platform subsystem, a typical three-axis gantry mechanism under PMAC motion control, receives the forming material from the nozzle along a planned path. The multi-temperature-field control subsystem stabilizes the material temperature at the nozzle; an appropriate temperature ensures that the material at the nozzle has good fluidity, which is favorable for preparing the stent by 3D printing. The high-voltage electric field controlling subsystem supplies high-voltage power for composite-forming electrospinning to meet the requirements of the electrospinning
Y. Liu et al. / Combining 3D Printing and Electrospinning
345
forming process. The voltage in this subsystem can be rapidly regulated over the range of 0–50 kV.
Figure 1. Components of 3D printing biological complex forming system.
The main function of the Taylor cone monitoring subsystem is to achieve real-time monitoring of the Taylor cone form, thereby monitoring the spinning process. This subsystem is equipped with a high-speed CCD camera, and the processed information is sent to the control system for adjusting the process parameters. The 3D printing feeding subsystem is mainly used for preparing the macro part of the stent, while the electrospinning feeding subsystem is primarily used for preparing the micro part.

2. Test for combining 3D printing with electrospinning for the fabrication of BVS

2.1. Experimental methods

The test platform shown in Figure 2, built by our team, is based on the combination of 3D printing and electrospinning technology to prepare the BVS.
Figure 2. Test platform of biological 3D printing complex forming vascular stents.
Experimental materials and process parameters are as follows. The material used for preparation of the stent is granular poly-p-dioxanone (Evonik Röhm AG, Germany); molecular formula: -(C4H6O3)n-. Poly-p-dioxanone (PPDO) is a kind of aliphatic polyester-polyether with excellent biodegradability, biocompatibility and bioabsorbability. It has already been approved by the FDA as the base material of medical absorbable sutures (trade name PDS), and it also has application potential in orthopedic fixation materials, drug carriers, etc. [12]. To ensure that the vascular stents have a uniform pore structure, the forming process parameters, h (layer height) and λ (scanning line pitch between formed fibers), are set via the pre-processing module of the biological 3D printing complex forming system, which generates a machine path document following the set parameters. Based on the path documents obtained and on previous experience, and in order to ensure that the system can prepare satisfactory BVS, the post-processing module sets the feeding speed to 2.5 mm/min and the platform velocity to 40 mm/min; the nozzle diameter of the 3D printing feeding subsystem is 0.51 mm. The experimental material for the composite electrospinning is a mixed PVA-chitosan solution, with acetic acid and water used as solvents. Firstly, PVA (grade JP233, degree of polymerization 3500, alcoholysis degree 88%, Kuraray Company of Japan, Ltd.) was dissolved in hot water at 8 wt%; this solution was heated to boiling on a magnetic stirrer and stirred until completely dissolved. Secondly, chitosan (viscosity-average molecular weight Mη = 112×10⁵, degree of deacetylation 82.5%, Zhejiang Golden-Shell Biochemical Co., Ltd.) was dissolved in 10% acetic acid solution.
Finally, the PVA solution and the chitosan solution were mixed at a volume ratio of 2:1 and stirred well. To obtain high-quality electrospinning, the distance from the nozzle to the receiving plate was set to 150 mm, the high-voltage DC power supply was set to 15 kV, and the solution was fed by a micropump through a 26G needle at a rate of 20 μl/min.

2.2. Fabrication of vascular stents

The test platform for biological 3D printing complex forming of vascular stents shown in Figure 2 was used to fabricate the stents. Briefly, PPDO was loaded into a stainless steel syringe and heated by an electric wire. When the polymer reached the molten phase, it was extruded through a nozzle and deposited on a continuously moving, computer-controlled platform. The vascular stent was fabricated by depositing PPDO fibers along the predefined path (Figure 3). Type I stents were fabricated using 3D printing technology alone. On this basis, the system automatically turns on the micropump and the high voltage when the motion platform subsystem moves the receiving board below the electrospinning nozzle, and nanofibers are added onto the surface of the Type I stents by electrospinning. Type II stents were fabricated by this complex forming technology.
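For reference, the forming and spinning settings above can be collected into a single configuration record. This is a minimal sketch: the numeric values are transcribed from the experiment description, while the field names and the range-check helper are our own illustration, not part of the authors' control software.

```python
# Hypothetical parameter records for the combined process. Values are taken
# from the experiment description; the field names are illustrative only.
PRINT_PARAMS = {
    "feed_speed_mm_per_min": 2.5,      # 3D printing feeding subsystem
    "platform_speed_mm_per_min": 40,   # motion platform velocity
    "nozzle_diameter_mm": 0.51,
}

ESPIN_PARAMS = {
    "collector_distance_mm": 150,      # nozzle to receiving plate
    "voltage_kV": 15,                  # high-voltage DC supply setting
    "needle": "26G",
    "feed_rate_ul_per_min": 20,        # micropump feed rate
}

def voltage_in_range(params, lo_kV=0, hi_kV=50):
    """Check the set voltage against the supply's stated 0-50 kV range."""
    return lo_kV <= params["voltage_kV"] <= hi_kV
```

The range check mirrors the 0–50 kV regulation range stated for the high-voltage subsystem.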
Figure 3. Schematic illustration of a PPDO BVS with the sliding-lock structure (1. framework, 2. barbs, 3. lamellar mesh structure).
2.3. Experimental results

To compare the mechanical properties of bioabsorbable PPDO stents prepared by 3D printing technology alone with those prepared by the complex forming technology, the two kinds of stents were prepared, each with two macro layers. Figure 4(a) shows a Type I stent made using 3D printing alone. The length of the stent is 40 mm and its width is 20 mm; the width of the stent struts and of the pores between them is approximately 0.8 mm. Figure 4(b) is a local enlarged view of a Type I stent, obtained with an image measuring instrument (Suzhou Yixin Photoelectric Technology Co., Ltd). Figure 5(a) shows a Type II stent prepared by the complex forming technology, and Figure 5(b) is a local enlarged view; compared with the Type I stent, many nanofibers adhere to the Type II stent.
Figure 4(a). Type I stents.
Figure 4(b). Local enlarged view of a Type I stent.
Figure 5(a). Type II stents.
Figure 5(b). Local enlarged view of a Type II stent.
2.3.1. Tensile strength test

The tensile strength was defined as the maximum stress during the tensile test until fracture [13]. Based on data from 10 samples of each group, the tensile strength was measured and averaged: 16.5±1.3 MPa for Type I stents and 17.8±1.6 MPa for Type II stents. The tensile strength of Type II stents is thus slightly greater than that of Type I stents.

2.3.2. Radial strength test

The radial strength of a stent is its resistance to radial outer pressure and is one of a stent's most important technical indicators. The experiment was conducted on an RX550 radial strength tester (INSTRON). The test chamber temperature was 37 °C, the compression ratio 50%, and the compression speed 0.1 mm/s. According to the ISO 13485 standard, when a stent is compressed to 88% of its diameter, the radial strength is the maximum value supported by the stent. Based on data from the same 10 samples, the radial strength of Type I stents is 121±14.5 kPa and that of Type II stents is 124±13.6 kPa. Similarly to the tensile strength, the radial strength of Type II stents is slightly greater than that of Type I stents, while the thicknesses of the two kinds of stents show no significant difference. The test results for the mechanical properties of the two groups of stents are shown in Table 1.

Table 1. Test results on mechanical properties of the two groups of stents

Stent type       Tensile strength (MPa)   Radial strength (kPa)
Type I stents    16.5±1.3                 121±14.5
Type II stents   17.8±1.6                 124±13.6
2.4. Experimental discussion

Type II stents have good mechanical strength because, in addition to the 3D-printed framework, the nanofibers attached on the stent increase the overall toughness. The literature [14] confirmed that the orientation of nanofibers plays a guiding role in cell growth, and electrospun nanofibers can simulate the extracellular matrix environment [15]. According to studies, the best aperture for vascular endothelial cell growth is between 20-60 μm. The advantages of the composite stents (Type II) are that they are conducive to cell growth and proliferation and can speed up the repair of vascular stenosis.

3. Conclusions

BVS can overcome the shortcomings of metal stents, including their inability to degrade and poor biocompatibility, but bioabsorbable polymeric materials often have difficulty achieving the strength, toughness, and elasticity of metals. In this paper, we found that the radial strength and tensile strength of stents prepared by combining 3D printing and electrospinning technology are considerably improved over Type I stents prepared by 3D printing alone. In addition, the nanofiber-coated Type II stents are conducive to cell growth and proliferation and to accelerating the repair of stenosis. Our next task is vascular repair experiments in animals.

Acknowledgements

This study is partly supported by the National Natural Science Foundation of China (51375292) and the National Youth Foundation of China (51105239). It is also financially supported by the Henan provincial key discipline of Mechanical Manufacturing and its Automation (PMTE201305A).

References

[1] Yu Wei, Ying Ji, Lin-Lin Xiao, Quan-kui Lin, Jian-ping Xu, Ke-feng Ren, Jian Ji, Surface engineering of cardiovascular stent with endothelial cell selectivity for in vivo re-endothelialisation, Biomaterials 34 (2013), 2588-2599.
[2] X. Wang, H.Q. Feng, W.W. Wang, R.M. Zhang, Y.L. Cheng, Research on biomechanics properties for balloon-expandable intracoronary stents, Chinese Journal of Biomedical Engineering (2013), 203–210.
[3] H.M. Nef, H. Möllmann, M. Weber, Cobalt-chrome multi-link vision-stent implantation in diabetics and complex lesions: results from the DaVinci-Registry, Clin Res Cardiol 98 (2011), 731–737.
[4] A. Konig, J. Rieber, Influence of stent design and deployment technique on neointima formation and vascular remodeling, Z Kardiol 91 (2002), 98–102.
[5] R. Waksman, Update on bioabsorbable stents: from bench to clinical, Interven Cardiol 19 (2006), 414.
[6] B. Stepak, A.J. Antonczak, M. Bartkowiak-Jowsa, J. Filipiak, C. Pezowicz, K.M. Abramski, Fabrication of a polymer-based biodegradable stent using a CO2 laser, Archives of Civil and Mechanical Engineering 14 (2014), 317–326.
[7] G. Ghimire, J. Spiro, R. Kharbanda, Initial evidence for the return of coronary vasoreactivity following the absorption of bioabsorbable magnesium alloy coronary stents, EuroIntervention 4 (2009), 481–484.
[8] Falguni Pati, Jin-Hyung Shim, Jung-Seob Lee, Dong-Woo Cho, 3D printing of cell-laden constructs for heterogeneous tissue regeneration, Society of Manufacturing Engineers 1 (2013), 49–53.
[9] Qimao Feng, Wenbo Jiang, Kun Sun, Mechanical properties and in vivo performance of a novel sliding-lock bioabsorbable poly-p-dioxanone stent, J Mater Sci: Mater Med 22 (2011), 2319–2327.
[10] Gang Li, Ping Lan, Jiashen Li, Biodegradable weft-knitted intestinal stents: fabrication and physical changes investigation in vitro degradation, J Biomed Mater Res Part A (2013), doi: 10.1002/jbm.a.34759.
[11] Ho-Wang Tong, Xin Zhang, Min Wang, A new nanofiber fabrication technique based on coaxial electrospinning, Materials Letters 66 (2012), 257-260.
[12] K.K. Yang, Y.Z. Wang, A recyclable and biodegradable polymer: poly(p-dioxanone), Materials China 30 (2011), 25–34.
[13] Fengxuan Han, Xiaoling Jia, Dongdong Dai, Xiaoling Yang, Jin Zhao, Yunhui Zhao, Yubo Fan, Xiaoyan Yuan, Performance of a multilayered small-diameter vascular scaffold dual-loaded with VEGF and PDGF, Biomaterials 34 (2013), 7302-7313.
[14] Zhang Qing, Feng Jie, Zheng Yi-xiong, Zhong Ming-qiang, Fabrication of a tubular vascular scaffold with circumferential microchannels to induce oriented growth of smooth muscle cells, Chinese Journal of Tissue Engineering Research 16 (2013), 5417-5422.
[15] Huang C, Niu H, Wu C, Ke Q, Mo X, Lin T, Disc-electrospun cellulose acetate butyrate nanofibers show enhanced cellular growth performances, J Biomed Mater Res Part A 101 (2013), 115–122.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-351
Optimization of Process Parameters for Biological 3D Printing Forming Based on BP Neural Network and Genetic Algorithm
Zhenglong JIANG, Yuanyuan LIU 1, Haiping CHEN and Qingxi HU
Rapid Manufacturing Engineering Center, Shanghai University, Shanghai 200444
Abstract. As one of the rapid prototyping technologies, the biological 3D printing forming process is used to prepare three-dimensional scaffolds for tissue engineering. Its complexity and the instability of its processing environment make it difficult to form the three-dimensional internal pore structure of a bone scaffold, so it is necessary to optimize the process parameters. In this paper, an orthogonal experiment provides the training samples for a Back Propagation (BP) neural network that establishes the nonlinear relationship between bone scaffold filament width and the process parameters; the process parameters are then optimized by a Genetic Algorithm (GA) to obtain the optimal combination for the biological 3D printing forming process. Bone scaffold forming experiments show that this process parameter optimization method based on a BP neural network and GA is feasible and helps to obtain good-quality bone scaffolds.

Keywords. Biological 3D Printing Forming; Filament Width; BP Neural Network; Genetic Algorithm
Introduction

3D printing is a new manufacturing technology based on the discrete/accumulation forming principle, and it is a high-technology industry full of vitality and huge market potential [1]. Under computer control, this digital molding technology can rapidly manufacture 3D objects of any complex shape through the accurate 3D accumulation of material on the basis of a CAD model of the object. With the continuous development of 3D printing technology, especially in recent years, it is frequently applied in biomedical preparation. The biological 3D printing forming process can readily prepare regular and diverse internal pore structures for three-dimensional tissue engineering scaffolds. However, due to the randomness of the biological 3D printing forming process and the unstable processing environment, scaffold quality is often poor, so it is particularly important to study its process parameters. There are many existing studies of process parameter optimization at home and abroad: Liu et al. [2] researched the LDM process, predicting bone scaffold filament width with an improved BP neural network; Tang et al. [3] optimized a BP neural network with a genetic algorithm to
Corresponding Author: Yuanyuan Liu, Rapid Manufacturing Engineering Center, Shanghai University, Shanghai 200444, China; E-mail:
[email protected].
352
Z. Jiang et al. / Optimization of Process Parameters for Biological 3D Printing Forming
predict plasma propofol concentrations; Pan et al. [4] used a BP neural network combined with a genetic algorithm to predict wind power output; Tian et al. [5] used a BP neural network and genetic algorithm to accurately predict TIG weld dimensions; Liu et al. [6] optimized the parameters of 3D tissue engineering scaffolds made by rapid freezing forming technology, acquired the optimal parameters, verified them with a gelatin-chitosan experiment, and succeeded in producing multiple bone scaffolds. In short, research on parameter optimization methods for the biological 3D printing forming process is still imperfect, scaffold quality is not very good, and existing work uses only single, simple optimization methods. Accordingly, this paper uses an orthogonal experiment to generate training samples for a BP neural network, introduces a Genetic Algorithm for global search optimization to obtain the optimal parameter combination for biological 3D printing forming, and verifies the parameter combination through experiment, obtaining a good-quality scaffold.
1. BP Neural Network

1.1. BP Neural Network Model Structure

A BP neural network (back propagation network) is a multilayer feed-forward neural network composed of an input layer, hidden layer and output layer, trained with the error back propagation algorithm [7]. In forward propagation, the input information transfers from the input layer through the hidden layer to the output layer. If the output layer does not produce the desired output, propagation reverses direction: the error signal is transferred back along the original path and the connection weights between nodes in each layer are modified. The network parameters are adjusted repeatedly until the error function reaches a minimum. A three-layer BP neural network model is shown in Figure 1, where X1, X2, …, Xn are the inputs of the BP network model and Y1 is the output value. A BP neural network is a nonlinear mapping from input to output, and its purpose is to find the mapping relationship between X and Y. As shown in Figure 2, the BP neural network implementation process is composed of four parts [8-10]: input model, network training, network testing, and model output. The network has 4 input neurons, corresponding to the four parameters platform movement speed L1, extrusion speed L2, nozzle diameter L3 and fiber spacing L4; the output layer is the bone scaffold filament width y. Between the input layer and hidden layer the tangent sigmoid transfer function (tansig) is used, while between the hidden layer and output layer a linear transfer function (purelin) is adopted. There is no exact method to calculate the number of hidden layer nodes, but based on experience the following empirical formula [11] can be used:
n = √(nᵢ + m) + D    (1)

Here n is the number of hidden layer nodes, nᵢ is the number of input nodes, m is the number of output nodes, and D is a constant between 1 and 10.
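As a minimal sketch, Eq. (1) can be evaluated directly; for this network (4 inputs, 1 output) it suggests roughly 3 to 12 hidden nodes depending on the choice of D. The helper name is our own.

```python
import math

def hidden_nodes(n_inputs, n_outputs, D):
    """Empirical hidden-layer size from Eq. (1): n = sqrt(n_i + m) + D."""
    if not 1 <= D <= 10:
        raise ValueError("D should be a constant between 1 and 10")
    return round(math.sqrt(n_inputs + n_outputs) + D)

# For the 4-input, 1-output network of this paper:
sizes = [hidden_nodes(4, 1, D) for D in (1, 5, 10)]  # [3, 7, 12]
```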
Figure 1. BP neural network model

Figure 2. The process of BP neural network
1.2. The Selection of Training Samples and Testing Samples

This experiment was carried out on the tissue scaffold experimental platform of the Rapid Manufacturing Engineering Center of Shanghai University. The experimental material is a mixed solution of 20% gelatin and 4% sodium alginate. The value ranges of the process parameters sampled in this article are shown in Table 1; the learning and testing samples of the BP neural network model are produced within these ranges.

Table 1. Process parameter value ranges

Factor   Process parameter                     Value range
1        Platform movement speed (L1) / mm/s   16–23
2        Extrusion speed (L2) / mm/s           14–22
3        Nozzle diameter (L3) / mm             0.4–1.0
4        Fiber spacing (L4) / mm               1.6–2.2
Orthogonal experimental design [12] features "uniform dispersion" and "neat comparison" and is a multi-level, high-efficiency, economical test method. In this paper, an orthogonal design is used to determine the training sample points: four levels are selected for each factor to form an L16(4⁵) orthogonal test table. The training data of the network model are shown in Table 2.

Table 2. The network model training data

Number   L1   L2   L3    L4    y/mm
1        16   14   0.4   1.6   0.50
2        16   17   0.6   1.8   0.67
3        16   19   0.8   2.0   0.75
4        16   22   1.0   2.2   0.81
5        19   14   0.6   2.0   0.72
6        19   17   0.4   2.2   0.60
7        19   19   1.0   1.6   0.44
8        19   22   0.8   1.8   0.52
9        21   14   0.8   2.2   0.93
10       21   17   1.0   2.0   0.77
11       21   19   0.4   1.8   0.69
12       21   22   0.6   1.6   0.52
13       23   14   1.0   1.8   1.03
14       23   17   0.8   1.6   0.74
15       23   19   0.6   2.2   0.59
16       23   22   0.4   2.0   0.48
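The balance ("neat comparison") property of the L16(4⁵) design can be verified programmatically: across the 16 runs, each of the four levels of every factor occurs exactly four times. A minimal sketch, with the factor columns transcribed from Table 2:

```python
from collections import Counter

# Factor columns (L1, L2, L3, L4) of the 16 orthogonal runs from Table 2.
runs = [
    (16, 14, 0.4, 1.6), (16, 17, 0.6, 1.8), (16, 19, 0.8, 2.0), (16, 22, 1.0, 2.2),
    (19, 14, 0.6, 2.0), (19, 17, 0.4, 2.2), (19, 19, 1.0, 1.6), (19, 22, 0.8, 1.8),
    (21, 14, 0.8, 2.2), (21, 17, 1.0, 2.0), (21, 19, 0.4, 1.8), (21, 22, 0.6, 1.6),
    (23, 14, 1.0, 1.8), (23, 17, 0.8, 1.6), (23, 19, 0.6, 2.2), (23, 22, 0.4, 2.0),
]

def is_balanced(design, levels=4):
    """In a balanced design, each factor's levels each appear len(design)/levels times."""
    runs_per_level = len(design) // levels
    for column in zip(*design):
        if any(count != runs_per_level for count in Counter(column).values()):
            return False
    return True
```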
1.3. The Training of the Model

To ensure convergence, the sample data are normalized with Eq. (2):

yᵢ = (xᵢ − xᵢ,min) / (xᵢ,max − xᵢ,min)    (2)

Here xᵢ and yᵢ are the original and normalized data, respectively, and xᵢ,max and xᵢ,min are the maximum and minimum values in the samples. After normalization, the data fall within [0, 1]. MATLAB was used as the tool for neural network training. Different training functions differ greatly in training speed and precision; since the Levenberg-Marquardt (LM) algorithm is the fastest and most memory-efficient, it was adopted for network training. Figure 3 presents the LM error curve after training; it shows that the network performs well and trains efficiently. Four groups of experimental data randomly selected from Table 2 were used as test samples. Figure 4 presents the simulated scaffold filament width after training the BP neural network: the predicted filament widths agree well with the actual values, achieving high accuracy, which demonstrates that the BP neural network prediction is reliable [13].
Figure 3. Error characteristic curve
Figure 4. The model simulation results
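The min-max normalization of Eq. (2) is straightforward to sketch; applied, for example, to the four platform-speed levels of Table 2 it maps the endpoints onto 0 and 1:

```python
def minmax_normalize(xs):
    """Eq. (2): y_i = (x_i - x_min) / (x_max - x_min), mapping data into [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

# The four platform movement speed levels used in the orthogonal design:
levels = minmax_normalize([16, 19, 21, 23])  # endpoints map to 0.0 and 1.0
```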
2. Genetic Algorithm

Because the function represented by a BP neural network is built from the connection weights and thresholds of its neurons, it is difficult to optimize by traditional methods. The Genetic Algorithm is an optimization method that simulates the natural evolution process and conducts random, adaptive search [14]. Its main features are a population-based search strategy and information exchange among individuals. If the search is restricted to the feasible region of the BP neural network, the Genetic Algorithm can find the optimal solution while reducing the search scope, shortening computation time, and satisfying the constraint conditions. Each individual is encoded as a symbol string; a fitness function evaluates each individual, and the algorithm simulates the biological evolution process by selection, crossover and mutation from generation to generation, finally obtaining the optimal solution.

2.1. The Objective Function and Fitness Function

For the biological 3D printing forming process, the platform movement speed L1, extrusion speed L2, nozzle diameter L3 and fiber spacing L4 are important parameters of the bone scaffold, with the bone scaffold filament width y as the objective function. We establish the following problem:

Objective function: min y = f(L1, L2, L3, L4)
Constraints: 16 ≤ L1 ≤ 23; 14 ≤ L2 ≤ 22; 0.4 ≤ L3 ≤ 1.0; 1.6 ≤ L4 ≤ 2.2

When the objective function reaches its minimum, the corresponding set of process parameters is the optimum.

2.2. The Optimization Process

(1) Coding method

By operating on individual codes, the Genetic Algorithm continually searches for high-fitness individuals, gradually increases their number in the population, and finds the optimal solution. Binary coding is a common approach: its coding and decoding are simple, the genetic operations of selection, crossover and mutation are easy to implement, and it reflects the principle of minimal character set encoding [14].
(2) Genetic operators

The Genetic Algorithm has selection, crossover and mutation operators. Selection first calculates the fitness of every individual in the current generation; according to fitness, individuals are screened by some selection method, the selected individuals become members of the next generation, and the unselected are eliminated. The main purposes of selection are to avoid gene loss and to improve global convergence and computational efficiency [14]. Crossover exchanges parts of the genes of two paired individuals, forming two new individuals; it is an important characteristic of the Genetic Algorithm, generating new individuals and thereby testing new points in the search space. Mutation changes individual variables with a small probability or step length; the mutation probability or step length is inversely proportional to the number of variables, regardless of population size. Mutation gives the Genetic Algorithm local random search capability and maintains the diversity of the population.
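The three operators can be sketched minimally on binary-coded individuals. This is our own illustration (the paper does not give its implementation); the function names are hypothetical.

```python
import random

def tournament_select(population, fitness, k=2):
    """Selection: keep the fitter of k randomly sampled individuals."""
    return max(random.sample(population, k), key=fitness)

def single_point_crossover(a, b):
    """Crossover: exchange the tails of two equal-length bit strings."""
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(bits, p_m=0.2):
    """Mutation: flip each bit independently with small probability p_m."""
    return [1 - bit if random.random() < p_m else bit for bit in bits]
```

For example, crossing an all-zero parent with an all-one parent always yields two children whose set bits together sum to the string length, since each position's genes are only exchanged, never lost.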
3. Optimization of Process Parameters Based on BP Neural Network and Genetic Algorithm

3.1. The Optimization System

The learning algorithm of the BP neural network has a slow training speed and weak global search ability, and it can become trapped in local minima. The Genetic Algorithm, in contrast, has strong global searching ability, and the generational evolution of its population readily approaches the global optimal solution. Combining the BP neural network and the Genetic Algorithm establishes a biological 3D printing forming process optimization system that can optimize the process parameters comprehensively to achieve the optimal combination. The flowchart of the genetic algorithm optimizing the BP neural network is shown in Figure 5.
Figure 5. BP neural network combined with Genetic Algorithm optimization process
During optimization, each individual's variable values are supplied by the Genetic Algorithm as inputs to the BP neural network; the trained BP neural network model outputs the target value, from which the Genetic Algorithm calculates the individual's fitness and determines its probability of passing to the next generation, then conducts reproduction, crossover and mutation to produce the next generation. The iterative evolutionary computation continues until the convergence condition is met, and the optimal solution is obtained.

3.2. Optimization Results

The Genetic Algorithm population size is 40, the crossover probability 0.4 and the mutation probability 0.2. After 100 generations of iterative optimization, satisfactory results were obtained; the optimal solution is: platform movement speed 16 mm/s, material extrusion speed 16 mm/s, nozzle diameter 0.6 mm, and fiber spacing 1.8 mm.
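The loop just described can be sketched as follows. Since the trained BP model itself is not reproduced here, a hypothetical surrogate function stands in for the network's width prediction, and a simple real-coded GA (rather than the paper's binary coding) is used for brevity; the population size, generation count, and crossover and mutation probabilities follow the values stated above.

```python
import random

BOUNDS = [(16, 23), (14, 22), (0.4, 1.0), (1.6, 2.2)]  # L1, L2, L3, L4 constraints

def surrogate_width(p):
    """Hypothetical stand-in for the trained BP network's filament-width output."""
    L1, L2, L3, L4 = p
    return 0.02 * L2 + 0.5 * L3 - 0.01 * L1 + 0.05 * L4  # illustrative only

def ga_minimize(f, pop_size=40, generations=100, p_c=0.4, p_m=0.2):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)                         # smaller predicted width is fitter
        nxt = pop[:2]                           # elitism: carry the two best forward
        while len(nxt) < pop_size:
            a, b = random.sample(pop[: pop_size // 2], 2)   # select from fitter half
            child = [(x + y) / 2 if random.random() < p_c else x
                     for x, y in zip(a, b)]                 # arithmetic crossover
            child = [min(max(v + random.uniform(-0.5, 0.5), lo), hi)
                     if random.random() < p_m else v
                     for v, (lo, hi) in zip(child, BOUNDS)] # bounded mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=f)
```

Clipping mutated genes to BOUNDS keeps every individual inside the feasible region defined by the constraints, so no separate constraint-handling step is needed.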
To verify the optimization effect and the whole optimization process, the optimized process parameter combination was used to simulate the forming process. The simulated filament width precision before and after optimization is shown in Table 3; the optimized width precision improves significantly.

Table 3. Comparison of calculated results

Performance parameter           Before optimization   After optimization
Filament width precision /mm    0.58                  0.44
It can thus be seen that using the BP neural network combined with the Genetic Algorithm for biological 3D printing forming process parameter optimization achieves good results.
4. Experimental Verification

Based on the BP neural network and Genetic Algorithm optimization, the optimal process parameter combination is: platform movement speed 16 mm/s, material extrusion speed 16 mm/s, nozzle diameter 0.6 mm and fiber spacing 1.8 mm. To prove the validity of the optimization, the above process parameters were verified experimentally. Fluorescent agents of different colors were added to the material to make the effects more apparent. As shown in Figure 6(a), the bone scaffold prepared without optimization is poor, with defects such as insufficient overlap, accumulation, uneven fiber spacing and poor porosity. As shown in Figure 6(b), when the movement platform is slow and the extrusion speed fast, the fiber spacing of the prepared scaffold is too dense and the pores are too small. With the parameters found by the optimization algorithm, however, the prepared bone scaffold is good, with uniform fiber spacing and better porosity, as shown in Figure 6(c).
Figure 6(a). Uneven spacing of scaffold. Figure 6(b). Small porosity of scaffold. Figure 6(c). Better scaffold.
5. Conclusions

The biological 3D printing forming process can readily prepare regular and diverse tissue engineering scaffold structures, but the forming process is very complex: the influence of various factors causes uncertainty in the processing environment and makes it hard to guarantee the accuracy of the formed scaffold. In this paper, by optimizing the BP neural network with a Genetic Algorithm, the optimal process parameter combination is obtained and a scaffold of better quality is produced. For biological 3D printing forming process parameter optimization, the algorithm is feasible and
effective. It is therefore of value for the study of biological 3D printing forming process parameter optimization.
Acknowledgements This work was financially supported by the National Natural Science Foundation of China (No. 51375292) and the National Youth Foundation of China (No. 51105239).
References

[1] Hod Lipson, Melba Kurman, 3D Printing: From Vision to Reality, Citic Press, Beijing, 2013.
[2] LIU Yuanyuan, HAN Zhengzhong, FANG Shuhui, et al., Bone scaffold forming filament width prediction of LDM based on the improved BP neural network, Key Engineering Materials 7 (2013), 187-192.
[3] TANG Jingtian, CAO Yang, XIAO Jiaying, et al., Plasma concentration prediction for propofol with a BP neural network optimized by genetic algorithm, Science Technology and Engineering 13 (2013), 3552-3558.
[4] PAN Xuetao, QU Keqing, Wind power output prediction with BP neural network combining genetic algorithms, Advanced Materials Research (2014), 2526-2529.
[5] TIAN Liang, LUO Yu, WANG Yang, Prediction model of TIG welding seam size based on BP neural network optimized by genetic algorithm, Journal of Shanghai Jiao Tong University 47 (2013), 1690-1701.
[6] LIU Dali, LIU Yuanyuan, JING Changjuan, et al., The technological parameter optimization of rapid freeze prototyping for 3D tissue scaffold fabrication, Advanced Science Letters 88 (2012), 47-50.
[7] GE Zhexue, SUN Zhiqiang, Neural Network Theory and MATLAB R2007 Implementation, Electronic Industry Press, Beijing, 2007, 269-300.
[8] Zhang L M, Models and Applications of Artificial Neural Networks, Fudan University Press, Shanghai, 1993.
[9] Watrous R L, Learning Algorithms for Connectionist Network: Applied Gradient Methods of Nonlinear Optimization, IEEE Press, New York, 1990.
[10] Zhou J H, Shen G L, Ding X L, Yang T, BP neural network in analysis of disease influential factors, Journal of Clinical Rehabilitative Tissue Engineering Research 15 (2011), 1702-1705.
[11] ZHOU Kaili, KANG Yaohong, Neural Network Model and MATLAB Simulation Program Design, Tsinghua University Press, Beijing, 2005, 89-91.
[12] SHU Da, ZHAO Xuesong, SHUN Jichao, et al., Machining parameters of WEDM based on orthogonal experiments, Journal of Anhui University of Technology and Science 25 (2010), 38-41.
[13] Fei Yin, Huajie Mao, Lin Hua, et al., Back propagation neural network modeling for warpage prediction and optimization of plastic products during injection molding, Materials and Design 32 (2011), 1844-1850.
[14] ZHOU Ming, SUN Shudong, The Principle and Application of Genetic Algorithm, National Defence Industry Press, Beijing, 1999.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-359
359
Preparation and Evaluation of Physical/Chemical Gradient Bone Scaffold by 3D Bio-printing Technology Yanan ZHANG a, Yuanyuan LIUa,1, Haiping CHEN a, Zhenglong JIANG a, and Qingxi HUa a Rapid Manufacturing Engineering Center, Shanghai University, Shanghai 200444
Abstract. Bone scaffold with fully interconnected pores whose sizes are gradient structured can be prepared by 3-dimensional (3D) printing technology, and has presented important value in tissue engineering. It is known that, in the direction of bone thickness, the composition, structure and performance are continuously varied, and the number and species of related cells distributed in different regions are different. Based on biological 3D printing platform constructed adopting rotary pneumatic multi-nozzle structure and poly-L-lysine (PL) modified matrix materials, and in order to better simulate the spatial morphology of bone tissue and its function and make seed cells rapidly and accurately migrate to the specific region, a novel approach for preparing composite physical/chemical gradient bone scaffold was proposed. Dividing the scaffold into three different regions, the structure property and mechanical performance of different regions were detailedly analyzed. In addition, using adipose derived stem cells (ASCs) as seed cells, their initial adhesion on different regions and corresponding viability analysis were also conducted, respectively. The experiment results show that gradient material can make the cell migrate toward and attatch on specific location and optimizing its physical structure can improve its mechanical properties. Thus the composite gradient scaffold prepared through this approach presents great potential application for in vitro constructing complex tissues and organs. Keywords. 3D bio-printing; multi-nozzle; material modification; gradient bone scaffold; tissue engineering
Introduction With the development of molecular biology, medicine, material science and interdisciplinary sciences, bone tissue engineering provides new ideas for treating bone injury. In this approach seed cell and scaffold are two major elements [1,2], and scaffold serve as temporary carrier, and provide biological and physical cues to support cell adhesion, proliferation and induce differentiation of stem cells for mediating cell behavior including regeneration [3,4]. The continuous gradient variation of these cues in the scaffold such as porosity, mechanical strength and the concentration of bioactive molecules can greatly affect the biological characteristics of cells [5]. Therefore, the physical structure and biochemical properties of scaffolds are extremely important in tissue repairing and regeneration processes. 1
Corresponding author: Yuanyuan Liu. E-mail:
[email protected].
360
Y. Zhang et al. / Preparation and Evaluation of Physical/Chemical Gradient Bone Scaffold
3D printing technology is a high-tech manufacturing technique whose process is based on material accumulation and compared with traditional techniques, is less affected by the complexity of physical prototype. Scaffolds with fully interconnected pores whose sizes are gradient structured can be produced by it, and its excellent permeability and mechanical properties can meet the requirement of tissue engineering scaffolds [6-8]. Sobral et al [9] realized the preparation of 3D physical gradient bone scaffold, and the gradient of pore size greatly reduced the loss of cells, thus having improved seeding efficiency. However, bone is a complex tissue with gradients, not only in the aperture, porosity and mechanical strength, it is also an ordered structure composed of a variety of cells. Physical gradient only cannot effectively guide the seed cells to migrate orderly and in a certain direction, thus not being able to make cells adhere to and grow in specific area. In this study, besides the physical gradient, another chemical gradient was proposed, namely the material gradient. Due to its structural similarities to the extracellular matrix (ECM) of living tissues, low toxicity and excellent biocompatibility, Sodium alginate is used as matrix material. However, its strong hydrophilic property is not conducive to the adsorption of protein and the initial adhesion of cells [10-13]. With good biocompatibility, Poly-L-lysine (PL) is widely used in bone repairing material. Lysine residues fixed on the surface of bone repair material carry positive charges, which can attract the negatively charged cells and thus improve the initial adhesion. In addition, PL can adjust the hydrophilic/hydrophobic balance of material surface by its amino group (-NH2), and promote cells adhesion by the interaction between functional groups and protein peptide chain on the cell surface [14-16]. Therefore, PL was used to modify the matrix material. 
Material gradient was formed in the scaffold between not-modified and modified matrix material. After receiving these gradient cues, seed cells will selectively migrate to the target region and then start growing and proliferation [17-19]. With inducement conducted in this way, formation of a bionic bone tissue in vitro can be possible. In order to obtain this composite gradient scaffold, in this research, the Biological 3D printing platform was developed. The rotary pneumatic multi-nozzle structure using compressed air as power was also constructed, which can realize accurate switching and assembly for different materials or cells. In addition, to explore the applications of this scaffold in tissue engineering, mechanical properties and porosity of the scaffold, cell initial adhesion and viability analysis were also conducted in this paper.
1. Materials and methods 1.1. Materials Sodium alginate (Sigma, UK) was used as matrix material while PL (Sigma, UK) as the modified material. The solution of Sodium alginate with weight fraction at 4% (w/v) in deionized water was placed in a shaker for 10 h at 120 rpm, referring as Material 1 (M1). Similarly, after preparing sodium alginate, a certain amount of PL was added to the alginate solution to modify the matrix material. The modified-material was referred as Material 2 (M2).
Y. Zhang et al. / Preparation and Evaluation of Physical/Chemical Gradient Bone Scaffold
361
1.2. Experimental procedures and Scaffold fabricating The experiment was carried out on the Biological 3D printing platform (home-built equipment, Shanghai University, China), shown in Fig.1. It consisted of four main parts: a computer controlled three-axis motion mechanism, multi-nozzle structure, forming platform and variable management module. Pneumatic multi-nozzle structure using compressed air as power and circumferential arrangement, can be freely switched among four nozzles in the forming plane, presenting high degree of automation. The model of the composite gradient scaffold was shown in Fig.2, fabricated as 25× 25×15 mm3, alternating layers were oriented at 90° to each other. The scaffold was divided into three different regions (Region A, Region B, Region C), and there were two gradients among them. Region A and Region B had the same pore size, the fiber spacing (L1) was set to 500μm, but the material of Region A was M1, the material of Region B was M2. The fiber spacing of Region C (L2) was set to 300μm, and its material was M1. When preparing the scaffold, M1 and M2 were respectively injected into Syringe A and Syringe B, and extruded by compressed air. Process parameters were set as follows: the nozzle diameter was 0.5mm, the feeding speed 0.4mL/min, and the forming platform speed 12mm/s. Region C was printed firstly, after that, alternately printed Region A and Region B. During the forming process, it needed accurate switching between the two nozzles. After the preparation of the scaffold, put it into the freeze dryer for freeze drying.
Figure 1. Biological 3D printing platform
Figure 2. Schematic model of composite gradient scaffold
1.3. Scaffold characterization 1.3.1. Morphology of scaffold The morphological characterization of scaffold was analyzed by the scanning electron microscope (SEM, SU1510, Analysis and Testing Center, shanghai university). The samples were cut into 5×5×5 mm3 cubes, treated with glutaraldehyde (Sigma, USA),
362
Y. Zhang et al. / Preparation and Evaluation of Physical/Chemical Gradient Bone Scaffold
desiccated with ethanol, vacuum-dried, and pre-coated with a conductive layer of sputtered gold. The micrographs were taken at different magnifications. 1.3.2. Porosity of scaffold In this study, the scaffold was divided into three different regions, so the porosity needed to be calculated respectively. The three regions were respectively sampled, each region taking five specimens. The values reported in the next section were the average of five specimens. Then the porosity (E) was measured by the Archimedes principle, calculated by the formula as follow:
E
V1 V3 u100% V2 V3
where V1 denotes the initial volume of ethanol; V2 the ethanol volume after a specimen has been immersed into; V3 the ethanol volume when the specimen is removed. 1.3.3 Mechanical testing In order to characterize the mechanical properties of scaffolds, compression tests were carried out on pressure testing machine (Instron5542, Canton, USA). Samples of the three regions were made into 5×5×2 mm3, and were placed on the pressure testing machine, carrying the static loading test in the vertical plane at a speed of 0.5mm/min until reaching the amount of 90% compression. An average compressive modulus was determined for all samples (n=5 per group). 1.4. Cell culture and cell morphology 1.4.1. Co-culture of cell-scaffold constructs in vitro Scaffold samples were fixed with glutaraldehyde, cleaned by deionized water, disinfected with 75% alcohol, and cleaned by phosphate-buffered saline (PBS) solution at last. In the sterile environment, the third passage ASCs (provided by Shanghai Tissue Engineering and Research Center) with good growth state were harvested by 0.25% Trypsin-EDTA (Life Technologies, NY), could finally be seeded into the samples which have been pretreated. The cell-scaffold constructs were co-cultured in an atmosphere of 5% CO2 at 37°C in Dulbecco’s Modified Eagle’s Medium (DMEM) with 10% fetal bovine serum (FBS) medium. Culture medium was changed every other day. 1.4.2. Fluorescence analysis Cells seeded on the scaffolds needed enough time to attach and adapt to the different scaffold architectures and materials, so these cell-scaffold constructs were taken out from the CO2 incubator after seeding for 24 h. All the specimens selected for fluorescence microscopy were fixed in 4% paraformaldehyde (Sigma, USA) solution in PBS for 1 h at 4°C. After washing by PBS, all specimens were labeled by the fluorescent reactive dye DIO for 10 min, then washed three times with PBS. These specimens were observed via Inverted fluorescence microscope (IFM, IX71).
Y. Zhang et al. / Preparation and Evaluation of Physical/Chemical Gradient Bone Scaffold
363
1.4.3. Cell morphology and cell attachment The morphologies of the cells on different substrates was observed by SEM after seeding 72 h. Specimens were washed three times with PBS, fixed with 2.5% glutaraldehyde solution in PBS for 1 h at 4°C, sequentially dehydrated in increasing concentrations of ethanol (from 25, 50, 75, 80, 95 to 100%, each for 15 min), then vacuum-dried. The dried samples were coated with Au by a sputter coater and examined by SEM to determine the adhesion and ECM deposition of ASCs on the scaffold. 1.4.4. Cell proliferation assay For cell proliferation assay, ASCs were plated at a density of 3×10 3 cells/cm2 into the specimens. Cells at indicated time points (1, 3, 5, 7, 9, 11 days) were crushed and repeated freezing and thawing to release DNA. DNA quantification was performed (n=9 per group per time point) using Hoechst 33258 dye (Sigma-Aldrich) following the manufacturer’s protocol.
2. Results and discussion 2.1. Scaffold characterization As shown in Fig.3, the composite gradient scaffold was successfully produced. From the positive view of the scaffold (Fig.3a), the pore architecture of scaffold was fully interconnected and the whole structure was regular; from the side view (Fig.3b), the aperture difference, tight junction and natural transition between Layer 1 and Layer 2 can be obviously observed. Good bonding between the layers was essential for mechanical stability of the scaffolds. The fiber spacing of Layer 1 and Layer 2 was 293μm and 516μm respectively, measured by SEM. There was a slight deviation between actual value and the theoretical value, as shown in Table 1. In addition, due to the aperture gradient structure, there was an offset between Layer 1 and Layer 2, as shown in Fig.3d. Compared with scaffolds with uniform aperture, the gradient scaffold provided more tortuous conduits inside the scaffold, which could increase the residence time of cells in the scaffolds and increase the likelihood of contact between the cells and the surface of the scaffold, greatly improving the cell seeding efficiency [8].
Figure 3. The composite gradient scaffold.(a) scaffold positive view;(b) scaffold side view; (c) microstructure of pore size gradient;(d) the scattered arrangement between Layer 1 and Layer 2
364
Y. Zhang et al. / Preparation and Evaluation of Physical/Chemical Gradient Bone Scaffold
Bone includes two typical forms: compact bone and spongy bone. Compact bone forms the extremely hard exterior while spongy bone fills the hollow interior. From the bone’s outer layer to the inner layer, the aperture is bigger and bigger, therefore, the porosity gradient is necessary to the preparation of bionic bone, and directly affects its mechanical properties. In the scaffold, Region A and Region B had the same pore size, their porosities measured by the Archimedes method were basically the same, about 68%, as shown in Table 1, while for Region C, the total porosity was 61%. As expected, for the whole gradient scaffold the total porosity values was intermediate between 68% and 61%, about 66%.The composite gradient scaffold met the porosity change of normal bone tissue from cortical bone to the spongy bone, and met the requirements of scaffold porosity for bone tissue engineered. Table 1.Parameters for each region of scaffold Scaffold regional
Aperture in theory(um)
Real aperture(um)
Porosity(%)
Region A
500
500s25
68s0.5
Region B
500
500s25
68s0.5
Region C
300
300s25
61s0.3
The whole scaffold
୍
୍
66s0.3
Bone scaffold with good mechanical properties can simulate the micro-stress environment of living organs, contribute to cell proliferation and differentiation in the scaffold. Fig.4 showed the stress and strain curves obtained for all specimens. For Region A and Region B, had the same aperture and different materials, compression capability was almost the same when having the same compression displacement. It indicated that the modification of matrix material had little effect on the mechanical properties, however, to a certain extent, the pore size and porosity of the scaffold determined the mechanical properties. As the stress-strain curves showed, Region C with lower porosity showed stronger compressive mechanical properties, indicating the expected relationship between porosity of the scaffolds and the respective mechanical properties.
Figure 4. Stress-strain curve for each region
Y. Zhang et al. / Preparation and Evaluation of Physical/Chemical Gradient Bone Scaffold
365
Figure 5. Elastic modulus for each region
Fig.5 showed the elastic modulus of each region: Region C with the good compressive properties exhibited low elastic modulus, while Region A and Region B exhibited higher elastic modulus. The above results showed that: there were some great relationships among pore size, porosity and mechanical properties, therefore, reasonable selection of aperture parameters and rational deployment of the gradient structure can realize the preparation of bionic bone scaffolds which not only meet the strength requirements but also suitable for cell growth. 2.2. Biological performance To evaluate cell response to the material gradient of the scaffold, ASCs were plated at the same density in the samples and co-cultured. The cell adhesion on substrate could be regarded as the first step in culturing constructs for tissue regeneration [20]. For the studying of ASCs initial adhesion on different regions, the DIO-labeled cells were observed via IFM. As shown in Fig.6: Region B modified by PL was more attractive to ASCs than Region A. The quantity and distribution density of ASCs initial adhesion on Region B were significantly higher than those on Region A. This suggested that materials gradient, to a certain extent, instructed the ASCs to migrate to Region B, and then realized their initial adhesion and aggregation; the modification effect of PL on matrix material was obvious, significantly improved its biocompatibility. Cell morphology and the interaction between cells and scaffold were studied after having been co-cultured for three days in vitro. As shown in Fig.7, some ASCs adhered on Region A were relatively small and still kept spherical, without sufficient adhesion and spreading. However, cells on Region B showed a good attachment, with fully extended, the spreading area was larger and secreted more EMC. In addition, cells still kept fibroblast-like morphology and connected with each other by pseudopodia. 
The results showed that, PL could promote cell adhesion, cell spreading and cell growth. With the gradient change of the scaffold material, the growth state of cells was changing.
366
Y. Zhang et al. / Preparation and Evaluation of Physical/Chemical Gradient Bone Scaffold
Figure 6. Initial adhesion of ASCs on different regions observed by IFM (a) DIO-labeled ASCs on Region A; (b) DIO-labeled ASCs on Region B
Figure 7. SEM of ASCs cultured on different regions for 3 days. (a) ASCs on Region A; (b) ASCs on Region B
To demonstrate the proliferation of cells, OD values were used to test the change of cells’ number. As shown in Fig.8, the cells’ number kept increasing after seeding, reached a peak at the 9th day, then gradually reached saturated and began to decrease afterward. Comparing the OD values of Region A with Region B at the first day, it showed that PL could promote the initial adhesion of ASCs. During days 3~7, cells in the logarithmic growth phase, the ASCs proliferation rate in the Region B was significantly higher than that in Region A. ASCs in different region had different proliferation rate, it was influenced by the properties of biomaterials, including composition, surface energy and electron charges. Therefore the method to the realization of cell directional migration and accumulation by modifying and setting the material gradient is feasible.
Figure 8. Proliferation of ASCs cultured on different region determined by DNA assay using Hoechst 33258
Y. Zhang et al. / Preparation and Evaluation of Physical/Chemical Gradient Bone Scaffold
367
3. Conclusion In this work, preparing composite gradient scaffold including physical gradient and material gradient has been initially realized and improvement of the mechanical properties and some biological parameters can also be realized through optimizing scaffold architecture and the scaffold material. Therefore this method is feasible for preparing biomimetic bone scaffold, and can provide a new technique for constructing complicated tissues and organs in vitro. However, until now this research remain in the experimental stage, and the methods, materials and structure features used in the research are not the most ideal ones, and have much room for further improvement and perfection.
Acknowledgements This study is financially supported by National Natural Science Foundation of China (51375292), National Youth Foundation of China (51105239).
References [1] [2]
[3] [4]
[5]
[6] [7]
[8] [9]
[10]
[11]
[12] [13] [14]
Howard D, Buttery LD, Shakesheff KM, et al, Tissue engineering: strategies, stem cells and scaffolds, J. Anat. (2008) 66-72. Yu G, Ji J, Zhu H, et al. Poly(D,L-lactic acid)-block-(ligand-tethered poly(ethylene glycol)) copolymers as surface additives for promoting chondrocyte attachment and growth, J. Biomed Mater Res B Appl Biomater.(2006) 64-75. Higuchi A, Ling QD, Chang Y, et al. Physical cues of biomaterials guide stem cell differentiation fate, J. Chem Rev. (2013) 3297-3327. Fedorovich NE, Kuipers E, Gawlitta D, et al. Scaffold Porosity and Oxygenation of Printed Hydrogel Constructs Affect Functionality of Embedded Osteogenic Progenitors, J. Tissue Engineering: Part A (2011) 2473-2486. Brennan M Bailey, Lindsay N Nail, Melissa A Grunlan, et al. Continuous gradient scaffolds for rapid screening of cell–material interactions and interfacial tissue regeneration, Acta Biomaterialia (2013) 8254-8261. Zhang Jian-ming, Zhang Xi-zheng, LiRui-xin,et al, Preparation of tissue engineering scaffolds using rapid prototyping, J. Chinese Journal of Tissue Engineering Research (2013) 1435-1440. Andreas Pfister, Rüdiger Landers, Andres Laib, et al, Biofunctional rapid prototyping for tissueengineering applications: 3D bioplotting versus 3D printing, J. J Polym Sci Part A: Polym Chem (2004) 624-638. Susmita Bose and Sahar Vahabzadeh. Bone tissue engineering using 3D printing, J. Materials Today (2013) 496-504. Jorge M. Sobral, Sofia G. Caridade, Rui A. Sousa. Three-dimensional plotted scaffolds with controlled pore size gradients: Effect of scaffold geometry on mechanical performance and cell seeding efficiency, J. Acta Biomaterialia (2011) 1009-1018. SeungHyun Ahn, HyeongJin Lee, Lawrence J. Bonassar. Cells (MC3T3-E1)-Laden Alginate Scaffolds Fabricated by a Modified Solid-Freeform Fabrication Process Supplemented with an Aerosol Spraying, J. 
Biomacromolecules (2012) 2997-3003 Chih-Hui Yang, Keng-Shiang Huang, Chih-Yu Wang, et al, Microfluidic-assisted synthesis of hemispherical and discoidal chitosan microparticles at an oil/water interface, J. Electrophoresis (2012) 3173–3180. He Shulan, Yin Yuji, Zhang Min,et al, Research Advances on Sodium Alginate Hydrogels for Tissue Engineering, J. Chemical industry and engineering progress (2004) 1174-1177. Andrew Darling, Lauren Shor, Saif Khalil, et al, Multi-Material Scaffolds for Tissue Engineering, J. Macromol. Symp (2005) 345-355. GONG Haipeng, ZHONG Yinghui, GONG Yandao, et al, The influence of chitosan and polylysine related materials on nerve cells, J. Biological Physics (2000) 553-559.
368
Y. Zhang et al. / Preparation and Evaluation of Physical/Chemical Gradient Bone Scaffold
[15] Mao Xueli, Ling Junqi, Xiao Yin, et al. Modified poly (L-lactic acid)-poly (L-lysine) polymer induces bone marrow stromal cells initial adhesion, J. Journal of Clinical Rehabilitative Tissue Engineering Research (2011) 7100-7104. [16] Peng Ying, Tian Jing. Surface modification of bone repair material and osteogenesis, J. Int J orthop. (2012) 99-102. [17] Wu Jindan, Construction of Grafting Density Gradient Surfaces for the Manipulation of Cell Migration, Ph.D. Dissertation, Zhe Jiang University, 2012. [18] Oju Jeon, Daniel S Alt, Stephen W Linderman, et al. Biochemical and Physical Signal Gradients in Hydrogels to Control Stem Cell Behavior, J. Adv. Mater. (2013) 6366–6372. [19] Y. X. Wang, J. L. Robertson, W. B. Spillman et al, Effects of the chemical structure and the surface properties of polymeric biomaterials on their biocompatibility, J. Pharm Res (2004) 1362-1373. [20] K.Tsuchiya, G.P Chen, T Ushida, et al, Effects of cell adhesion molecules on adhesion of chondrocytes, ligament cells and mesenchymal stem cells, J. Mater.Sci.Eng: C, (2001) 79-82.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-369
369
Opportunities and Challenges of Industrial Design Brought by 3D Printing Technology a
Na QI a,b,1, Xun ZHANG a and Guofu YIN a,1 School of Manufacturing Science and Engineering, Sichuan University, Chengdu, Sichuan, 610065, China b Art of College, Xihua University, Chengdu, Sichuan, 610039, China
Abstract. 3D printing technology has been gradually used in many fields, such as industrial manufacturing and areas of life, therefore, it will bring great changes to the manufacturing and have a great influence on human's consumption patterns in the future. Based on the latest technological information, types and the status quo of 3D printing technology in the field of industrial design application have been summarized. What's more, a series of opportunities and challenges of industrial design have been predicted and particulrized according to the principle and developing trends of 3D printing technology. The purpose is that it could make industrial designers and relevant persons seize the opportunities brought by 3D printing technology and actively face the challenges at the same time, which will benefit to whole world. Keywords. 3D printing technology, 3D printer, industrial design, product design, opportunities and challenges
Introduction 3D printing is a kind of very popular technology in recent years. According to the industry report of 3D printing which was issued by Everbright Securities on Feb. 2013, 3D printing had become an emerging industry on a global scale and its output value had reached $1.68 billion in 2013 while it was estimated reaching $3.8 billion in 2015. [1] And according to the prediction of Analysys International, the market scale of 3D printing is expected to reach $16.2 billion in 2018. With lower cost, shorter printing time and more kinds of printing materials of 3D printing, it will become an important choice of most industry and consumers [2]. Figure 1 shows the printing 3D model. Many media and scholars have optimistically predicted that 3D printing may be the sunrise industry with broad prospects in the future and it will be integrated into thousands of people’s life. With the mature and popular of 3D printing technology, it will greatly change the process of traditional product design and products manufacturing. As an important part of the 3D printing industry chain, industrial design will be faced with a series of new opportunities and challenges.
1
Corresponding
[email protected]
Author:
Guofu
YIN,
Email:
[email protected];
Na
QI,
Email:
370
N. Qi et al. / Opportunities and Challenges of Industrial Design
Figure 1. The process of printing 3D model.㸦Picture source㸸 http://www.narkii.com/news/news_130603.shtml㸧
1. 3D printing technology 3D printing technology is also called rapid prototyping technology, material manufacturing technology, incremental manufacturing technology, etc., it can automatically printing subject one piece by one piece according to the computer data and using all kinds of adhesive materials. In fact, 3D printing is not a new technology. In the 1980s, it was born. However, because the technology was not mature, the cost was huge and the printing speed was low, it failed to be popularized at an early age. As the improvement of 3D printing technology, it has been widely used in aeronautics and space, transportation, health care, education, etc. 3D printing has high content of science and technology and it is a comprehensive application technology. It refers to many advanced technological knowledge such as digital modeling technology, information technology, mechanical and electrical control technology, materials and chemical science. So far, Stratasys, a famous professional 3D printing company, has developed 123 kinds of different materials for 3D printing. Printing precision can reach 0.01 mm on the thickness of single layer and the fine resolution can reach 600 dpi. When refers to printing speed, the most fast 3D printer is Photonic Professional GT, which was launched by Nanoscribe GmbH, a German company in 2013. It can print subject using polymer in a speed of more than 5 terabits per second. Figure 2 has shown the Photonic Professional GT 3D Printer and model plane printed by it.
Figure 2. Photonic Professional GT 3D Printer and model plane printed by it.㸦Picture source㸸http://gz.o.cn/shopping/news/187_5585_1.html㸧
N. Qi et al. / Opportunities and Challenges of Industrial Design
371
2. Application status of 3D printing technology in the field of industrial design 3D printing has been changed the supply chain structure of commodity and people’s way of life to some extent. In fact, it has been widely applied in the field of product prototype, mould manufacturing, jewelry, consumer electronics, furniture, toys and other areas of industrial design, as shown in table 1. Table 1. Mainly application of 3D printing in the field of industrial design Fields Industrial products
Tool manufacture Product design aspects Ergonomics research Daily consumer goods Culture creative products Electronic products Personalized customization
Application content Produce mold, components or products by print directly. Such as small cars, airplanes and household appliances have already been printed by 3D printers. It can print out kinds of complicated shapes with high efficiency, low cost and high accuracy and small quantities of custom parts. Many tools such as measuring instruments, jig, die-casting mould, fixture and so on that required in the enterprise manufacturing process can be printed rapidly with low price. The company no longer needs to spend more time and money to purchase and install it, and this can effectively improve the production assembly process. Product conceptual design, product reviews, prototyping, functional verification, etc. For instance, Microsoft has built a 3D printer workshop to print prototype so as to create better products. Through 3D scanning of human body, an accurate 3D model of human can be printed and will be suitable for research. The design and manufacture of toys, jewelry, clothing, footwear, DIY creative product. This is one of the most broad areas, whether a personality pen holder, or a unique ring owning with your lover , all of these could be printed. It has become the art expression vector of special material, shape, or complex creative works. For instance, Nokia has explored a toolkit for 3D printing technology amateurs. Using it, they can print their favorite mobile phone shell of Lumia 820 freely. 3D printing customized service based on e-commerce and 3D model data downloading based on network. Quirky, a creative consumer products company in New York , its turnover can reached $1 million a year by using the sales mode of ‘online collecting of consumers’ ideas – make product by 3D printer – sell through the electronic market’, and it could successfully launched 60 kinds of products every year, earning $1 million.
3D printing technology even may change the traditional way of product sales and use. In March 2014, Google has launched the Project of Ara modular cell phone components which will use the 3D printer to print, and is expected to sell together with customizable smartphones in early 2015. Project Ara allows user to configure the smartphone. It provides users with an empty framework of mobile phone, and user can
Figure 3. Google Project of Ara modular cell phone.㸦Picture source㸸http://www.narkii.com/news/news_130771.shtml㸧
372
N. Qi et al. / Opportunities and Challenges of Industrial Design
choose, replace, or even remove modular hardware according to his own needs, then user just needs to insert the selected modular hardware, as shown in figure 3. Consumers can even print picture which is designed by oneself in the module surface. It will not only need customers’ personality customization demand, but also can extend the cell phone use cycle and reduce e-waste as consumers can replace the single fault or obsolete parts easily [3] [4].
3. Opportunities of industrial design brought by 3D printing technology

3.1. Release inspiration and free designers from traditional structure and craftwork constraints

In the traditional manufacturing process, much of the good inspiration and creativity of industrial designers stays on the shelf because of the limitations of structure and craftwork. 3D printing builds a solid entity by depositing material layer by layer, so there is no longer any need to produce small parts by casting, cutting or bending and then assemble them. As a result, designers only need to know the computer control program and some compulsory requirements of 3D printing; they can realize complex shape ideas with a 3D printer without having to understand thousands of different traditional production structures and processes. In contrast to traditional subtractive methods, 3D printing is an additive manufacturing method, so industrial designers can make bolder attempts and innovations in product shape and structure. For example, the French designer Patrick, in cooperation with the Belgian company Materialise, designed and printed the Solid T1 side table and the Solid C2 chair using SLS (selective laser sintering) and SLA (stereolithography) 3D printing technology in 2004, as shown in Figures 4 and 5. Such complex shapes are virtually impossible to achieve with traditional processing and shaping. In 2007, going beyond complex outer shapes, Patrick carried out a creative exploration of internal structures based on 3D printing technology and printed the famous OneShot folding chair, shown in Figure 6. All joints of this chair were printed by the 3D printer without any component assembly [1].
Figure 4. Solid T1 side table
Figure 5. Solid C2 chair
Figure 6. OneShot folding chair
3.2. Make customization easier and cheaper

Traditional product design is based on mass production; its basic mode forces ever-changing consumer demand to adapt to a limited range of styles and sizes, ignoring the differences between users. As products become abundant, consumers' demand for personalization keeps growing. Although some scholars and enterprises have proposed approaches such as mass customization to meet personalized needs, these approaches are still based on batch production and therefore cannot fundamentally satisfy users' individual needs. 3D printing embodies strong characteristics of personalization and intelligence, and its basic principle makes personalized design and manufacture possible. Users can customize their own personalized products with a 3D printer more easily and at a lower price.

3.3. Speed up the product development cycle and improve efficiency

Time to market is an important factor for consumers and is also vital to corporate profits: the longer a project stays in the design phase, the later the product reaches the market and the lower the enterprise's potential profit will be. It is therefore generally acknowledged that companies should shorten the time to market. Shortening the transformation from design drawing to 3D model is the key way to compress the product development cycle. For the same amount of material, the production efficiency of current 3D printers is about three times that of traditional methods; introducing 3D printing makes it possible to produce product prototypes rapidly, speed up iteration, optimize the design process, shorten development time and improve efficiency for the enterprise.

3.4. Reduce the risk of intellectual property leakage during the manufacturing process

Modern enterprises pay great attention to the security and integrity of product data. Although leaks rarely happen when digital documents are handed over to trusted manufacturers, the leak risk is basically eliminated if the enterprise owns the 3D printing equipment itself [5].

3.5. Reduce the risk of new product development

In principle, as long as the shape model is designed on a computer, the real object can be printed by a 3D printer, and the company can freely decide how many copies to print. Production can therefore follow orders exactly, realizing zero inventory. This is very good news for start-up companies and investors, because both the cost and the risk of new product development are reduced.

3.6. The emergence of a 3D printing repair service mode for product parts

In industry, there are many parts with high added value that are worth repairing. However, traditional repair means such as arc welding often cause deformation or cracks because of high heat input and low accuracy of control. By melting material and shaping it gradually under precise control of the heat input, 3D printing produces repairs of much higher strength, so broken or worn parts can be repaired well through 3D printing. What's more, small companies that cannot afford manufacturing equipment can entrust part production to 3D printing services, and some companies are trying to launch new business modes for 3D printing services.
4. Challenges of industrial design brought by 3D printing technology

While 3D printing technology brings vigor and opportunities to industrial design, it also faces problems and challenges.

4.1. Adjustment of the industrial design service pattern

The emergence of 3D printing technology may greatly change the traditional mode of production. The cost of printing a single product will be almost the same as that of printing many, so the advantage of mass production will disappear, and the production mode of the future will change from intensive to distributed. Accordingly, the service mode of industrial design must change to adapt to this new situation.

4.2. Re-definition and protection of intellectual property rights

Once an object can be described by digital files, it becomes very easy to copy and distribute, and piracy will become increasingly rampant. 3D printing makes it possible for imitators to launch new products on the market almost as quickly as innovators; coupled with open-source software and new non-commercial modes, how can enterprises and designers protect their intellectual property rights? More seriously, just as a nuclear reactor can be used both for power generation and for great damage, if 3D printers combined with 3D scanning equipment can duplicate and print anything, such as ATM cards, keys or guns, how can social order be maintained in the future? Intellectual property rights related to design should therefore be re-defined and protected.

4.3. Acute shortage of professional talents

A great constraint on the 3D printing industry is the talent problem. Three kinds of professional talents will be needed in the future:
- Managers with technological and administrative skills. As an advanced technology, 3D printing will transform current design and manufacturing modes; this cannot be done without the involvement of high-level managers, so such employees are needed.
- CAD talents with 3D printing knowledge. Computer-Aided Design (CAD) is the foundation of 3D printing, and only professionals can handle it, since CAD for 3D printing is much more sophisticated.
- Professional talents working in the 3D printing process itself. To accomplish 3D printing, a worker should know enough about structures and engineering, and should also master elaborate techniques such as polishing, firing, cutting and assembling. It is therefore very important to train more people for every position in 3D printing.
Educating and training enough professional talents for the whole 3D printing industry chain as soon as possible is very important.
5. Conclusion

3D printing technology will affect almost every aspect of industrial design in the future. Industrial designers should fully grasp the opportunities it brings while appropriately avoiding the possible problems. With the help of 3D printing, designers can break out of the cage of the traditional product manufacturing process, stimulate their creativity and imagination, and turn more good ideas into real goods, so as to meet the needs of consumers and give people a better life.
Acknowledgement

This work was supported by the Science and Technology Support Program of Sichuan Province (No. 2014GZX0001); the Research Center of Industrial Design, a Research Base of Humanities and Social Sciences of the Sichuan Provincial Department of Education (No. GY-13YB-06); and the Research Center of Qiang People, a research base of Philosophy and Social Sciences of Sichuan Province (No. QXY1306).
References
[1] Zhang Ying, Print a 3D World: The Relationship Between the Development of 3D Printing and Industrial Design, Northern Art 2 (2013), 86-87.
[2] Unknown, Over the next four years the market for 3D printing will reach $16.2 billion, http://www.narkii.com/news/news_130603.shtml, 2014.4.4.
[3] Unknown, Google Project Ara 3D-printed cell phone modules will launch early next year, http://www.narkii.com/news/news_130771.shtml, 2014.4.4.
[4] Baidu Encyclopedia, Project Ara, http://baike.baidu.com/link?url=7yZ7HPc10zCHYQ5mHvqummrSKLSPWYDvgZ4vZdvdzgZZVMW3-lecGFrw6yee1u6AxORO5-2dSBLwNe1aGE4AJa, 2014.3.
[5] Xu Tingtao, 3D printing technology, a new thinking of product design, Global IT 9 (2012), 5-7.
[6] Wang Xueying, Print a 3D World: The Relationship Between the Development of 3D Printing and Industrial Design, Chuangxinkeji 12 (2012), 14-15.
[7] Zhang Nan, Li Fei, Influence of the development and application of 3D printing technology on future product design, Jixie Sheji 7 (2013), 97-99.
[8] Qi Na, Yang Suixian, Design Method Based on Product Individuation, Mechanical Design and Manufacturing 2 (2010), 252-254.
[9] L. Reade, 3D print: shaping the future, Chemistry & Industry 8 (2011), 14-15.
[10] B. Betts, Software reviews: 3D print services, Engineering & Technology 7 (2012), 96-97.
[11] D.A. Roberson, D. Espalin, R.B. Wicker, 3D printer selection: A decision-making evaluation and ranking model, Virtual and Physical Prototyping (2014), 201-212.
Part VI Design Methods
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-379
Design for Assembly in Series Production by Using Data Mining Methods Ralf KRETSCHMER a, Stefan RULHOFF b and Josip STJEPANDIĆ b,1 a Miele & Cie. KG, Germany b PROSTEP AG, Germany
Abstract. Decision making in early production planning phases is often based on vague expert knowledge due to the lack of a reliable knowledge base. Virtual planning has prevailed as a method to evaluate risks and costs before the concrete realization of production processes. This paper introduces a new concept and the corresponding data model for Design for Assembly using Data Mining (DM) methods in the field of series production. The approach adopts the reuse of existing planning data in order to extrapolate assembly processes; linked product and process data in particular enable the innovative use of Data Mining methods. The concept presents assistance potentials for the development of new product variants along the product emergence process (PEP). With this approach, an early cost estimation of assembly processes in series production can be achieved using innovative Data Mining methods, as shown in an industrial use case. Furthermore, design and planning processes can be supported effectively. Keywords. Product Realization, Manufacturing, Digital Factory, Assembly, Process Planning, Data Mining
Introduction

Increasing product variability, shortened product lifecycles and the corresponding complexity of processes, as well as huge market fluctuations, are the main challenges for modern manufacturers [1]. Production planning gains in importance and has to run as parallel as possible to product development, according to concurrent engineering principles [2]. In this early phase of product creation, a first step is a cost calculation for the industrialization of the product in existing production lines under basic conditions [3]. The economic feasibility of series production must be assured with vague information on the product and given general conditions, e.g. the shift model [4], where the cost-intensive assembly is a big challenge among others [5,6]. The research and development project "Prospective Determination of Assembly Work Content in Digital Manufacturing (ProMondi)" was initiated to develop a concept using methods of data modeling and Data Mining (DM) to generate information with a focus on product assembly planning for new products in early production planning phases [7]. The aim of this project is the accurate estimation of the expected assembly work content and the resulting costs at an early stage of product development, as well as additional support of the design process with assembly knowledge for the specific design. The approach comprises the reuse of existing planning
1 Corresponding Author, Mail: [email protected].
R. Kretschmer et al. / Design for Assembly in Series Production by Using Data Mining Methods
data in order to extrapolate assembly processes. Linked product and process data in particular allow the innovative use of DM methods. Facilitating such an interconnection of highly interdependent models and historical data requires the identification and assessment of the character of the interdependencies between the models. As proof of concept, this approach will be evaluated with different manufacturing companies. Data mining is the process of discovering valuable information from observational data sets and has been widely used in areas such as business, medicine, science and engineering [8,9]. The overall goal of the data mining process is to extract information from a data set and transform it into an understandable structure for further use. One of the greatest expected benefits of DM methods is the ability to link seemingly disparate disciplines for the purpose of developing and testing hypotheses that cannot be approached within a single knowledge domain. Methods are reviewed by which analysts can navigate through different data resources (e.g. historical data) to create new, merged data sets [10]. Significant factors are efficient knowledge utilization and knowledge exchange on an interdisciplinary level. New processes suitable for assembling a given new product shall be designed based on this existing historical data (products linked to their corresponding processes). Automatic analysis with a specific DM method shall be used to create a first draft of the assembly process and to estimate the expected costs. Subsequent production planning processes can be supported by automatic proposals of adequate assembly processes, which can then be customized [11]. Moreover, the design engineer can be supported in the selection of appropriate joining elements. With this approach, assembly-knowledge-based support of the designer in series production can be achieved using innovative DM methods.
This paper describes partial results achieved in this project: the innovative methods of PROSTEP AG facilitating the use cases of Miele & Cie. KG, one of the leading manufacturers of domestic appliances. In a previous report [7], the authors described the initial approach.
1. Business Requirements

Available on five continents, Miele is the global premium brand of domestic appliances and commercial machines in the field of laundry care, dishwashing and disinfection. A continuous stream of innovations has been the foundation of Miele's business success since 1899. In terms of quality, Miele appliances are considerably better than those of the competition; otherwise they would not have been able to compete successfully in such a fiercely competitive market. In order to address the challenges of data mining and the integration of various planning tasks within the product emergence process (PEP), new concepts are necessary. As part of integrated product and process development, there are different definitions for the various phases and aspects of planning activities along the PEP [12]. Regardless of the specific definition of these phases and aspects, however, the analysis shows that a great amount of the information and knowledge they contain is either utilized insufficiently and ineffectively or remains unused [13]. In this regard, the presented concept focuses on product design and production assembly planning [14]. For the product designer and production planner, there are consequently a variety of applications which can assist the design or the planning process through information gathered by data mining [15].
1.1. Preparation and Requirements

The proposed research and development approach is shown in Figure 1 and runs through the following steps:
Figure 1. Design optimization with additional time data
Enriching CAD data with assembly information: Derived from similarly designed products from the past, assembly information about the actual design situation, such as time data for singular process steps, can be identified, extracted and provided in order to support the designer. This additional information can be used to enrich the CAD data to assist the current design and can be updated in later assembly planning processes.

Suggesting assembly connections: An assisting option for the designer is a suggestion list of similar, previously constructed assembly connection variants. These lists give a quick overview of possible and already implemented connection types in the assembly.

Assembly process estimation: The focus is on the creation of an assembly process for a new product. Based on existing product and process data, a first approximated assembly process for a new product can be compiled. Subsequently, the production planner can specify further details and thus determine a first estimate of the assembly time. Based on the assembly time and the associated calculation scheme, the planner can perform a first cost estimation in a very early planning phase.

The information in production planning and product development processes can mutually enrich each other and create significant added value. The newly obtained information supports the entire workflow throughout the PEP. As part of this concept, certain requirements therefore need to be fulfilled: the pre-conditions assigned to both systems as well as their respective processes have to be met [16].
1.2. Attributes and Data Sources

Data Mining methods can be used for data clustering and classification; however, criteria for the comparison of data sets have to be identified [17]. The typical context of a Data Mining procedure is shown in Figure 2 [18]. To determine these criteria, a survey of users as well as an analysis of various DM tools was performed within the scope of the ProMondi project. The objective of this analysis was to identify attributes relevant to assembly processes that could be assigned to products and parts in CAD [19], PDM and production planning systems. In CAD systems, attributes assigned to parts contain mainly geometric information, including volume and weight. PDM systems contain organizational information, such as creator, version and revision, as well as the mentioned part information from CAD [20]. In addition to the conventional systems for designing and structuring product parts and assemblies, systems for process planning and time measurement were also taken into account. They sustain a comprehensive portfolio of information and can therefore be used to distinguish different product parts and assemblies. The results of this analysis are encapsulated in an object-oriented data model, described further in chapter 3.
Figure 2. Typical context of data mining procedure
The necessary on-the-fly enrichment of product and process data for the presented concept requires additional effort in the design. This additional expenditure also relates to the assembly connections and includes the acquisition of new information from the designer's know-how. The designer usually defines assembly connections either implicitly, through form-locked joints created by the shaping of the parts, or explicitly, by connecting elements such as screwed fasteners.

1.3. Data Collection and Availability

The designer of the assembly connection considers all this information in the design but cannot store it in the CAD model, because most CAD tools are not able to define the necessary attributes. To overcome this problem, as part of the concept presented in this paper, the designer is provided with an additional tool in the CAD system. It can be used to create assembly connections and offers additional information and explicit design
possibilities. This additional assembly information is referred to below as "product assembly information". Data are thus collected in the source system, in particular the CAD system. Since the defined objects are not part of PDM systems, an extension is necessary in order to implement connections as objects and to store them persistently in the PDM system after the transfer. In further processing, the product information is linked to the planning processes. Unless product data are stored in the same system as the production planning data, the information flow from the PDM system to the planning system, as well as to the DM tool for further analysis, has to be ensured. In current planning systems, it is often possible to directly link processes to the corresponding products [21]; therefore, an allocation of the assembled product to the associated assembly processes can be realized. In assembly, however, parts are joined with other parts or products, and these assembly connections with their additional information have no digital equivalent object yet. By means of an object such as the "product assembly information", however, it is possible to store useful additional connection information which relates directly to the respective assembly connection. As part of this concept, products and processes are not linked directly, but through this special, newly created "product assembly information" object. The linking of product and process does not necessarily need to occur at the part level.
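As a minimal sketch, the "product assembly information" described above can be modeled as a simple record linking two parts to a connection and, later, to the matched assembly process. The field names and values below are illustrative assumptions, not the actual ProMondi schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ProductAssemblyInfo:
    """One assembly connection between two parts (illustrative schema)."""
    connection_type: str              # e.g. "screwed", "welded", "riveted"
    part_a: str                       # PDM identifier of the first part
    part_b: str                       # PDM identifier of the second part
    attributes: dict = field(default_factory=dict)   # e.g. torque, weld form
    process_id: Optional[str] = None  # linked assembly process, once planned

# A screwed fastener captured by the CAD assistant; the process link is
# filled in later, after the DM analysis has found the most similar match.
info = ProductAssemblyInfo("screwed", "P-1001", "P-2044",
                           {"torque_nm": 4.5, "screw_count": 2})
```

Storing such objects persistently in the PDM system then makes the product-to-process linkage available to the DM tool without requiring a direct part-level link.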
2. Solution Concept

The solution concept encompasses an assisting workflow to support the designer. As part of a new, modified or variant design, the designer generates new product data. In the case of assembly connections, a software assistant supports the designer on the fly with product assembly information for each connection (e.g. the torque of screwed fasteners, or the type and form of a welded joint, and information about other connection types). The designer can trigger an evaluation of the assembly connections in the model. For this purpose, the product assembly information of the CAD model is first prepared, analyzed with Data Mining, and compared with the existing products in the extended database. Then, the most similar product assembly information is determined from the existing products. This analysis can be restricted to a class of connection types (screwed, welded, riveted) or deliberately left open to widen the solution space and to provide the designer with information about other assembly connections. A limitation to a particular connection type yields as a result the closest realized assembly connection of the same kind. The software assistant uses the product assembly information identified in the analysis of the PDM database to determine the respective associated and related sub-processes. The corresponding time information of the existing products and, if requested, an alternative proposal list are transferred to the CAD system and displayed. This assembly time information of the existing product represents a first approximated assembly time for the new product. The designer is thus provided with this additional information regarding the assembly time and, via an enterprise-specific factor, the corresponding cost of the current design solution. In the final step, the designer is able to optimize the product iteratively on the basis of the anticipated assembly time and costs.
2.1. Data Model

Based on the determined assembly characteristics, a range of attributes is derived to classify the assembly of the parts. The connections between parts gained particular importance. An overview of the generated data model for the data mining analysis is given in the previous report [11]. Further connection types can be added to the data model. To provide the required information for the time analysis, a standardized data model is applied. In this regard, the ADiFa project's application-specific data models, the so-called ADiFa Application Protocols, were used, which offer the integration of processes and data for different DM systems [20]. Not all of these attributes can be identified in the CAD system. Some can be determined in production planning workshops in order to optimize the current design. Experienced designers and production planners can pre-allocate some parameters with estimated values, which can be reviewed later. Other parameters and the corresponding values can be extracted from other systems, e.g. the attributes of standard parts.

2.2. Data Mapping and Data Mining

After aggregating and appending the data subsets from different sources, it is necessary to remove redundant data sets [21] for the DM process. The next step is converting and porting the data into the presented data model. Depending on the data source, the conversion is either fully automated or partially automated with further manual adjustment. The values and scales of different attributes are often heterogeneous. In these cases, a normalization of ratings prevents an undesired high or low impact of certain attributes on the results and the evaluation process. In this regard, a [0, 1] linear normalization has been used. Further attribute prioritization via weighting can be necessary to define the importance of each attribute for the evaluation. Automated learning of the weights via machine learning methods depends on the existing data sets and their quality.
Alternatively, weights are determined based on expert knowledge or on a combination of both methods. To prevent a further expansion of the scope and complexity of the existing problem, expert knowledge was applied to determine the attribute weights. It is possible to have more than a single weight vector; this approach is useful if there are various object types or parts which have different prioritizations for their attributes [22]. To identify the objects with the most similar product assembly information for a new object, the classification algorithm k-nearest neighbour (kNN) [23] with the Euclidean distance as evaluation function is used. From the identified objects, a list is generated and the most related one can be chosen manually, which passes its assembly process data to the new object. To assure the reliability of the presented method and prevent overfitting, cross-validation [24] is used. The implementation of the presented approach is challenging due to the high requirements on the interconnection and overall quality of the existing data in the different source systems. In particular, the sheer number of realized and existing assembly connections, and thus of necessary instances of product assembly information, as well as the quality of the data regarding their attributes, are important. The fulfillment of these high requirements has to be verified. Methods to improve the quality of the linkage of product data with the corresponding assembly processes will be evaluated. Once this task is solved, the selection of the properties and attributes for the DM analysis has to be determined based on production data to ensure the reliability of the generated results (Figure 3). In this scope, a special focus is on the characteristics of the parts and of the connection itself. The utilization of the methodology is described as follows.
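The similarity search described above can be sketched in a few lines: attributes are linearly normalized to [0, 1], expert-chosen weights scale each attribute, and the k nearest historical connections under the weighted Euclidean distance are returned. The attribute names, weights and values below are assumptions for illustration only, not ProMondi data.

```python
import math

def normalize(rows):
    """Linearly rescale each attribute column to [0, 1]."""
    cols = list(zip(*rows))
    lo, hi = [min(c) for c in cols], [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(row, lo, hi)] for row in rows]

def knn(query, rows, weights, k=3):
    """Indices of the k rows most similar to query (weighted Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum(w * (x - y) ** 2
                             for w, x, y in zip(weights, a, b)))
    return sorted(range(len(rows)), key=lambda i: dist(query, rows[i]))[:k]

# Historical screwed connections: [screw length mm, torque Nm, part mass kg]
history = [[12, 4.0, 0.30], [30, 8.0, 1.20], [14, 4.5, 0.35], [25, 7.5, 1.00]]
query = [13, 4.2, 0.32]                          # the new connection
*hist_n, query_n = normalize(history + [query])  # normalize query with data
print(knn(query_n, hist_n, weights=[0.5, 0.3, 0.2], k=2))  # → [0, 2]
```

The returned indices point at the most similar realized connections, whose linked assembly processes can then be offered to the planner as a proposal list.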
Figure 3. Formation of product clusters and process agglomerations
Suggesting assembly connections and enrichment of CAD data: The designer creates a new module with already known and new assembly connections in the CAD system. He defines the assembly connections and complements their properties in the context of the new module. Via the automated DM process, he is provided with information about the assembly connections. Moreover, for each assembly connection, a list of alternative or previously realized connections can be created. Depending on the product properties, the five most similar product assembly information objects are made available to the designer in a prepared proposal list, which is generated through a cluster analysis of existing product data. This information can be used directly and enhances the CAD model. If the analysis dispenses with the filtering of connections associated with the product assembly information, the designer can also be provided with other, non-associated connections as alternatives.

Estimation of assembly process and information: The production planner drafts an initial assembly process for an assembly at an early stage of product development. For known assembly connections that are implemented in the new product as well as in the old product data, the right product assembly information and thus the assembly processes are found. For new, unknown connections, the most similar product assembly information and the related assembly processes from the database are determined and duplicated. Each of the found product assembly information objects represents a single connection, and the linked process represents precisely the assembly work content for this connection. The sum of the individual connections for the new product constitutes the initial draft of its first assembly process. The found individual connections, the individual processes, as well as the overall process can be used to assist the designer and the production planner.
The planner and designer also get a first estimate of the expected assembly time and costs from the automated process. The production planner can increase the quality of the process by manual intervention: he adapts the product assembly information created by the designer before the DM analysis and can complete
the attributes of the product assembly information with practical knowledge. He thereby influences the input of the DM analysis and increases the quality of the result. Furthermore, the designer has a first draft of the assembly process at his disposal and a first estimated assembly time in the current CAD system. Via a company-specific factor, the designer also receives information about the cost of the connection in the assembly. By verifying this information, the designer can evaluate and compare the alternatives for different connections.
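The first cost estimate then follows directly from the matched connection times and a company-specific cost factor, as sketched below. The connection names, times and the factor are invented placeholder values for illustration, not Miele data.

```python
# Assembly time (in seconds) of the most similar historical counterpart of
# each connection in the new product -- placeholder values for illustration.
matched_times = {"screw panel": 18.0, "weld bracket": 42.5, "clip cover": 6.0}

COST_PER_SECOND = 0.02  # hypothetical company-specific factor, EUR per second

total_time = sum(matched_times.values())   # first draft of the assembly time
total_cost = total_time * COST_PER_SECOND  # first cost estimation

print(f"assembly time: {total_time:.1f} s, cost: {total_cost:.2f} EUR")
# → assembly time: 66.5 s, cost: 1.33 EUR
```

Because each term comes from one matched connection, the designer can see which connection dominates the cost and swap it for a cheaper alternative from the proposal list.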
3. Use Case Evaluation

3.1. Building of Product Clusters

Currently, several individual use cases are considered for validation purposes. First, the formation of the product clusters was examined. To estimate a suitable number of product clusters, structural data from the parts lists of assemblies were used. Then the attributes shown previously, such as size, material and weight, were used in a first approach. In this case, calculating the component dimensions from the bounding box proved sufficiently accurate. In this way, 5 product clusters, each with a clear reference to a component category, could be formed. With regard to the associated assembly processes, a first result was that components with similar design characteristics also need similar assembly processes. This seemingly trivial statement nevertheless validates an important prerequisite for transferring the existing assembly processes of current products to future assembly processes of newly developed products.

3.2. Building of Process Clusters

The clustering of the processes is based on time analysis with the MTM method (Methods-Time Measurement). Its basic principle is the determination of target times by combining time measurement units. Depending on the time blocks contained, 7 process clusters with an explicit reference to the component category were formed. Additional clustering accuracy can be achieved with different similarity searches within the process data. The following similarity searches are used: 1. Similarity search via process parameters: Single time blocks such as "pick and place" are composed of individual movements. Each single time block has attributes such as "distance to pick". These attributes are used in the K-Means data mining method to divide similar assembly processes into clusters. 2. Similarity search via description text: The similarity of description texts is evaluated by means of text mining.
In particular, key words such as "switch panel base" or "steering" obtain a high weighting. 3. Similarity search via sequences: Structural characteristics of individual assembly processes are considered. In addition, it is analyzed how many identical sequences of individual time blocks occur within a parent sequence. The more identical or similar the sequences of time blocks, the more similar the considered assembly processes. Figure 4 presents the automated similarity search with the tool RapidMiner. In different process blocks, data are read, processed and analyzed with the different similarity searches.
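Similarity search 1 above applies K-Means to time-block attributes. The following is a minimal, self-contained sketch of that idea in plain Python with deterministic initialization; the feature values ("distance to pick" in mm, time measurement units) are invented stand-ins for MTM data, and a production setup would use a data mining tool such as RapidMiner, as in Figure 4.

```python
# Minimal K-Means sketch for "similarity search via process parameters".
def kmeans(points, k, iters=50):
    """Plain K-Means with deterministic initial centroids (first k points)."""
    centroids = [list(points[i]) for i in range(k)]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((p[d] - centroids[c][d]) ** 2
                                  for d in range(len(p))),
            )
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = [sum(m[d] for m in members) / len(members)
                                for d in range(len(members[0]))]
    return assign, centroids

# (distance_to_pick_mm, time_measurement_units) per "pick and place" block;
# values are illustrative, not measured MTM data.
process_blocks = [
    (150, 12), (160, 13), (155, 12),   # short reaches
    (600, 35), (620, 37), (590, 34),   # long reaches
]
labels, _ = kmeans(process_blocks, k=2)
print(labels)
```

On this toy data the short-reach and long-reach blocks separate into two clusters, mirroring how similar assembly processes are grouped before the similarity search.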
Figure 4. Similarity search for process data
4. Conclusions and Outlook

Through the efficient design of assembly connections supported by data mining tools, the quality of planning results and planning processes can be increased while simultaneously reducing time and cost. With this approach, work schedules as planning results are based on field-tested assembly processes and contain the implicit knowledge used in similar assembly planning processes. The automatic generation of an adapted assembly process enables fast customization to the concrete situation on the shop floor. The presented approach contributes significant added value to design and production planning by using the knowledge already present in existing systems. The consequences are a reduction of planning time, increased availability of information, and better collaboration between design and production planning. The technical feasibility of the proposed solution has been shown by a prototypical implementation of the concept in CAD and PDM systems. New approaches that use clustered data to improve data quality are currently under assessment. Further development of the tool sets and methods could help to reduce the high initial effort for adjusting the data.
Acknowledgements

The research project "Prospective Determination of Assembly Work Content in Digital Manufacturing (ProMondi)" is supported by the German Federal Ministry of Education and Research (BMBF) within the framework concept "Research for Tomorrow's Production" (funding number 02PJ1110) and managed by the Project Management Agency Karlsruhe (PTKA). The authors are responsible for the contents of this publication.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-389
The Analysis of Axial Slippage of the Sleeve in Circuit Breaker Operating Mechanism

Wu SHIJING 1, Zhang HAIBO, Zhang ZENGLEI and Zhao WENQIANG

Wuhan University
Abstract. The circuit breaker plays a key role in the protection and control of high-voltage transmission and distribution networks. The operating mechanism is one of its most important execution units, and its reliability level directly relates to the security of the grid. As a result of repeated operation in practice, plastic deformation occurs in the sleeve of the circuit breaker operating mechanism, and the sleeve may even slip axially and fall off the mechanism, which seriously reduces the reliability of the circuit breaker operating mechanism. In view of the lack of current research on the slipping of the sleeve in circuit breaker operating mechanisms, the problem of axial slip of the sleeve is studied and a kinematic model of the circuit breaker operating mechanism is established in this paper. The cause of the axial slippage of the sleeve is analyzed with the ABAQUS finite element simulation platform. Finally, some of the parameters at the hinge are optimized using the Taguchi method in order to reduce the axial slippage of the sleeve.

Keywords. circuit breaker, sleeve, slip, ABAQUS
Introduction

Some current domestic and foreign articles provide good analyses and solutions for failure problems that occur during circuit breaker operation. However, the phenomenon of shaft sleeve slippage after repeated use has not been given enough attention, and no solutions have been proposed. Obviously, shaft sleeve slippage is harmful to the dynamic performance of the whole circuit breaker. If it is not eliminated, the cumulative amount of slippage after repeated operation of the circuit breaker will affect the dynamic response of the mechanism. The abrasion between the shaft and the parts will increase, and the cumulative slippage may eventually cause the shaft sleeve to be stripped off the shaft. In order to solve the shaft sleeve slippage problem in high-voltage circuit breakers, the dimensions of the shafting parts are optimized in this paper. In traditional analysis methods for high-voltage circuit breaker mechanisms, the kinematic equations of the components are first formulated and then solved numerically with computer software; finally, the motion of the components is displayed as an animation. However, these methods still have many shortcomings for systems with many kinematic pairs, such as high-voltage circuit breakers. Cui Yanbin et al. [1] observed the dynamic
Corresponding Author.
W. Shijing et al. / The Analysis of Axial Slippage of the Sleeve
response of a high-voltage circuit breaker actuator in the ADAMS kinematics module and verified that the design parameters of the high-voltage circuit breaker met the requirements, but the structural design of the individual components was not optimized, since no mechanical analysis of the corresponding components was performed. Yang Tao et al. [2] created a simulation model of the motor control system of a high-voltage circuit breaker actuator using MATLAB/Simulink and combined it with hardware test equipment. This method realized the entire trip control of the high-voltage circuit breaker with relatively high accuracy; however, because it relies on an accurate theoretical algorithm and laboratory equipment, it is not applicable to simulations without a theoretical reference. Wang Guoqing [3] established kinematic equations for each component of a planar linkage according to Newton's second law, described the collision contact of motion pairs with clearance by applying a linear spring force and nonlinear damping, and then obtained the dynamics of the components by numerical simulation of the equations in Matlab. Although this method can come relatively close to the actual dynamic characteristics of the components, solving the mathematical model becomes more complicated as the degrees of freedom and the number of kinematic pair elements increase; for a mechanism with many kinematic pairs, such as a circuit breaker, modeling and calculation would be very tedious. Tian Zhongwang [4] obtained the load data of a safety component by sensor field tests and imported the load data into Ansys/LS-DYNA to perform a finite element analysis of the component, obtaining its dynamic response under real load conditions. Nevertheless, even when a component of the mechanism is characterized by experimental means, measurement errors and other error factors still exist.
In this paper, for such a circuit breaker system with multiple kinematic pairs, we add three-dimensional models of the shaft and shaft sleeve at each revolute joint, so that the force transmission at the joint is realized through the contact between the sleeve and the adjacent parts. Using the powerful computing capability of ABAQUS explicit dynamics simulation, the main factor affecting sleeve slippage is identified. Finally, by using the Taguchi method, we achieve an optimal result with a minimum number of tests.
1. Model building

1.1. Description of the problem

The circuit breaker operating mechanism includes a number of revolute and prismatic pairs. In order to reduce the surface abrasion of the shafts and improve their service life, protective shaft sleeves are used. As a result of repeated use in practice and the absence of axial geometric constraints in the revolute pairs, the shaft sleeve located in a revolute pair may slip axially and even be stripped off the shaft [5]. In this paper, the operating mechanism of a 1100 kV high-voltage circuit breaker is studied on the basis of its 3D model. Using the 3D modeling software Pro/Engineer [6], a three-dimensional solid model of the circuit breaker assembly is built and then imported into ABAQUS via an intermediate exchange format for dynamic modeling and analysis [7]. The structure of the high-voltage circuit breaker is shown in Figure 1. All kinematic pairs are built from the cooperation of a solid shaft and a hole, and all kinematic pairs between the components are constrained by their geometrical shape.
The maximum clearance of the connection between shaft and hole is 0.04 mm. In this way, the force between components is transmitted through the parts in contact with each other. This is consistent with the actual situation, which greatly improves the credibility of the results.
Figure 1. Three-dimensional model of the circuit breaker.
1.2. Establishment of the mechanism dynamics model

To study the motion of the shaft sleeve, the forces acting on it are analyzed. The stress distribution of the shaft sleeve changes with the motion state of the components. Therefore, before the dynamic analysis of the shaft sleeve, the circuit breaker operating mechanism as a whole is taken as the research object. In this paper, the dynamic response of the system is investigated using the general dynamics equation, the Lagrange equation, combined with appropriate boundary conditions. The degree of freedom of the circuit breaker operating mechanism is 1, and a Cartesian rectangular coordinate system is chosen for the generalized coordinates. The general form of the Lagrange equation is:
$$\frac{d}{dt}\frac{\partial T}{\partial \dot{q}_\alpha} - \frac{\partial T}{\partial q_\alpha} = Q_\alpha \qquad (1)$$

($\alpha = 1, 2$; there are two generalized coordinates)

The kinetic energy of the system can be expressed as:

$$T = \sum_{i=1}^{n}\left(\frac{1}{2} m_i v_{ci}^2 + \frac{1}{2} J_i \omega_i^2\right) \qquad (2)$$

($n$ is the number of components)

The generalized force of the system can be expressed as:

$$Q_\alpha = \sum_{i=1}^{n} \vec{F}_i \cdot \frac{\partial \vec{r}_i}{\partial q_\alpha} \qquad (3)$$

($\vec{r}_i$ is the radius vector relative to the origin, $\vec{r}_i = (x_i, y_i)$)

Here $T$ is the kinetic energy of the system and $Q_\alpha$ are the generalized forces. Substituting $T$ and $Q_\alpha$ into the general form of the Lagrange equation yields a group of differential equations coupled in the variables $x$ and $y$:

$$\frac{d}{dt}\frac{\partial T}{\partial \dot{x}} - \frac{\partial T}{\partial x} = \sum_{i=1}^{n} \vec{F}_i \cdot \frac{\partial \vec{r}_i}{\partial x} = \sum_{i=1}^{n} F_{ix} \qquad (4)$$

$$\frac{d}{dt}\frac{\partial T}{\partial \dot{y}} - \frac{\partial T}{\partial y} = \sum_{i=1}^{n} \vec{F}_i \cdot \frac{\partial \vec{r}_i}{\partial y} = \sum_{i=1}^{n} F_{iy} \qquad (5)$$
($n$ is the number of applied forces, torques excluded) Because the variables x and y are coupled, the equation group is difficult to solve analytically, so the displacement expressions x(t) and y(t) of the generalized coordinates can be obtained by making an assumption about the result and decoupling the equations. Using ABAQUS, a software package well suited to solving dynamic equations, the coupled differential equations are solved and the motion state of the mechanism is obtained. The stress contours of the mechanism are calculated with ABAQUS so that the stress distribution can be observed and the reason for the slippage analyzed.

1.3. Establishing the dynamic model in ABAQUS

After importing the three-dimensional model of the high-voltage circuit breaker into ABAQUS, the material properties are defined for every part. The Young's modulus and Poisson ratio of the crank arms, links and other important parts are 2.06e11 Pa and 0.28; those of the shafts and shaft sleeves are 1.9e11 Pa and 0.3; those of the insulated rods are 1.6e11 Pa and 0.3. An explicit dynamic analysis step is defined with a step time of 0.1 s. Contact settings are added in the interaction module. First, an interaction property is defined, using the penalty function as the friction algorithm, a friction coefficient of 0.07, and hard contact as the normal behavior. Then contact pairs are built for every pair of contact surfaces. The mesh of the slave surface should be finer than that of the master surface, to prevent the nodes of the master surface from penetrating the slave surface when the surfaces come into contact [8]. In this paper, all shaft sleeves are defined as slave surfaces, and all parts and shafts are defined as master surfaces.
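The Lagrangian route sketched in Section 1.2 can be illustrated on a toy one-degree-of-freedom system. The sketch below applies the same procedure (Lagrange equation, equation of motion, numerical time integration) to a simple pendulum with a classical RK4 integrator; the pendulum, its parameters, and the integrator are illustrative assumptions, since the paper delegates this step to ABAQUS for the full mechanism.

```python
import math

# Toy 1-DOF illustration: for a simple pendulum the Lagrange equation
# reduces to theta'' = -(g/L) * sin(theta), integrated here with RK4.
G, L = 9.81, 0.5          # gravity (m/s^2) and pendulum length (m), assumed

def accel(theta):
    return -(G / L) * math.sin(theta)

def rk4_step(theta, omega, dt):
    """One classical Runge-Kutta 4 step for the state (theta, omega)."""
    k1t, k1w = omega, accel(theta)
    k2t, k2w = omega + 0.5 * dt * k1w, accel(theta + 0.5 * dt * k1t)
    k3t, k3w = omega + 0.5 * dt * k2w, accel(theta + 0.5 * dt * k2t)
    k4t, k4w = omega + dt * k3w, accel(theta + dt * k3t)
    theta += dt * (k1t + 2 * k2t + 2 * k3t + k4t) / 6
    omega += dt * (k1w + 2 * k2w + 2 * k3w + k4w) / 6
    return theta, omega

def energy(theta, omega):
    # Total mechanical energy per unit mass: kinetic + potential.
    return 0.5 * (L * omega) ** 2 + G * L * (1 - math.cos(theta))

theta, omega, dt = 0.5, 0.0, 1e-3
e0 = energy(theta, omega)
for _ in range(2000):              # simulate 2 s of motion
    theta, omega = rk4_step(theta, omega, dt)
print(abs(energy(theta, omega) - e0))   # energy drift stays very small
```

The near-constant total energy is a simple sanity check on the integration, analogous to checking that the simulated mechanism motion remains physically plausible.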
2. Meshing and Applying loads
Figure 2. Hexahedral elements of sleeve and lever.

Figure 3. Tetrahedral elements of connecting rod.
For regular solid structures such as the shaft, shaft sleeve and lever (shown in Figure 2), the 8-node hexahedral reduced-integration element C3D8R is used. On the one hand, the regular shape of the hexahedral element has good mechanical properties; on the other hand, the reduced-integration element effectively avoids the possible hourglass effect, which would decrease accuracy. For components with complex shapes and surfaces, such as the connecting rod (shown in Figure 3), the 4-node tetrahedral element C3D4 is adopted. A 50 MPa pressure is applied on the torus surface of the joint, a 15 MPa outward pressure is applied on the outer end faces of the two moving contact rods, and a gravitational acceleration of 9.81 m/s2 is defined. Finally, with the output options defined, the case is submitted to the Job module for calculation.
3. Analyzing the simulation results
Figure 4. Circuit breaker mechanism stress cloud.
Figure 5. Before sleeve sliding.

Figure 6. After sleeve sliding.
With the ABAQUS simulation completed, the motion states of the mechanism and the stress nephogram can be captured. As shown in Figure 4, the results show that the stress is mainly concentrated in some important components, such as the lever, connecting rod, shaft and shaft sleeve. The stress of the right moving contact rod is larger than that of the left one, causing the shaft sleeve that mates with the connecting rod to undergo slipping displacements, as shown in Figures 5 and 6. By contrasting the displacement nephograms before and after the slipping, we find that the main reason the shaft sleeve slips is that it is subjected to a large axial tangential force. From the analysis we conclude that there are two main causes of the slipping: (1) the shaft that mates with the shaft sleeve has insufficient stiffness, causing large deflections along the axial direction under the large pressure from other components, so the shaft sleeve tends to fall off outward under the component of gravity along the axis of the deformed shaft; (2) the shaft is too short, so that it cannot mate well with the holes of the other parts, which decreases the frictional resistance and causes the shaft sleeve to fall out of the hole. Through the simulation, three main factors affecting the slippage are identified: the diameter and the length of the shaft that mates with the shaft sleeve, and the length of the shaft sleeve. The impact of each of these factors on the slip of the shaft sleeve is then tested by means of experiments. Based on Taguchi theory and
395
W. Shijing et al. / The Analysis of Axial Slippage of the Sleeve
methods [9], the high-voltage circuit breaker mechanism is studied as a system. The slip displacement of the shaft sleeve is defined as the output. The three factors listed above are defined as control factors, i.e. variables whose size the experimenter controls and which change the output of the system [10]. The combinations indicated by an L9 orthogonal table are selected for the Taguchi experiment [11]. The diameter of the shaft has three levels: A1 = 19 mm, A2 = 23 mm, A3 = 27 mm; the length of the shaft has three levels: B1 = 55 mm, B2 = 57 mm, B3 = 17 mm; the length of the shaft sleeve has three levels: C1 = 13 mm, C2 = 15 mm, C3 = 17 mm. The factors and their corresponding level values are filled into the table, and the experiments are carried out according to the orthogonal table.

Table 1. Parameters of the orthogonal table

No. | Shaft diameter | Shaft length | Shaft sleeve length | Axial offset
1 | A1 | B1 | C1 | 0.2
2 | A1 | B2 | C2 | 0.57
3 | A1 | B3 | C3 | 0.53
4 | A2 | B1 | C2 | 0.24753
5 | A2 | B2 | C3 | 0.7
6 | A2 | B3 | C1 | 0.94275
7 | A3 | B1 | C3 | 0.24922
8 | A3 | B2 | C1 | 0.360133
9 | A3 | B3 | C2 | 0.678334
According to the calculations, the effects of the shaft diameter, shaft length and shaft sleeve length are 2018, 3417 and 307, respectively. From the factor effect plot, the shaft length is the most important factor for the axial sliding, more prominent than the shaft diameter; the shaft diameter is also important, but the shaft sleeve length has almost no effect on the axial sliding. According to the average response value of each control factor [12], the combination A3/B1/C3 is chosen to minimize the average axial slippage. Importing the optimized model with the combination A3/B1/C3 into ABAQUS for simulation, the slippage of the sleeve is calculated as 0.1 mm. Compared with the minimum result in the orthogonal array (0.2 mm), it decreased by 50%.
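The level-average analysis behind these conclusions can be reproduced in a few lines. The sketch below recomputes level means and ranges from Table 1 under a smaller-is-better criterion; the paper's reported effect values (2018, 3417, 307) appear to use a different scaling, so only the factor ranking and the chosen levels are compared.

```python
# Re-analysis of the L9 orthogonal experiment of Table 1:
# per-factor level means and ranges ("effects"), smaller-is-better response.
RUNS = [  # (shaft diameter, shaft length, sleeve length, axial offset in mm)
    ("A1", "B1", "C1", 0.2),      ("A1", "B2", "C2", 0.57),
    ("A1", "B3", "C3", 0.53),     ("A2", "B1", "C2", 0.24753),
    ("A2", "B2", "C3", 0.7),      ("A2", "B3", "C1", 0.94275),
    ("A3", "B1", "C3", 0.24922),  ("A3", "B2", "C1", 0.360133),
    ("A3", "B3", "C2", 0.678334),
]

def level_means(runs, factor_index):
    """Mean axial offset per level of one control factor."""
    totals, counts = {}, {}
    for run in runs:
        level, y = run[factor_index], run[3]
        totals[level] = totals.get(level, 0.0) + y
        counts[level] = counts.get(level, 0) + 1
    return {lv: totals[lv] / counts[lv] for lv in totals}

best, ranges = {}, {}
for name, idx in (("diameter", 0), ("length", 1), ("sleeve", 2)):
    means = level_means(RUNS, idx)
    best[name] = min(means, key=means.get)        # smaller offset is better
    ranges[name] = max(means.values()) - min(means.values())

print(best)    # optimal level per factor
print(ranges)  # factor influence: a bigger range means a stronger effect
```

On this data the minimum-mean levels are A3, B1 and C3, matching the combination chosen in the paper, and the range ordering (shaft length > shaft diameter > sleeve length) matches the reported factor ranking.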
Figure 7. Effect plots of the key factors.
4. Conclusion

A kinematic model of the circuit breaker is established with the Lagrange equation and simulated on the ABAQUS platform. By combining theoretical calculation and simulation, the stress and displacement nephograms of the circuit breaker mechanism are obtained, and the reasons for the axial slippage of the shaft sleeve are found: (1) the shaft that mates with the shaft sleeve has insufficient stiffness; (2) the shaft has insufficient length. The diameter and length of the shaft connected with the shaft sleeve, as well as the length of the shaft sleeve, are selected as control factors, and experiments are conducted with the L9 orthogonal array of the Taguchi method to find their optimum levels. Optimizing these parameters reduces the cost of the experiments and at the same time achieves the goal of reducing shaft sleeve slippage.
References

[1] Cui Yan-bing, Simulation of high voltage circuit breaker actuator dynamics based on ADAMS, Mechanical Design and Manufacturing, 04 (2006).
[2] Yang Tao, Research on high voltage circuit breaker motor actuator technology, Journal of Tsinghua University (Natural Science), 50 (1993).
[3] Wang Guo-qing, Research on dynamic characteristics of planar linkages with clearance in the kinematic pairs, Mechanical Design, 27 (2002).
[4] Tian Zhong-wang, The analysis of dynamic characteristics of a safety mechanism based on the finite element method, Science Technology and Engineering, 2010.
[5] J.M. O'Connell, T.K. Hellen, Shear slip and its influence on the behaviour of a sleeve reinforced circular penetration in a concrete slab, Nuclear Engineering and Design, 21 (1972), pp. 331-338.
[6] R.M. Pidaparti, S. Jayanti, J. Henkle, H. El-Mounayri, Design simulation of twisted cord-rubber structure using ProE/ANSYS, Composite Structures, 52 (2001), pp. 287-294.
[7] I. Smojver, D. Ivančević, Bird strike damage analysis in aircraft structures using Abaqus/Explicit and coupled Eulerian Lagrangian approach, Composites Science and Technology, 71 (2011), pp. 489-498.
[8] H.R. Harrison, T. Nettleton, Lagrange's Equations, in: Advanced Engineering Dynamics, 1997, pp. 21-45.
[9] J.Z. Wu, W. Herzog, M. Epstein, Evaluation of the finite element software ABAQUS for biomechanical modelling of biphasic tissues, Journal of Biomechanics, 31 (1998), pp. 165-169.
[10] A. Canakci, F. Erdemir, T. Varol, A. Patir, Determining the effect of process parameters on particle size in mechanical milling using the Taguchi method: Measurement and analysis, Measurement, 46 (2013), pp. 3532-3540.
[11] S. Maghsoodloo, G. Ozdemir, Taguchi methods accepted by industry and studied by academia, Journal of Manufacturing Systems, 23 (2004).
[12] J.S. Arora, Chapter 20 - Additional Topics on Optimum Design, in: Introduction to Optimum Design, Third Edition, 2012, pp. 731-784.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-397
Product Development Supported by MFF Application Ivan VIDOVIĆa, Mirko KARAKAŠIĆ b, Milan KLJAJINb,1, Jožef DUHOVNIKc, Željko HOČENSKIa a J. J. Strossmayer University of Osijek, Faculty of Electrical Engineering, Cara Hadrijana 10b, 31000 Osijek, Croatia (EU) b J. J. Strossmayer University of Osijek, Mechanical Engineering Faculty in Slavonski Brod, Trg Ivane Brlić – Mažuranić 2, 35000 Slavonski Brod, Croatia (EU) c University of Ljubljana, Faculty of Mechanical Engineering, Aškerčeva cesta 6, 1000 Ljubljana, Slovenia (EU)
Abstract. The paper presents the MFF (Matrix of Function and Functionality) application (beta version), which is designed to help a wide range of engineers in the development of new products or their parts. The basic idea of the application was to create a knowledge base in a specific area that gathers in one place the knowledge engineers acquire during their working life and during product development. The knowledge base is formed automatically by inputting the individual characteristics of the product. By means of these characteristics it is possible to generate the MFF. After the calculation of the matrix, the user can make modifications so as to reach the final solution of the concrete problem.

Keywords. Knowledge base, MFF (Matrix of Function and Functionality), product development
Introduction

Because of the complex nature of modern technology, it is now rarely possible for an individual to tackle the design and development of a major new product single-handed. In order to increase the probability of success of a new venture, the design process must be planned carefully and executed systematically. In particular, an engineering design method must integrate the many different aspects of designing in such a way that the whole process becomes logical and comprehensible. In order to solve a technical problem we need a system with a clear and easily reproduced relationship between inputs and outputs [1]. Different methods were developed for solving technical problems [2]. The most easily generalized among them is the morphological technique by Zwicky [3, 4], which concerns itself with the intrinsic structural characteristics of the formation and content of the thought process. Matrix models were presented in [5, 6, 7] that enable the generation of a functional structure of
1 Corresponding Author: Full Professor Milan Kljajin, e-mail: [email protected], Tel.: 00385 35 493 415, Fax: 00385 35 446 446
I. Vidović et al. / Product Development Supported by MFF Application
the product, described in matrices. The background of most matrix models is the morphological box [3], which forms the basis for further development. In order to generate the product's shape structures, new functional structures are essential. To generate them, important philosophies of the engineering of technical functions must be considered [8, 9]. In [10] the authors approach the description of functions by defining the terminology related to the names of the functions, while others describe the functions of technical systems by means of physical laws [11]. With a view to unique identification, rules were defined [5] by means of which functions, functionalities and products are described. Market requirements are the basis for defining the basic functional requirements, which in turn represent the initial information on a new potential product [12, 13]. At the beginning of the design process, functional requirements are usually unarranged, incomplete and sporadically presented, which makes it necessary to arrange, complement and expand them. By means of the structural enlargement of functions, the product structure can be presented as a functional structure, which is at the same time the basis for defining the shape (physical structure) of the product [14]. Research and development activities within the product-development process have their own characteristic and distinctive features, dominated by unpredictability, creativity, mentality and abstraction. Due to these features it is difficult to thoroughly describe, develop and implement the design process in the initial phases of computer-tool development [14, 15].
1. Model of the Matrix of Function and Functionality application

The MFF application (beta version) is designed as a computer tool to help a wide range of engineers in the development of new products or their parts. The matrix of product functions and their requirements was developed as a prototype of the MFF [6, 16]. The basic idea of the application was to create a knowledge base in a specific area that gathers in one place the knowledge engineers acquire during their working life and during product development. The knowledge base is designed as an extensible base that expands with every newly designed product and in that way becomes better and more useful. In addition to beginning engineers, this knowledge base is also very useful for experienced engineers, because it makes their previous product solutions and the product solutions of other experienced engineers available in one place. In this way an engineer can easily and quickly resolve problems encountered during new product development and design. The knowledge base is formed automatically by inputting the individual characteristics of the product, which is determined by main, supplementary, auxiliary and binding functions [5]. Each product comprises records of at least one main function, one or more auxiliary and complementary functions, and at least two or more connecting ones. The functions have also been described with physical quantities [17, 18]. The rules for entering functions into the knowledge base are determined and built into the system [5]. The system, which is presented here for the first time, is completely interactive. The knowledge base can be reviewed, and the entered functionality is built on the principle of a self-learning system. The system has a built-in decision tree and recognition of each function, so that the percentages expressing the satisfaction of each
I. Vidovi´c et al. / Product Development Supported by MFF Application
399
function are calculated. After the matrix has been calculated, the user can make modifications to reach the final solution of the concrete problem.
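The knowledge-base entry rule quoted above (at least one main function, one or more auxiliary and complementary functions, and at least two connecting ones) can be sketched as a validity check. This is an illustrative sketch only; the function name, the `(name, kind)` data layout and the example record are our assumptions, not the actual MFF implementation:

```python
from collections import Counter

def is_valid_product(functions):
    """Check the entry rule described in the text: a product record needs
    at least one main function, at least one auxiliary and one complementary
    function, and at least two connecting functions.
    `functions` is a list of (name, kind) pairs."""
    counts = Counter(kind for _, kind in functions)
    return (counts["main"] >= 1
            and counts["auxiliary"] >= 1
            and counts["complementary"] >= 1
            and counts["connecting"] >= 2)

# A hypothetical seat-assembly record that satisfies the rule.
seat = [("Regulate seating position", "main"),
        ("Damp vibrations", "auxiliary"),
        ("Support backrest", "complementary"),
        ("Bolt to frame", "connecting"),
        ("Clamp to seat post", "connecting")]
```

A record failing any of the four counts would be rejected before it enters the knowledge base.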
Figure 1. MFF Application flow chart.
This paper presents further research on separate cases that together add up to the development process by means of the MFF, based on the following starting points: ordering and determining the functionality for fixing a solution in advance; determining and classifying the columns and rows of the matrix in the block matrix; removing and adding new functional requirements and functionality (a self-generative system of product development); updating the MFF matrix with new functionality in the knowledge base; creating, setting and deleting project concepts in the R&D process; and marking the best solutions for each functional request. Based on the development of the MFF system, we have found that it can be used for system analysis or for the prediction of a new family of products. We wish to present the idea that with this system of the MFF matrix we can also involve the
products from the market which have already been presented and have already attained standard quality for some functions. Thus, each user can build his own base of standard semi-finished products, which is an essential advantage of our affordable system analysis. Figure 1 shows the flow diagram of the MFF application, from which it is evident that there are two ways to work with the application. One is filling the knowledge base, and the other is using the knowledge base to create new products. Both modes can be used simultaneously; it is possible to add new functionality during the development of new products. Using the MFF application, the functionality in the knowledge base can be browsed and extended, and the functional requirements of a new product (or product part) can be entered, for which the application finds proposed solutions in the form of the functionalities stored in the database (creating the project). For each entered functional requirement the application calculates the percentage by which each functionality in the base solves the problem. The suggested solutions for the functional requests take the form of the matrix of functions and functionalities (MFF), whose rows represent the functional requirements of the user, while its columns represent the functionalities that address the requirements with a percentage greater than a threshold that the user specifies. Usually the threshold value is set to zero, so that the matrix contains all the functionalities that solve any of the functional requirements to any percentage. After creating the MFF, the user can make modifications to get the final solution of the problem.
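The matrix construction just described, with rows for functional requirements, columns for functionalities, and threshold filtering, can be sketched as follows. This is a hypothetical illustration: the names `build_mff` and `kb`, the dictionary layout and the sample solution powers (taken loosely from the seat-assembly case later in the paper) are assumptions, not the actual MFF implementation:

```python
# Sketch of MFF matrix construction: rows are functional requirements,
# columns are functionalities whose solution power exceeds a user threshold.

def build_mff(requirements, knowledge_base, threshold=0.0):
    """Return (columns, matrix) where matrix[i][j] is the percentage by
    which functionality j satisfies functional requirement i."""
    # Keep only functionalities that solve at least one requirement
    # with a solution power strictly above the threshold.
    columns = [ts for ts, powers in knowledge_base.items()
               if any(powers.get(f, 0.0) > threshold for f in requirements)]
    matrix = [[knowledge_base[ts].get(f, 0.0) for ts in columns]
              for f in requirements]
    return columns, matrix

# Example knowledge base: functionality -> {requirement: solution power in %}.
kb = {
    "Seat":              {"Acceptance of driver's mass": 100.0},
    "Rubber spring":     {"Acceptance of driver's mass": 25.0},
    "Cylindrical spring": {"Vibration reduction": 100.0},
}
cols, mff = build_mff(["Acceptance of driver's mass", "Vibration reduction"], kb)
```

With the default threshold of zero, every functionality that solves any requirement to any percentage appears as a column, matching the behaviour described in the text.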
1.1. The MFF concept

The MFF is a method that provides a tabular representation of the bindings between function requests and functionalities (Figure 2) [19]. It is built and defined on the basis of a mathematical model [17, 18] and pre-set rules [5], not just on design intuition. Functional requests are derived from market requirements and represent the most important attributes of the requested system, its functions, while functionalities are represented by technical systems [9] or shape models that partly or wholly fulfil the required functions.
Figure 2. Bindings between function requests and functionalities [19].
The MFF is used when we wish to improve the initial engineering process, where only basic information is available. It is a tool that enables several functional requirements to be solved concurrently and new functional and design structures of a product to be generated in the preliminary phase. In Figure 3, which represents a detailed MFF model, we can see that functions or functional requirements are generally marked with Fi and placed in the first column, while individual technical systems (functionalities) are marked with TSj and can be found in the subsequent columns. The links between the functions and the functionalities that solve them are created by means of so-called sub-matrices, which are highlighted in grey (Figure 3).
Figure 3. MFF with sub-matrices [18].
2. Implementation

In order to present the MFF on concrete products in practice, the MFF method has been tested, evaluated and implemented on several products from different fields of interest. In this paper we evaluate two products. The first product is a "tricycle", whose main function is defined as a "3-point supported transport system driven by human energy". The whole tricycle is a very complex assembly whose product structure consists of several subassemblies: the "seat positioning assembly", the "steering assembly", the "drive train assembly", the "framework assembly" and the "braking assembly".
Figure 4. MFF concept of the “Seat positioning assembly”.
We will concentrate our evaluation on only one of the sub-assemblies, the "Seat positioning assembly". The seat positioning assembly (Figure 4) is one of the assemblies that most strongly define the tricycle's ergonomics, aesthetics and the usability of the whole product. Its main function is defined as "Regulating settings of seating and driving positions". The concept (Figure 5) we designed using the MFF was defined on the basis of the two most general functional requirements: "Acceptance of driver's mass" and "Vibration reduction". According to those two inputs, the MFF was asked to generate new possible solutions and technical systems in accordance with the methodology. It suggested solving the F defined as "Acceptance of driver's mass" with three TSs named "Seat", "Wheel" and "Rubber spring"; however, because the power of solution (in percentage) is highest for the "Seat" (100% > 25%), that one is preferred. This is not a rule, but it can help a designer to distinguish the power and the importance of solutions. Percentages are not the only measure of solution quality. Every F is, as a rule, described with parameters, which can be considered when evaluating solutions. For the second F, defined as "Vibration reduction", the MFF proposed a "Cylindrical spring" with a solution power of 100%. This is the perfect solution with the highest possible ranking, so it will most probably be suitable for solving this F.
Figure 5. The MFF of the concept with the corresponding Fs and TSs.
The second product is a "device to displace a car" (Figure 6), whose main function is defined as "displacement of transportation systems". The concept designed with the MFF was defined on the basis of four functional requirements: "Rotation around vertical and horizontal axis", "Carrying the load", "Profile movement" and "Motion transmission" (Figure 6).
Figure 6. The device to displace a car.
According to those four inputs, the MFF (Figure 7) generated new possible solutions and technical systems. It suggested solving the function "Rotation around vertical and horizontal axis" with four TSs named "Wheel", "Beam", "Transport wheel" and "Folding profile". Because the power of solution (in percentage) is highest for the "Transport wheel" (100% > 75%), that one is preferred. For the second F, defined as "Carrying the load", the MFF proposed the "Beam" with a solution power of 100%. This is the perfect solution with the highest possible ranking, so it will most probably be suitable for solving this F. For the third F, defined as "Profile movement", the MFF proposed an "Arm mechanism" with a solution power of 100%. This is the only solution generated in the MFF for this function. For the fourth F, defined as "Motion transmission", the MFF proposed two TSs named "Arm mechanism" and "Folding profile". Because the power of solution (in percentage) is higher for the "Folding profile" (100% > 50%), that one is preferred.
Figure 7. MFF solutions with the corresponding Fs and TSs.
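The selection rule applied in both case studies (prefer the functionality with the highest solution power, while leaving the final choice to the designer) can be sketched as follows. The function name is ours, and the non-winning percentages are illustrative placeholders; the text states only the winning values and the inequalities:

```python
def preferred_solution(candidates):
    """candidates: dict mapping technical-system name -> solution power in %.
    Return the TS with the highest power; ties are broken alphabetically so
    the result is deterministic. As the text notes, the percentage is a
    guide for the designer, not a binding rule."""
    return max(sorted(candidates), key=lambda ts: candidates[ts])

# Solution powers for the car-displacement device; only the 100% winners
# and the losing bounds (75%, 50%) are reported in the text.
rotation = {"Wheel": 75.0, "Beam": 75.0,
            "Transport wheel": 100.0, "Folding profile": 75.0}
motion = {"Arm mechanism": 50.0, "Folding profile": 100.0}
```

Run over the two requirements above, the rule picks "Transport wheel" and "Folding profile", as in the text.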
3. Conclusion

The MFF remedies the deficiencies of the morphological matrix through the application of a mathematically based, rather than intuition-based, model for creating links between function and functionality. The MFF model allows several open functional requirements to be solved concurrently, which supports productivity requirements, the clear recognition of the generators, binders and users of information, and the reduction of design and development times. Given the complexity of these products, different solutions and possibilities can be found. Some solutions are very promising and give us new insights for the future; others are contradictorily defined, which we managed to resolve and understand through the MFF's self-verification; and a few are not presented at all, which gave us the opportunity to look for errors or missing functions and functionalities. Future work will include upgrading the application so that when functionalities with the same name are added, each receives a serial number to distinguish it from the others. The addition of recommendations of the most acceptable solutions for the functional requirements according to a given criterion is planned as well (a solution chosen by several engineers/users of the application, a solution resulting from the work of all previous users of the application, a solution that represents the best solution according to the calculated percentages, etc.). This will be an artificial-intelligence system whose recommendations could help the user to select the final
solution to the set of functional requirements, so that the final decision does not have to be based only on the calculated percentages.
References
[1] G. Pahl, W. Beitz, Engineering Design – A Systematic Approach, Springer-Verlag, London, 1996.
[2] C. Hales, K. Wallace, Design research reflections – 30 years on, International Conference on Engineering Design, ICED 11, København, 2011.
[3] F. Zwicky, The morphological method of analysis and construction, Courant, New York: Intersciences, 1948.
[4] M. Kljajin, Ž. Ivandić, M. Karakašić, Z. Galić, Conceptual Design in the Solid Fuel Oven Development, Proceedings of the 4th DAAAM International Conference on Advanced Technologies for Developing Countries, Slavonski Brod, 2005, 109-114.
[5] Ž. Zadnik, M. Karakašić, M. Kljajin, J. Duhovnik, Function and Functionality in the Conceptual Design Process, Strojniški vestnik 55 (2009), 455-471.
[6] M. Karakašić, Ž. Zadnik, M. Kljajin, J. Duhovnik, Product Function Matrix and its Request Model, Strojarstvo 51 (2009), 293-301.
[7] M. Karakašić, Ž. Zadnik, M. Kljajin, J. Duhovnik, Functional structure generation within multi-structured matrix forms, Tehnički vjesnik / Technical Gazette 17 (2010), 465-473.
[8] W. Houkes, P.E. Vermaas, Technical Functions: On the Use and Design of Artefacts, Springer Science+Business Media B.V., 2010.
[9] V. Hubka, W.E. Eder, Theory of Technical Systems, Springer-Verlag, Berlin, Heidelberg, 1988.
[10] J. Hirtz, R.B. Stone, D.A. McAdams, S. Szykman, K.L. Wood, A Functional Basis for Engineering Design: Reconciling and Evolving Previous Efforts, NIST Technical Note 1447, National Institute of Standards and Technology, U.S. Department of Commerce, 2002.
[11] R. Žavbi, J. Duhovnik, Conceptual design chains with basic schematics based on an algorithm of conceptual design, Journal of Engineering Design 12 (2001), 131-145.
[12] J. Kušar, J. Duhovnik, R. Tomaževič, M. Starbek, Finding and Evaluating Customers' Needs in the Product-Development Process, Strojniški vestnik 53 (2007), 78-104.
[13] J. Duhovnik, J. Kušar, R. Tomaževič, M. Starbek, Development Process with Regard to Customer Requirements, Concurrent Engineering, 2006.
[14] T. Kurtoglu, A Computational Approach to Innovative Conceptual Design, PhD Thesis, The University of Texas at Austin, 2007.
[15] L.J. Ball, Design requirements, epistemic uncertainty and solution development strategies in software design, Design Studies 31 (2010), Elsevier.
[16] M. Karakašić, Ž. Zadnik, M. Kljajin, J. Duhovnik, Design solutions with Product Function Matrix and its Request, International Design Conference, Design 2010, Dubrovnik, Croatia.
[17] M. Karakašić, Model povezivanja funkcija proizvoda, parametara i njihovih intervala vrijednosti kod razvoja proizvoda, primjenom matrice funkcije i funkcionalnosti, Doctoral thesis, Sveučilište J. J. Strossmayera u Osijeku, Strojarski fakultet u Slavonskom Brodu, Slavonski Brod, 2010.
[18] Ž. Zadnik, Matrika funkcij in funkcionalnosti izdelka v razvojno konstrukcijskem procesu, Doctoral thesis, Univerza v Ljubljani, Fakulteta za strojništvo, Ljubljana, 2012.
[19] J. Duhovnik, Ž. Zadnik, Preliminary design within CE using the MFF, in: Concurrent Engineering: Yesterday, Today and Tomorrow, 18th ISPE Conference on Concurrent Engineering – CE2011, MIT, Boston, USA, 2011.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-405
City-Product Service System: a Multi-scale Intelligent Engineering Design Approach ZaiFang ZHANGa,1, Egon OSTROSIb, Alain-Jérôme FOUGÈRESb,d, Jean-Bernard BLUNTZERc, Yuan LIUa, Fabien PFAENDERe,f, MonZen TZENf a School of Mechatronic Engineering and Automation, Shanghai University, Shanghai Key Laboratory of Intelligent Manufacturing and Robotics,200072, P.R.China e-mail:
[email protected] b Laboratoire IRTES-M3M, Université de Technologie de Belfort-Montbéliard, France c Laboratoire IRTES-SET, Université de Technologie de Belfort-Montbéliard, France e-mail: {egon.ostrosi, jean-bernard.bluntzer, alain-jerome.fougeres}@utbm.fr d ESTA, School of Business and Engineering, Belfort, France e Université de Technologie de Compiègne, France f UTSEUS, ComplexCity Laboratory, Shanghai University, P.R.China e-mail:
[email protected];
[email protected]
Abstract. There is growing interest in the smart cities of the future, with natural interaction between the body of the city and the different products using this body. This paper proposes a new approach that seeks to bridge and integrate the Product Service System and the City Service System. Product service system engineering approaches driven by the performance criterion can deliver the right services of the product all the time. However, in the context of the city, product service engineering should consider the body of the city, and city service engineering should consider the product which uses the body of the city. Therefore, a new service system emerges: City-Product Service System engineering. It opens a new emergent area of intelligent sustainable engineering design.
Keywords. City design, Service Engineering, Product Service System, City Service System
Introduction

The industrial urbanization of society has transformed cities into the most complex and most dynamic man-made systems. This transformation has created a new economic paradigm, sustainable development [1, 2, 3, 4]. The new paradigm is spreading new knowledge, values, and practices [5]. Sustainable development should offer a holistic way of resolving three sharply conflicting goals of the life of the city: promoting the economic development of the city, protecting its environment, and advocating social justice within it [1]. Sustainability has also been defined as the opposite of crisis. If crisis is defined as the inability of a system to reproduce itself, then sustainability is the opposite: the long-term ability of a system to reproduce. This
1 Corresponding Author.
406
Z.F. Zhang et al. / City-Product Service System
criterion is quite interesting. It is the essential criterion of life, and it applies to natural ecosystems as well as to economic and social systems. Designers can engage the current challenge of sustainable development by also applying the "long-term ability of a system to reproduce" criterion. The multi-scale engineering design of sustainable cities, integrating sustainable products and resolving multidimensional conflicts to find creative engineering design solutions, can be identified as an engineering design research problem. City-Product Service Systems (CPSS) can be considered a new emergent area of intelligent sustainable engineering design. This paper proposes a new method for the multi-scale intelligent engineering design of the city-product service system. The city is considered and conceived as an evolving living body in complex interaction with its citizens, its artificial physical environment, and its natural physical environment. Products belong to the artificial physical environment of the city. Firstly, the paper analyzes the issue of product service system engineering design. Secondly, it extends the concept of service engineering to the city. Product service system engineering and city service system engineering should be capable of adapting themselves to new situations: from the city to the product and from the product to the city. This is an important characteristic of intelligent models. Adaptive services are much more likely to emerge if they are composed of elements whose existence, by itself, enhances the probability of the success, survival and growth of their population in the future. Likewise, these complex adaptive services themselves are more likely to survive if they self-sustain. Self-sustainability is the long-term ability of a service to reproduce. City-product service systems should be self-sustained objects.
Finally, the paper shows how the city-product service system needs to be designed so that it is self-sustaining, self-standing and suited for integration.
1. Product Service System Engineering

The word "service" has a great richness and diversity of meaning [6]. Many existing concepts, definitions and properties of services, given from the marketing perspective, consider a service as a production-consumption process or activity. The definition of the concept of service directly guides service engineering research. In value engineering design, the concept of service has been considered equivalent to the concept of function [7]. From the user's point of view, functions are services to be provided by the artifact fulfilling the user's needs. For instance, a manufacturer of engines is marketing energy-conversion artifacts. These are core artifacts related to core services. Overall, the concept of function is central in engineering design. It bridges the user's need with the design parameters of the artifact, which in turn should be bridged with the manufacturing and production process [8]. This means that the artifact manufactured according to our design should always deliver to the user the services it is supposed to deliver. The goal is to deliver the right services and to deliver the services right all the time. This challenging goal is inherent in the service paradigm. Delivering the services right all the time means low variation in service performance. It is also a matter of survival for an organization. Thus, competitive pressures force organizations to offer or to integrate additional services on top of the core services. The fundamental objective of the additional services is to support the core
services, i.e. artifact functions. Table 1 shows a proposed taxonomy of artifact service engineering. The design of superior integrated services, making it possible to deliver core services right all the time and to perform highly consistently despite external disturbances and uncertainties, is the goal of integrated service engineering.

Table 1: Artifact Service Engineering Taxonomy

Artefact: Product
- Service: Functions; services to be provided by the product fulfilling a user's needs. They are Product Services.
- Generated value: new value emerges from the need of customers to find the PRODUCT in the market, and the need of vendors to access a new target market.
- Ownership: the customer buys the services of the product and owns the product.
- Physical characteristics: material, bodied, tangible, corporeal.
- Example: Car.

Artefact: Integrated Product-Process
- Service: Product Services which integrate Process Services to enhance Product Services.
- Generated value: new value emerges from the need of customers to find the integrated PRODUCT-PROCESS in the market, and the need of vendors to access a new target market.
- Ownership: the customer buys the services of the product and the services of the process; the customer owns the product but not the process.
- Physical characteristics: material, bodied, tangible, corporeal artifacts integrated with immaterial, bodiless, intangible, disembodied objects.
- Example: Car and maintainability.

Artefact: Integrated Process-Product
- Service: Process Services in relation with Product Services.
- Generated value: new value emerges from the need of customers to find a PROCESS to satisfy their need for a PRODUCT in the market, therefore to find suppliers, and the need of vendors to access a new target market.
- Ownership: the customer buys the services of the process and the services of the product; the customer owns neither the product nor the process.
- Physical characteristics: immaterial, bodiless, intangible, disembodied artefacts in relationship with material, bodied, tangible, corporeal artifacts.
- Example: Car hire, electrical energy subcontracting.

Artefact: Process, Activity
- Service: services to be provided by a PROCESS fulfilling a user's needs. They are Process Services.
- Generated value: new value emerges from the need of customers to find the PROCESS in the market, therefore to find suppliers, and the need of vendors to access a new target market.
- Ownership: the customer buys the services of the process and does not own the process.
- Physical characteristics: immaterial, bodiless, intangible, disembodied artefacts.
- Example: Consulting.
2. City Service System Engineering

The city phenomenon involves the concepts of level and scale. The reading and interpretation of the city phenomenon is certainly multidisciplinary. In this paper we analyze the city phenomenon at three levels [9]: global, mixed and private. The global level, at once social, logical and strategic, is that of the political functions of the space of the city in time. Physically, the global level is the place of governance. The private level is that of the inhabiting function. Physically, the private level concerns the dwelling. It is the place of primary relations. The mixed level has a double function: the function of the city with regard to the surrounding territories and the internal function of the city. Physically, the space concerned is what remains of the city after removing the global space and the private space. It is the place of interaction between the functions of the global level and the functions of the private level. This means that the city should consider the needs of different users, which are often in conflict. It also means that the city should always deliver to its different users the services it is supposed to deliver. It should support a fairer, more inclusive society. The goal is to deliver the right services, and to deliver the services right all the time, to all users by the same artifact: the city. This is the challenging goal of the City Service Engineering paradigm. For instance, listening to and understanding the transport and mobility needs of all users, and defining the services to be provided by the artifact fulfilling different users' needs, can change the strategic focus of artifact design and development; multiple types of roads are one example. Integrated-distributed city services organize the city into city structures characterized by global, mixed and private functional levels (Figure 1).
Figure 1: City into City: each city is characterized by global, mixed, private functional levels.
3. City-Product Service System: a conceptual framework

Today there is growing interest in the smart cities of the future, with natural interaction between the body of the city and the different products using this body. But there is still a lack of a
systematic engineering approach [11] which can consider at the same time the dynamic service relationship between the city and the product. Product service system engineering approaches driven by the performance criterion [10] can deliver the right services of the product all the time. However, in the context of the city, product service engineering should consider the body of the city, and city service engineering should consider the product which uses the body of the city. Therefore, a new service system emerges: City-Product Service System engineering. The proposed approach uses mapping [11] as the principle for designing successful city-product service systems. The approach can be divided into the following domains:
Figure 2. Architecture of the City-Product Service System.
Identification of long-term patterns of needs: The daily needs of users are served by products in the global, mixed or private spaces of the city. In order to design a city-product system, the designer should understand, over the long rather than the short term: the interaction between the user and the space of the city; the interaction between the user and the product; and the interaction between the product and the space of the city. From these interactions the service relationships between the user and the city-product service system can be discovered. This implies that the designer should design the space and the city-product service system simultaneously.
City-Product System: The interaction of products with the body of the city can be conceptualized in the three levels of space: global, mixed and private. Here we are concerned with the co-design of the spaces and the products. The design should also consider the question of self-adaptive systems.
Intelligent interaction in the City-Product System: This requires new ways of interacting with computation between the city and the products which use, or may already use, its body. The agent paradigm can be applied quite straightforwardly to handle uncertain problems where global knowledge is inherently distributed and shared by a number of agents aiming to achieve a consensual solution [12] in a collaborative way [13]. Agents are autonomous and distributed entities capable of carrying out tasks either by themselves or by collaborating with other agents [14]. An agent is a computer entity,
located in an environment that it can observe, in which it can decide and act, possibly composed of other agents with which it can interact in an independent way [15]. The literal definition of interaction is "reciprocal action of two or more phenomena". In multi-agent systems, as in human organizations, actions, interactions and communications are closely linked and interdependent. Interaction is an exchange between agents and their environment. We can distinguish the following interactions: a) between a community of agents embedded in the product (called product behaviour agents) and the product; b) between agents embedded in the city (called city behaviour agents) and the city; c) between product process agents (for instance maintainability, called product service agents) and the product; d) between city process agents (city service agents) and the city; e) between interface agents and the user; and f) between the communities. These exchanges depend on the intrinsic properties of the product and of the city in which the agents are active. The perception of agents may be passive, when receiving messages or signals, or active, when it is the result of voluntary actions. Communication is an exchange between the agents themselves using a language. Communication in an agent-based system can be performed in two modes: addressed communication, in which a sender agent sends a message to one or more recipient agents, which corresponds to the model of Shannon (the basic unit in this communication is the speech act); and unaddressed communication, in which a sender agent sends a message to all agents available to the applicant in the environment, without named recipients. A diagrammatic representation of the concepts of actions, interactions and communications is given in Figure 3.
Figure 3. Diagrammatic representation of the concepts of actions, interactions and communications.
If the interactions between agents are frequently communicative, they involve cooperation and coordination of actions. Agent-oriented coordination models focus on the behaviour of agents in order to achieve a coordinated system. In a City-Product Service System these interactions can be fuzzy: uncertain, incomplete, ambiguous and random. A fuzzy interaction ιi ∈ Ι between two fuzzy agents αs and αr is defined by the following tuple (1):

ιi = < αs, αr, γc >    (1)

where αs is the fuzzy agent source of the fuzzy interaction, αr is the fuzzy agent destination of the fuzzy interaction, and γc is a fuzzy act of cooperation. A cooperative act is consistent with the 5Co model defined in [16]: it belongs to the set {Communication, Coordination, Co-production, Co-memory, Control-Process} and has a goal.
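The interaction tuple of Eq. (1) and the 5Co constraint on the cooperative act can be captured by a minimal data structure. This is a sketch only; the class name, field names and string-based agent representation are our own assumptions, not part of the cited fuzzy-agent model:

```python
from dataclasses import dataclass

# The 5Co cooperative acts referenced in the text [16].
COOPERATIVE_ACTS = {"Communication", "Coordination", "Co-production",
                    "Co-memory", "Control-Process"}

@dataclass(frozen=True)
class FuzzyInteraction:
    """A fuzzy interaction iota = <alpha_s, alpha_r, gamma_c>: a source
    agent, a destination agent, and a cooperative act from the 5Co set."""
    source: str       # fuzzy agent alpha_s, source of the interaction
    destination: str  # fuzzy agent alpha_r, destination of the interaction
    act: str          # fuzzy cooperative act gamma_c

    def __post_init__(self):
        # Enforce that the act belongs to the 5Co set.
        if self.act not in COOPERATIVE_ACTS:
            raise ValueError(f"unknown cooperative act: {self.act}")

i = FuzzyInteraction("product_behaviour_agent", "city_behaviour_agent",
                     "Coordination")
```

A full implementation would attach membership degrees to the agents and the act; here the tuple structure and the 5Co validity check are the point.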
The challenge of the proposed conceptual framework is the intelligent interaction between users, products and the spaces of the city. Discovering the long-term needs of people and the recurring services in the user-product-city interaction should allow the functional requirements of the smart city-product system to be identified.
4. A scenario of a prototype: City-Car Service System engineering

The need of different users for safe, effective and efficient movement has been identified. Priority should be given to maintaining and managing the city-car system. The maintaining and managing should deliver the right services and deliver the services right all the time. This is typically a City-Car Service System engineering scenario (Figure 4).
Figure 4. Architecture of the City-Car Service System.
The interaction between the user and the mixed space of the city (for instance, circulating space, intelligent roads), the interaction between the artefact (for instance, non-motor vehicles, motor vehicles, etc.) and the user, and the interaction between the mixed space and the artefact can be identified (Figure 5). The purpose is then to co-design the city-car system.
Figure 5. A view of a virtual simulation of the City-Car Service System.
The intelligent decision of what is acceptable to the city-car system involves striking a balance between traffic capacity, the environment, speed, safety and city-car user comfort. The city-car system should resolve conflicts and accommodate the competing demands made upon it.
5. Discussion and Conclusion

In this paper we present a new concept in the service engineering paradigm: City-Product Service System Engineering. In terms of a conceptual framework, we can see an emerging vision of the future of architectural computing in a City-Product integrated environment. The city, a living body, should not only be considered in the design of the products that use its body; it should be active in the interactions with these products. The most important claim is that the city should be designed to have decisional capabilities during these interactions. This implies that users should understand that fulfilling their needs is strongly related to fulfilling the city's "needs", i.e. the "needs" of a living body. City-Product Service System Engineering can improve sustainability and eco-efficiency. It can enable new ways of designing what we call the "City-Product-Service" that satisfies users' needs and also the city's needs. Knowledge of the City-Product Service System can enable both governments to formulate policy with respect to sustainable City-Products and companies to discover directions for innovation. It also opens a new area for engineering education. Moreover, specific implementing strategies and methods, a knowledge mining and management system, a detailed case study and an original application scenario will be discussed in further study.
Acknowledgements

This work was carried out as part of the ComplexCity project. The authors would like to thank the National Natural Science Foundation of China (No. 51205242), the Shanghai Science and Technology Innovation Action Plan (No. 13111102900) and Shanghai University for supporting the project.
References
[1] S. Campbell, Green cities, growing cities, just cities? Urban planning and the contradictions of sustainable development, Journal of the American Planning Association, 62(3) (1996), 296-312.
[2] M.E. Kahn, Green Cities, Brookings Institution Press, Washington, DC, USA, 2006.
[3] E. Glaeser, Triumph of the City, Penguin Press, New York, NY, USA, 2011.
[4] A.R. Edwards, The Sustainability Revolution: Portrait of a Paradigm Shift, New Society Publishers, Gabriola Island, Canada, 2005.
[5] T.R. Burns, The sustainability revolution: A societal paradigm shift, Sustainability, 4(6) (2012), 1118-1134.
[6] N. Johns, What is this thing called service?, European Journal of Marketing, 33(9/10) (1999), 958-973.
[7] L. Miles, Techniques of Value Analysis and Engineering, McGraw-Hill, New York, 1961.
[8] N. Suh, Principles of Design, Oxford University Press, New York, 1988.
[9] H. Lefebvre, La révolution urbaine, Gallimard, Paris, France, 1970.
[10] Z.F. Zhang, Conceptual design of product service systems driven by performance, Proceedings of the 19th International Conference on Engineering Design (ICED13), Seoul, Korea, 19-22 August 2013, 269-279.
[11] E. Ostrosi, F. Pfaender, D. Choulier, A.-J. Fougères and M.Z. Tzen, Describing the engineering modeling knowledge for complexity management in the design of complex city, Proceedings of the 19th International Conference on Engineering Design (ICED13), Seoul, Korea, 19-22 August 2013.
[12] E. Ostrosi, L. Haxhiaj and S. Fukuda, Fuzzy modelling of consensus during design conflict resolution, Research in Engineering Design, 23(1) (2012), 53-70.
[13] E. Ostrosi, A.-J. Fougères and M. Ferney, Fuzzy agents for product configuration in collaborative and distributed design process, Applied Soft Computing, 12(8) (2012), 2091-2105.
[14] J. Ferber, Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence, Addison-Wesley, London, 1999.
[15] N.R. Jennings, On agent-based software engineering, Artificial Intelligence, 117 (2000), 277-296.
[16] V. Ospina and A.-J. Fougères, Agent-based mediation system to facilitate cooperation in distributed design, WSEAS Transactions on Computers, 6(8) (2009), 937-948.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-414
Modularity: New Trends for Product Platform Strategy Support in Concurrent Engineering

Egon OSTROSI a,1, Josip STJEPANDIĆ b, Shuichi FUKUDA c, Martin KURTH d
a Laboratoire IRTES-M3M, Université de Technologie de Belfort-Montbéliard, France, b PROSTEP AG, Darmstadt, Germany, c Stanford University, USA, d RAYCE EURL, Lörrach, Germany

Abstract. Modularity intersects technical aspects with business aspects. This paper analyzes modularity from this intersection point of view. It involves design for modularity as well as management of modularity. Methods for supporting modular design are analyzed in relationship with technologies and tools for modular design. The current trend is toward the usage and integration of different technologies such as advanced CAD systems, product configurators, agent-based systems and PDM systems. The development of intelligent models and intelligent tools, as well as of intelligent modular products (i.e. the intelligent model-tool-product system), which can communicate and cooperate, demands the design of more intelligent organizations of modular design. The development of intelligent model-tool-product systems needs holistic and concurrent engineering approaches. These approaches can offer the possibility of designing intelligent self-sustainable models and intelligent self-sustainable products.

Keywords. Modularity, Modular Design, Product Variety, Mass Customisation, Product Platform, Product Configurator.
Introduction

Through the development of concepts and a body of knowledge, modularity has become an area worthy of study in its own right. The roots of modularity can be traced to human cognitive abilities [1]. The definition of product modularity is related to the criteria of component separability and component combinability in the domain of tangible assembled artifacts. Autonomy towards the external and dependence towards the internal is an important characteristic of modules. In the context of concurrent engineering, modularity combines technical aspects with business aspects, from both a qualitative and a quantitative viewpoint. Technically, products can be understood as a network of components that share technical interfaces (or connections) in order to function as a whole. Component modularity is defined based on the lack of connectivity between components. Modules are thus encapsulated groups of similar interconnected physical components which
1 Corresponding Author, E-mail: [email protected]
E. Ostrosi et al. / Modularity: New Trends for Product Platform Strategy Support
operate a flow of energy, material or information to perform a set of functional requirements. Minimization of interactions with external components and maximization of interactions between the components within the module are thus principles for finding modules. Technically, this can be expressed with three measures: (a) how components share direct interfaces with adjacent components, (b) how design interfaces may propagate to nonadjacent components in the product, and (c) how components may act as bridges among other components through their interfaces.

From the business point of view, modularization has three purposes: (a) to make complexity manageable, (b) to enable parallel work, and (c) to accommodate future uncertainty [2]. The impact of modularity on the financial and organizational structure of an industry can be described through three aspects: (1) modularity is a financial force that can change the structure of an industry; (2) the value and costs associated with constructing and exploiting a modular design can be explored; (3) the ways in which modularity shapes organizations, and the risks it poses for particular enterprises, can be examined. Modularization in the enterprise thus leads to the disaggregation of the traditional form of hierarchical governance. The enterprise is decomposed into relatively small autonomous organizational units (modules) to reduce complexity and to integrate strongly interdependent tasks, while the interdependencies between the modules remain weak. The dissemination of modular organizational forms yields a strong process orientation: the complete service-provision process of the business is split up into partial processes, which can then be handled autonomously by cross-functional teams within organizational subunits. Modularity can thus be considered a powerful concurrent engineering concept intersecting technical and business aspects, on the one hand, and qualitative and quantitative viewpoints, on the other.
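The three technical measures above can be read as simple graph properties of the component network. The following sketch computes one plausible version of each on an invented five-component product; all component names are illustrative assumptions, not taken from the paper.

```python
# Three component-modularity measures on a hypothetical component network:
# components are nodes, shared technical interfaces are (symmetric) edges.
from collections import deque

# Hypothetical product: each component maps to the components it interfaces with
interfaces = {
    "housing":    {"motor", "gearbox", "cover"},
    "motor":      {"housing", "gearbox", "controller"},
    "gearbox":    {"housing", "motor"},
    "cover":      {"housing"},
    "controller": {"motor"},
}

def degree(component):
    """(a) Number of direct interfaces with adjacent components."""
    return len(interfaces[component])

def reach(component):
    """(b) Components a design change could propagate to (breadth-first search)."""
    seen, queue = {component}, deque([component])
    while queue:
        for n in interfaces[queue.popleft()]:
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return len(seen) - 1  # exclude the component itself

def is_bridge_like(component):
    """(c) Does removing the component disconnect the remaining network?"""
    rest = {c: interfaces[c] - {component} for c in interfaces if c != component}
    start = next(iter(rest))
    seen, queue = {start}, deque([start])
    while queue:
        for n in rest[queue.popleft()]:
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return len(seen) < len(rest)

print(degree("motor"), reach("cover"), is_bridge_like("motor"))
```

Here the motor acts as a bridge: removing it isolates the controller, which is exactly the kind of dependency the third measure is meant to expose.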
This paper analyzes modularity following this intersecting concept. It covers design for modularity as well as the management of modularity (section 1). Methods for supporting modular design are analyzed (section 2) in relationship with the technologies and tools for modular design (section 3). Two industrial applications (plant design and aerospace) are also analyzed (section 4) in relationship with these technologies. The paper proposes some new trends for modularity (section 5) and concludes with respect to the future of modularity from a CE perspective (section 6).
1. Modularity: Design and Management

From our point of view, modularity is a concept which intersects design and management.

1.1. Modular Design

Modular design considers the functions, properties and interfaces of product constituents. Standard interfaces make parts interchangeable, thereby reducing the expenditure for combining different product constituents. Modular design usually involves the following processes: (1) identification of the product architecture and reusable components (building blocks) from existing products, (2) agglomeration and adaptation of singular building blocks into modules to derive a new design, and (3) assessment of product performance and cost. A modular product architecture is generated
by deriving a rule base (scheme) for the mapping of product functions to physical components. For the utilization of modules, comprehensive interfaces become crucial. Three basic types of modular architecture are defined according to the interfaces between components: slot, bus and sectional [3]. Platforms, as a special expression of modular design, are of particular relevance for industrial practice. A platform is a standardized base product with the fundamental functions and properties of the total product, on which a variety of similar products can be efficiently built using subsystems, modules and components. The platform includes the architecture and the interfaces to optional elements, which are used to differentiate the end products.

1.2. Mass Customization, Variety and Configuration

The term "mass customization" denotes a business strategy that utilizes modular design for complex offerings of products and services that are configured on demand to achieve the best fit with customer-specific needs [4]. Mass customization joins two concepts that are usually supposed to be opposites: mass production and customization, i.e. the two approaches of mass and craft (single-piece) production. Mass production manufactures low-cost products by reaping the benefits of standardization and economies of scale. Craft production, on the other hand, assumes a high level of individualization, since the products are tailored to specific customer requirements. The product structure of customized products must be thoroughly adjusted for specific customization options by adopting entirely individual components, created specifically alongside the standardized and configurable modules. Generally, a fixed and a variable area of the product structure can be identified, in which mandatory and optional spaces are foreseen for individual implementation. Product customization is usually supported by configuration systems.
Generic conceptual procedures for designing such systems are important for mass customization. These procedures involve analysis and redesign of the business processes, analysis and modeling of the company's product portfolio, selection of configuration software, programming of the software, and implementation and further development of the configuration system.

1.3. Modularity from a Management Perspective

In general, from a management perspective, modularity can be seen as a business strategy for the efficient design and structuring of complex products, procedures and services with the objective of rationalizing the enterprise. By now, modularity has become a basic, irreplaceable development methodology within the product strategy for a variety of technical products, planned on the basis of market research and corresponding forecasts. Modularity seems counter-productive when selective distinctive features are the reason to buy a product. When customers focus on elements like styling, haptics or specific colors, creative freedom is necessary. In such cases modular design is not applicable, because the investments in modular design outweigh the effort to create a user-specific product, of which the number produced is often very small. The integration of different product variants does not bring any monetary benefits if it is not organized through a holistic controlling approach [5]. This approach enables the
assessment of modular product families as well as their holistic management based on the new modularity-balanced-score-card (M-BSC). Additionally, the different perspectives from production, development, marketing and sales need to be integrated. Cost schemes of modular products can also be established by decomposing the product family into generic modules to support cost calculation.
2. Modularity: Methods for Modular Design

Modularity is achieved by partitioning information into three categories [3]: architecture, interfaces and standards. Architecture specifies the system modules and their functions. Interfaces describe the interaction of modules. Standards test a module's conformity to the design rules and compare the performance of competing modules. Common attributes of modular products are [4]: commonality of modules, combinability of modules, function binding, interface standardization, and loose coupling of components. There are various methods to support modular design, such as axiomatic design (AD), functional modeling, the design structure matrix (DSM), modular function deployment (MFD) and variant mode and effects analysis (VMEA), which can also be used in combination with an architecture development process [6]. Comparison of these methods in several application areas (product variety, product generation and product lifecycle) has shown that the generation of modules depends on both the chosen method and the weighting of different criteria.
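As a small illustration of the DSM idea (a toy example of ours, not one of the cited methods), the following sketch scores two candidate module partitions of a five-component symmetric DSM by counting interactions kept inside modules versus those crossing module boundaries:

```python
# Score a candidate module partition of a design structure matrix (DSM):
# intra-module interactions are good, inter-module interactions are bad.
import itertools

# Symmetric DSM: dsm[i][j] = 1 if components i and j interact
dsm = [
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
]

def partition_score(modules):
    """Intra-module interactions minus inter-module ones (higher is better)."""
    label = {}
    for m, members in enumerate(modules):
        for c in members:
            label[c] = m
    intra = inter = 0
    for i, j in itertools.combinations(range(len(dsm)), 2):
        if dsm[i][j]:
            if label[i] == label[j]:
                intra += 1
            else:
                inter += 1
    return intra - inter

print(partition_score([{0, 1, 2}, {3, 4}]))  # candidate A: components 0-2 vs 3-4
print(partition_score([{0, 1}, {2, 3, 4}]))  # candidate B: alternative split
```

Candidate A scores higher because it cuts only one interaction, mirroring the point that the modules obtained depend on how the criteria are weighted and searched.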
3. Modularity: Technologies and Tools for Modular Design

Currently, manifold technologies and tools are offered to foster modular design. They provide optimal functionality through mutual integration and interaction with other systems.

3.1. Product Configurator

A product configurator is a multi-functional, commercial IT tool which serves as the interface between sales and delivery in an enterprise. It supports the product configuration process so that all design and configuration rules, as expressed in a product configuration model, are guaranteed to be satisfied. A product configurator implements formalized product logic, which contains all "If-Then" configuration rules and constraints. The customer inputs his detailed requirements, guided by the user interface. A product which meets the customer's requirements in the best way is then selected. After a validity check and cost analysis, the bill of material (BOM), CAD models and, finally, the bid are generated. By force of circumstance, as its function affects multiple core areas of an enterprise, a product configurator has to be integrated deeply with the involved IT systems, such as Enterprise Resource Planning (ERP), Product Lifecycle Management (PLM) and CAx technologies. However, the complexity associated with managing and synchronizing configuration master data across different applications such as ERP, PLM and CAx is an important barrier to the deployment of integrated product configuration.
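The "If-Then" rule logic and the rule-check-then-BOM flow described above might be sketched as follows; the rules, option values and part numbers are invented for illustration and do not come from any commercial configurator:

```python
# Toy configurator: "If-Then" rules validate a selection, then a BOM is derived.
rules = [
    # (condition, requirement, message): if condition holds, requirement must too
    (lambda s: s["motor"] == "high_power", lambda s: s["cooling"] == "active",
     "high-power motor requires active cooling"),
    (lambda s: s["housing"] == "compact", lambda s: s["motor"] != "high_power",
     "compact housing cannot take the high-power motor"),
]

bom_map = {  # option value -> part numbers (illustrative)
    "high_power": ["M-200"], "standard": ["M-100"],
    "active": ["C-50", "FAN-1"], "passive": ["C-10"],
    "compact": ["H-S"], "large": ["H-L"],
}

def configure(selection):
    """Validate a selection against all rules, then derive the explicit BOM."""
    violations = [msg for cond, req, msg in rules
                  if cond(selection) and not req(selection)]
    if violations:
        return None, violations
    bom = [p for v in selection.values() for p in bom_map[v]]
    return bom, []

bom, errors = configure({"motor": "high_power", "cooling": "active", "housing": "large"})
print(bom)  # valid configuration -> part list
print(configure({"motor": "high_power", "cooling": "passive", "housing": "large"})[1])
```

A real configurator would add constraint propagation, costing and CAD/ERP hand-offs on top of this validity check, but the rule structure is the same.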
3.2. Agent-based Approach

Collaboration and fuzziness are integral parts of configurable product modeling [7]. The agent paradigm can be applied to handle complex uncertain problems where global knowledge is inherently distributed and shared by a number of agents, with the aim of achieving a consensual solution in a collaborative way. Fuzzy agents have been proposed to solve distributed fuzzy problems [8], as well as to model the processing of the fuzziness of information, knowledge and interactions in collaborative and distributed design for configurations [7, 8]. Structural problems of configuration are also formalized with the help of configuration grammars [9] and implemented in a grammar-based multi-agent platform [10]. An agent-based system called FAPIC (Fuzzy Agents for Product Integrated Configuration) has been developed for product configuration [11]. In FAPIC, each requirement, function, solution and process constraint is a fuzzy agent, with a degree of membership in each community of agents: the requirement, function, solution and constraint communities. In the first phase, FAPIC builds the different societies of fuzzy agents necessary for the configuration of a product, starting from the fuzzy set of requirements defined for a particular customer. In the second phase, the fuzzy set of consensual solution agents emerges. In the third phase, the optimal configuration emerges from the fuzzy consensual solution agents and their affinities; during this phase, the consensual solution agents, through their interactions and using their affinities, are structured into modules. The objective function to be optimized is the maximization of interactions between consensual solution agents within a module and the minimization of interactions between modules. Finally, in the fourth phase, the agents seek consensus.
Thus, consensus agents interact with the fuzzy solution agents as well as with the fuzzy configuration agents. They can inform the designer about the different coefficients established to measure the consensus that has emerged.

3.3. PDM Approach

In modern PDM systems, the overall structure of a modular product is mapped in a generalized product structure. Alternative or optional items are initially managed in the database of the PDM system in the same way as all other items, i.e. as master records with corresponding attributes. Differences from the usual item management arise only in the structuring of the product in the form of bills of material. Through the use of variants in product structures, PDM systems are able to manage order-neutral BOMs with varying and optional positions. This approach is beneficial for product development, but less so for production and associated departments, because there explicit BOMs are needed for each product variant to be produced. Furthermore, there is a risk that data management becomes very complicated, while degraded system performance has to be tolerated, especially when a large number of product variants has to be managed. To resolve these conflicts, modern PDM systems are extended by a "Variant Manager" module. In the base module, all master data (parts, structures and processes) are managed. In the case of variants, explicit ones are derived by the configuration and clone modules. Various reports can be generated by a reporting module, which also contains neutral data when needed.
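The generalized (order-neutral) product structure and its resolution into explicit variant BOMs could be sketched roughly as follows; the item numbers and variant conditions are illustrative assumptions, not a real PDM data model:

```python
# Sketch of a generalized product structure: one order-neutral BOM whose
# optional/alternative positions carry variant conditions, resolved into an
# explicit, producible BOM for each concrete variant.
generalized_bom = [
    # (item, quantity, condition on the variant; None = mandatory position)
    ("frame-001", 1, None),
    ("motor-std", 1, lambda v: v["power"] == "standard"),  # alternative position
    ("motor-hp",  1, lambda v: v["power"] == "high"),      # alternative position
    ("fan-kit",   2, lambda v: v["power"] == "high"),      # optional position
    ("cover-eu",  1, lambda v: v["market"] == "EU"),
    ("cover-us",  1, lambda v: v["market"] == "US"),
]

def explicit_bom(variant):
    """Derive the explicit BOM for one concrete variant (production view)."""
    return [(item, qty) for item, qty, cond in generalized_bom
            if cond is None or cond(variant)]

print(explicit_bom({"power": "high", "market": "EU"}))
```

Development works on the single generalized structure; production requests `explicit_bom` per variant, which is exactly the split the text attributes to the "Variant Manager" module.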
4. Modularity: Industrial Applications

4.1. Plant Design

In plant design, machines with more than 10,000 parts are designed and documented in 3D CAD and PDM systems. They are customized according to the following criteria: market and customer requirements, technical producibility, the company's own business aims, and the general possibility to create modules. Thereby, both arbitrary complexity and the reduction of the product offering have to be avoided. The right product configuration is generated by a Web-based product configurator. Additionally, a convenient product presentation for a given configuration is chosen by using knowledge-based engineering (KBE). Even for factory planning with more than 500 machines in one production hall and internet applications, complex models which show every detail cannot be used, for performance reasons. Furthermore, no company wants to share its know-how with its competitors by disclosing fully detailed CAD models over the internet. The key is the separation of complex and simplified CAD models into two different data sets, which are nevertheless managed by the same status information. The simplified model can be generated at different levels of detail. As an example, the sales department discusses the design of the machine hall with the client and configures the design in 3D on site. The characteristics of the individual machines are thus recorded in the CAD system and the simplified models are used. A true-to-scale drawing can be printed locally and an offer is generated directly. Moreover, it is possible to add a rendered 3D picture to the offer.
Figure 1. Modular design with product configurator and KBE
In the example of Figure 1, the machine designed with KBE and CAD is able to adapt to any of the 50 million possible combinations in the CAD model. According to the client's choice, the desired variant is adjusted with the product configurator. This variant is checked for duplicates at every level of the structure, automatically, in the PDM system. The parts can be produced directly on the connected sheet-bending machine. The variant selection can be conducted via an internet solution with the simplified model and, hence, can be passed directly to order management. The processing time for one job is reduced from days to hours.
The configurator can communicate bidirectionally with other departments and internal systems (CRM, ERP, PDM) through status information detached from the CAD system. This solution allows building up bottom-up relationship knowledge and setting up assembly plans from ERP object lists. Similar concepts, which combine product configuration with KBE, are used for the design of automotive components. Several examples of configurable products have been studied in the literature, such as cars, elevators, computer equipment, computer software, telephone switching systems and telecommunication networks. Automotive OEMs have their own history in the development of configuration technologies and tools. However, it is often found that neither a PLM- nor an ERP-oriented standard application is able to supply the functionality needed for a lifecycle approach to product configuration. PLM systems are product-centered tools, whereas ERP systems consist of operational business tools.

4.2. Small-scaled Modular Design of Aircraft Wings [12]

Fostering a differentiation between modularization approaches for conventional products and a new modularization approach for large-scaled products like airplanes, the term small-scaled modular design is introduced, which describes the possibility to subdivide large components. A methodical approach to determine the ideal module size for large-scaled products was developed, divided into four steps: design, technical feasibility evaluation, analysis of the economic viability, and development of a tool to determine the ideal module size. The wing of a long-range, wide-body jet airliner was selected as the reference product. To identify the best design concept, four variants were evaluated on the basis of the following criteria: manufacturability of the components, easy assembly with a high proportion of preassembly, the use of state-of-the-art technologies, the estimated weight, and comparability to the reference wing.
Finally, variant 4 was chosen, especially because it shows a high level of comparability to the reference wing and is based on currently available production technologies. The analysis of the example aircraft then showed that, even in the wingtip area, there is enough installation space to mechanically connect the trailing-edge modules with each other. In contrast to the trailing edge, the position with the least installation space for the assembly of the leading edge is not the wing tip. As the pipes of the bleed air system and the generator cables do not run from the wing tip to the root, the position with the least installation space is just behind the engine mount. In this exemplary assembly situation, several hydraulic pipes, a bleed air pipe, several parts of the electric harness and a mechanical drive shaft for the slat system have to be connected with each other. The connection is made via an open segment of the leading-edge cover, which is closed after the assembly process is finished. For a substantial assessment of a small-scaled modular design of an airplane wing, the whole aircraft lifecycle has to be considered. Through an analysis of the whole lifecycle of an aircraft and expert interviews, six groups of "modularization factors" were identified (see Figure 2, left). Further modularization factors with a lower impact on the lifecycle costs, like ergonomics, the feasibility of retrofitting, a larger product variety, and recycling, could be identified but are not considered. This focus on the most significant modularization factors facilitates the application of the developed method in the preliminary design phase. The modularization factors include contradictory design targets (Figure 2).
Figure 2. Lifecycle costs as a function of the module size
To facilitate manufacturing and logistics, a small module size and, thus, a high number of modules should be realized. Other modularization factors, however, push to reduce the number of modules. For example, fuel consumption rises with an increasing number of modules, as each interface between two modules causes additional weight and aerodynamic drag. The ideal module size was therefore determined based on the predicted lifecycle costs. By minimizing the total lifecycle costs, not only one design aspect is optimized; a global optimum is reached.
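The trade-off can be illustrated with a toy cost model (the coefficients below are invented for illustration and are not the authors' data): size-driven costs such as manufacturing and logistics fall as the number of modules grows, while interface-driven costs such as fuel and assembly rise, so the total lifecycle cost has a minimum at an intermediate module count.

```python
# Toy lifecycle-cost model over the module count for a fixed component span.
# All coefficients are invented; only the shape of the trade-off is the point.
def lifecycle_cost(n_modules, span=30.0):
    module_size = span / n_modules
    manufacturing = 40.0 * module_size  # large modules: costly tooling/handling
    logistics = 25.0 * module_size      # large modules: special transport
    fuel = 12.0 * n_modules             # each interface adds weight and drag
    assembly = 3.0 * n_modules          # more joints to assemble
    return manufacturing + logistics + fuel + assembly

# Scan candidate module counts and pick the global minimum
best = min(range(1, 31), key=lifecycle_cost)
print(best, round(lifecycle_cost(best), 1))
```

With these coefficients the total cost is U-shaped, and the optimum lands at an intermediate module count rather than at either extreme, which is the qualitative behaviour Figure 2 depicts.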
5. Modularity: Further Development

Design for product variety, design for product configuration and design for mass customization are considered to be highly collaborative and distributed processes. During these design processes, the amount of information on the products evolves. Uncertainty is thus another characteristic of design processes for product variety, product configuration and mass customization. Therefore, from a holistic point of view, there is still much to be desired in order to achieve system-wide solutions for these design processes and for platform-based product development, solutions which can account for collaboration and distribution, intensive interaction between distributed actors, heterogeneity, the dynamics and evolution of organizations, and uncertainty. Product configuration and modularity are inherently related to product architecture. Although product architecture is considered to be the governing force in lifecycle design, the issue of product architecture lacks a theoretical foundation. The design of product architecture has been considered more a know-how issue of architects than a scientific-engineering issue. In what ways a product architecture, which nowadays accounts only for the functional and physical aspects of a modular product, can integrate all other lifecycle characteristics is an important issue. The design of a modular product is considered to resolve a system-based interdependency problem. Traditionally, this issue has been seen as the system architect's task. Architects design the functional and physical architecture of a system, and their greatest concerns are still the systems' connections and interfaces. The development of modular designs often requires a redesign of the components
themselves, resulting in new components. Consequently, an architect should assess the achievable technical performance of systems based on their underlying modular or integral architecture. Modular design should be the result of a coherent and rational design process, where the options, modular or integral, are explored early in response to technical constraints and the set of requirements. Finding the relationship between sparseness, modularity, technical constraints and the set of requirements could allow such an assessment early in the design process. A further task in modularity assessment is the issue of increasing the effectiveness of modularity. Finding the relationship between the level of modularity and the effectiveness of modularity is an open-ended issue. Currently, the lifecycle of a module is confined to predefined scenarios that depend on its interfaces and its connections. A product with increased adaptability and suitability requires more design and manufacturing effort due to increased variety and complexity. How to design intelligent modules is an important issue related to the design of intelligent products. The use of open architecture in modular design is a solution that allows the adoption of new technology. The use of existing modules, as well as of independently developed modules, to design new modular systems while respecting the integrity of these modules concerns the suitability of modules for integration. The adaptability and suitability of modules for integration in a wide range of possibly larger systems is an important issue in the design and development of intelligent systems. The concept of an intelligent product should maximize the design space of architects and system designers. The change management of requirements, functions, solutions and process constraints is another question in modular design.
The development of intelligent modular products is strongly related to the development of intelligent models and intelligent tools. Thus, the development of intelligent multidisciplinary collaborative and distributed platforms can better handle the modularity and variant management problem. The multi-agent paradigm has the potential to respond to this challenge and to pave the way for the introduction of innovative technologies in a dynamic environment characterized by important changes and evolution. The development of intelligent models and intelligent tools, on the one hand, and of intelligent modular products, on the other, which can communicate and cooperate with each other, needs holistic and concurrent engineering approaches. These approaches can offer the possibility of designing self-sustainable models and self-sustainable products. To create long-lived modular systems, the foundations of the system have to reflect the corresponding relevant reality. The design of a modular product should exploit this principle thoroughly. More modularity is better from all lifecycle viewpoints. However, apart from architects, other actors, like development project team members and management in general, often have limited access to dependency-based system views. Transfer and sharing of knowledge, from the architect to the various actors and vice versa, are essential to support all lifecycle viewpoints in system-level project coordination. If collaborative design in this context is to be successful, it must be built on a shared rationale of critical design decisions. A key motivation of modularity is specialization in the design and production of modules. Modular organizations are responsible for modular products. A modular product effectively serves much larger user groups over longer periods of time than a single combined product. Thus the performance of the structure of a modular product reflects the performance of the actors' coordination in an organization.
Should a modular organization in a dynamic world reflect the modularity of the product, and should a
modular product reflect the modular organization? These are still open questions. Thus, finding the relationship between the performance of the structure of a modular product and the performance of the coordination of an organization could allow the assessment of modular product design early in the design process.
6. Conclusions

Modularity is a multidisciplinary and intersecting concept. In the context of concurrent engineering methods, modularity can be defined as the degree to which a product's architecture is composed of modules in response to a set of requirements, including lifecycle issues and the organization of collaborative and distributed design processes. The current trend in modular design technologies is to use, combine and integrate different technologies such as advanced CAD systems, product configurators, agent-based systems and PDM systems. The development of intelligent models and intelligent tools, as well as of intelligent modular products (i.e. the intelligent model-tool-product system), which can communicate and cooperate, demands the design of more intelligent organizations of the design processes for product variety, product configuration and mass customization. The development of intelligent model-tool-product systems needs holistic and concurrent engineering approaches. These approaches can offer the possibility of designing intelligent self-sustainable models and intelligent self-sustainable products.
References
[1] J.A. Fodor, The Modularity of Mind, MIT Press, Cambridge, 1983.
[2] C.Y. Baldwin, K.B. Clark, Modularity in the Design of Complex Engineering Systems. In: D. Braha, A.A. Minai, Y. Bar-Yam (eds.), Complex Engineered Systems - Science Meets Technology, Springer-Verlag, Berlin Heidelberg, 175-205, 2006.
[3] S.K. Ong, Q.L. Xu, A.Y.C. Nee, Design Reuse in Product Development Modeling, Analysis and Optimization, World Scientific Publishing, Singapore, 2008.
[4] F.T. Piller, M.M. Tseng, Handbook of Research in Mass Customization and Personalization, World Scientific Publishing, Singapore, 2010.
[5] M. Jung, Controlling modularer Produktfamilien in der Automobilindustrie, Deutscher Universitätsverlag, Wiesbaden, 2005.
[6] G. Schuh, J. Arnoscht, S. Aleksic, Systematische Gestaltung von Kommunalitäten in Produkten und Prozessen, ZFW, 107(5), (2012), 322-326.
[7] E. Ostrosi, A.-J. Fougères, M. Ferney, Fuzzy agents for product configuration in collaborative and distributed design process, Applied Soft Computing, 12(8), (2012), 2091-2105.
[8] E. Ostrosi, A.-J. Fougères, Optimization of product configuration assisted by fuzzy agents, International Journal on Interactive Design and Manufacturing, 5(1), (2011), 29-44.
[9] E. Ostrosi, L. Haxhiaj, M. Ferney, Configuration Grammars: Powerful Tools for Product Modelling in CAD Systems. In: R. Curran et al. (eds.), Collaborative Product and Service Life Cycle Management for a Sustainable World, Proceedings of the 15th ISPE International Conference on Concurrent Engineering (CE 2008), Springer-Verlag, London, 451-459, 2008.
[10] E. Ostrosi, A.-J. Fougères, M. Ferney, D. Klein, A fuzzy configuration multi-agent approach for product family modelling in conceptual design, Journal of Intelligent Manufacturing, 23(6), (2012), 2565-2586.
[11] A.-J. Fougères, E. Ostrosi, Fuzzy agent-based approach for consensual design synthesis in product configuration, Integrated Computer-Aided Engineering, 20, (2013), 259-274.
[12] L. Overmeyer, A. Bentlage, Small-Scaled Modular Design for Aircraft Wings. In: B. Denkena (ed.), New Production Technologies in Aerospace Industry, Lecture Notes in Production Engineering, Springer International Publishing, Switzerland, 2014, 55-62.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-424
Managing Fluctuating Requirements by Platforms Defined in the Interface Between Technology and Product Development

Samuel ANDRÈ a,1, Roland STOLT a, Fredrik ELGH a, Joel JOHANSSON a and Morteza POORKIANY a
a School of Engineering, Jönköping University, Sweden

Abstract. Product platforms play an important role in the efficient customization and variant forming of products in many companies. In this paper, four companies ranging from OEMs to B2B suppliers have been interviewed on how they engage in technology and product development, how they create and maintain product platforms, and how they respond to changing requirements on the platforms and on the products and product families derived from them. The objective is to find out how product platforms are used to meet the demands of efficient product customization. The companies all have identifiable product platforms and established processes for product development. However, there are differences in how they define technology development, how the platforms are created, maintained and replaced, and what the platforms contain. The introduction of new technology into the platforms, and how the platforms are used from a Lean product development perspective, have been of interest in the survey reported in this paper.

Keywords. Product platform, Product development, Technology development, Customisation, Requirements
Introduction

Companies in industry can gain a competitive edge by continuously and systematically investing in technology development in strategic areas. This is, however, a challenge for sub-suppliers due to the large differences between the various systems their products are to be integrated into, the markets the products are intended for, the use of the products, and the customers' individual preferences. This paper is part of a project aiming at understanding these challenges and, further on, at developing new methods for increasing the ability to efficiently develop and describe adaptive technology solutions in order to meet changing and conflicting requirements. By interviewing 2-4 persons, ranging from designer to manager, in four companies, the company development strategies are investigated. The companies are system suppliers within the production system, automotive and aerospace industries; they all manufacture and integrate technology into their products, while at the same time being affected by fluctuating requirements.

In past research, a large emphasis has been put on splitting technology development (TD) from product development (PD) in order to decrease risk [1]. This splitting of TD and PD is important due to the differences in prerequisites, technical maturity, time

1 Corresponding Author.
S. Andrè et al. / Managing Fluctuating Requirements
horizon, need for competence, and deliverables of the two processes. However, challenges emerge in how to manage the interface between the two [2]. Technology development aims at developing knowledge, skills and artefacts in order to enable product development [3]. Deliverables can also take the form of demonstrated feasibility [2] or a technology platform [4]. It is further described in [4] that TD is important for a company's long-term growth but is often down-prioritized and represents a small portion of the total effort of a company. TD needs both structure and flexibility due to its uncertain and often complex nature; this is described as mechanistic (formal) and organic (informal) methods in [5]. In [3], the use of a technology platform is recommended for a company in the low-volume, high-technology segment. Such a technology platform consists of design concepts and methods on an abstract level, since a component-based platform is not applicable for such a company.

Product development is defined as transforming a market opportunity to meet the needs of a customer and the strategic goals of the company. This is done through a set of coherent activities that interact with each other [6]. A strategy in PD that has been shown to be effective is the product platform approach. A product platform can be defined on different levels, ranging from fully specified and component-based to more abstract descriptions containing knowledge, people and relationships. Two platform approaches are proposed in [7]: (1) the module-based or configurable platform is supported by a well-planned product architecture in which modules can be assembled into a finished product and in that manner create a product family; (2) the scalable or parametric platform allows a certain amount of shrinking and stretching of design parameters in order to create a product family.
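To make the distinction between the two approaches concrete, they can be sketched as small data structures. This is a hypothetical illustration only; the slots, modules and parameter ranges below are invented and do not come from the paper or the studied companies.

```python
# Sketch of the two platform approaches of [7]; all names are illustrative.

# (1) Module-based / configurable platform: a product variant is an
# assembly of modules chosen from predefined slots in the architecture.
architecture = {
    "drive": ["electric_motor", "combustion_engine"],
    "chassis": ["compact", "standard", "extended"],
    "control": ["basic", "advanced"],
}

def configure(choices):
    """Create a variant by picking one allowed module per slot."""
    for slot, module in choices.items():
        assert module in architecture[slot], f"invalid module for {slot}"
    return choices

variant = configure({"drive": "electric_motor",
                     "chassis": "compact",
                     "control": "basic"})

# (2) Scalable / parametric platform: a variant is created by stretching
# or shrinking design parameters within the ranges the platform covers.
parameter_ranges = {"length_mm": (800, 1400), "power_kw": (15, 55)}

def scale(params):
    """Create a variant by choosing parameter values inside the ranges."""
    for name, value in params.items():
        lo, hi = parameter_ranges[name]
        assert lo <= value <= hi, f"{name}={value} outside platform range"
    return params

scaled_variant = scale({"length_mm": 1200, "power_kw": 40})
```

In the first case the product family is the set of valid module combinations; in the second it is the region of the parameter space the platform has been validated for.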
Four types of variant specification systems are identified in [8]: (1) engineer to order, (2) modify to order, (3) configure to order and (4) select to order. The types can be seen as levels denoting the amount of engineering needed to deliver a finished product, engineer to order being the highest level. A risk in platform architecture is the trade-off between the increased development effort related to the initial platform and the uncertainty of whether the right platform has been chosen, i.e. whether a sufficient number of product variants can be developed to earn back the extra expense [9]. Another trade-off when choosing the product platform approach is that between commonality and performance [10]. However, a proposal for a knowledge platform is given in [11], and the author also states that the question is not whether to invest in a platform, but how to design it. The knowledge platform is built from reusable knowledge gained through the different development processes.

Another view that aims at making the PD process more effective is Lean product development (LPD). The true origin of LPD is debated [6, 12], but most authors couple the theory to the way Toyota develops automobiles [13-15]. A central term within the LPD context, emphasized by the research community, is value: it is crucial to engage only in activities that add customer value; other activities are waste [14]. Other important factors related to LPD are: front-loading the PD and working with several solutions while keeping the design space as large as possible (Set-Based Concurrent Engineering); the use of a chief engineer; and continuous improvement. The knowledge value stream, which aims at capturing and reusing knowledge regarding markets, customers, technologies, products and manufacturing capabilities, is central to LPD [15].
The author further emphasizes the importance of generalizing knowledge so that it flows between projects and the knowledge in the organization increases in value.
Figure 1 shows a simplified process chart illustrating TD, PD and the interface that appears between them. The interface between the two needs to be managed in order to couple TD and PD. Nobelius [16], for example, emphasizes three dimensions related to the integration of TD and PD: strategic and operational synchronization, transfer scope and transfer management. According to [1], insufficient support for effective interface management is given in the literature. The author also states that the transfer from TD to PD must take place in a physical hand-over and that an understanding of each other's work must be developed.

Figure 1. The process from market needs and opportunities, through technology development and product development, to customised products.
The requirement specification aims at describing the product functions and constraints in the PD process, as well as giving a unified impression to all stakeholders involved in the project [17]. The dynamic nature of requirements often results in changes: new requirements are added and others are dropped. There are two main views of requirements in the design process:
• It is desirable to form a fixed list of requirements to guide the PD process [18].
• Late decision-making and late forming of the requirements (keeping a large design space) are desirable, since such a strategy leads to a steady convergence. This is one of the key elements of Set-Based Concurrent Engineering (SBCE) [14, 15].

It has been proposed that a development strategy in which TD is separated from PD reduces risk for a company, and that LPD and platform strategies can be adopted to increase efficiency. However, flexibility needs to be integrated with the proposed methods in order to handle fluctuating requirements effectively. Thus, the research question of this paper can be summarized as follows: How do the studied companies define, use and adapt their product platforms in order to enable effective technology and product development while remaining flexible to changes in requirements?
1. Case study

In this study, four companies have been interviewed in order to capture their qualitative view on how they engage in TD, PD and the interface between TD and PD, and how they handle fluctuating requirements. These four areas were chosen because they are of importance both to the companies in the study and to the project that this paper is part of. The interviews were conducted at the companies using the same questionnaire in all interviews. Two to four persons, at different levels in the organization, were interviewed at each company. The questions were asked in an open manner and the answers can be viewed as qualitative; they are summarized in the subsequent sections.

Company 1 (C1) employs approximately 150 people in total in Sweden. The investigated organization, however, employs approximately 70 people and acts in the production system and product development industry. The studied organization of Company 2 (C2) employs approximately 300 people and acts in the automotive industry as a sub-supplier; the company employs approximately 3000 people worldwide.
Approximately 600 people are active within the investigated organization of Company 3 (C3). The company acts in the automotive industry as a sub-supplier and employs 10,000 people worldwide. Company 4 (C4) acts as a sub-supplier in the aerospace industry. The studied organization employs approximately 2000 people and is part of a multinational company employing approximately 44,000 people.

1.1. Technology development

C1: The company does not separate TD from PD. Instead, C1 handles new technology by staying up to date with manufacturing and material technology and by being aware of subcontractors' new abilities. By keeping up to date on new technologies, C1 can incorporate them in new products. However, the company does not provide any document or guideline supporting TD except for a preferred-supplier document.

C2: In C2 a process for TD exists. When a TD project is started there is already a target product (with a specified deadline: the deadline of the upcoming connected PD project). C2 aims at having more general technology development without targeted products. 10% of the product development team are dedicated to technology development, are used in the different TD projects, and have frequent contact with inventors, toolmakers and industrial designers. The TD process model is a light version of the PD process, containing only three gates instead of five. TD projects are initiated by product managers considering the bypassing of patents, materials, customer needs and the market. It is important for the company that the employees continually update their competence and know what happens on the market.

C3: TD at the company can be based on individual ideas, the market, or even suggestions from production. The company promotes new ideas and provides support for patenting. Moreover, visiting exhibitions or benchmarking can give the insight needed to initiate a TD project. There is a dedicated team that works exclusively with TD. The marketing department picks up signals (e.g. from conversations with customers) about what will be required of future products and puts them into a roadmap for the future product portfolio. The roadmap for the renewal of the product portfolio and strategic initiatives is issued by the steering committee, which also makes go/kill decisions in the gate meetings.

C4: The company has two time frames (short term and medium term) for TD projects. The company applies TD to method engineering, process engineering and integration engineering. TD feeds the company's technology platform, which aims at creating a product toolbox (product platform). Usually, TD is planned prior to an order in order to investigate the benefits for the company and the ability to answer future quotations; usually, a specific product is considered when applying TD. TD can either correspond to business units' requirements or provide solutions for gaps and deficiencies in manufacturing methods. A dedicated team works exclusively with TD.

1.2. Product development

C1: The PD process of C1 is similar to that of C2, with a few differences. Market research is not part of the process. The concept phase is divided into idea generation and concept
development. No formal gates exist except for an acceptance between concept development and PD. C1 sees itself as a service company that can offer PD as well as customized machines; in this way it becomes an "all-inclusive" company that can offer both the product and a customized production system. The product platform consists of the know-how and the modules and components used for manufacturing the customized machines. The PD deliverable is a requirement specification for the machine developer and manufacturer, all within the same company. Experience is passed on through different documents (e.g. CAD). When problems or questions arise, the customer can often get in contact directly with the designers, which supplies the designers with first-hand information from the customer.

C2: To support the PD process, C2 has developed four different project types. The company has one full-featured five-gate development process consisting of the following phases: market research, concept phase, design phase, production engineering and production. The other project types are light versions of the full-featured one. C2 develops its product platforms to be as general as possible. It is very important for C2 to make sure the platforms are scalable, ensuring they can fit products in different customer segments. Platforms at C2 are realized through modules and shared components, and the company tries to keep the number of variants as low as possible. C2 has policies for capturing experience in several documents, but still some information is lost. It is therefore emphasized by the company that new engineers work with and learn from more experienced ones. A project document called "lessons learned" is used to store the main issues that have come to light during the project, such as problems, obstacles, tricks, and issues that should be avoided.

C3: The product development process at C3 is initiated by an inquiry from the customer.
The Knowledge Owner determines whether the technology exists, analyses the requirements and searches for already available technology (the platform) to solve the problem. One or several proposals in different price ranges are then given to the customer. At C3 the PD process aims at front-loading the projects. The development model is a gate model containing four phases, much like that of C2; market research and concept development, however, lie within the first phase. The PD process includes a reflective moment in which Knowledge Briefs are developed, carrying lessons learned, problems and inventions. These documents can contain descriptions ranging from platform level down to component level. The Knowledge Briefs are stored in a tree-like structure in which the Knowledge Owner can browse for information. To streamline the handling of the platform, there is a set of basic components in the platform that can be adapted to the case at hand. Efficiency of the platforms is also achieved by dedicating one person per platform, the Knowledge Owner, who answers questions from the market concerning the platform's ability to adapt to customer specifications. To be able to answer such questions, trade-off curves originating from the Knowledge Briefs are used, among other things. The trade-off curves show the relations between important design parameters and support the Knowledge Owner in determining the validity of the platform. A finished platform design is controlled by the top assembly with underlying article drawings in the PDM environment.
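As an illustration of how a stored trade-off curve might be consulted, the sketch below interpolates a curve to judge whether a customer specification falls inside the platform's validated range. The curve points, parameter names and limits are invented for illustration; the paper does not describe C3's actual data or tooling.

```python
# Hypothetical sketch of a Knowledge Owner checking a customer request
# against a trade-off curve stored with a Knowledge Brief.
# All numbers and parameter names are invented.

# Sampled (flow, max achievable efficiency) points from earlier projects.
tradeoff_curve = [(10, 0.90), (20, 0.86), (30, 0.80), (40, 0.71)]

def max_efficiency(flow):
    """Linearly interpolate the curve at the requested flow."""
    points = sorted(tradeoff_curve)
    if not points[0][0] <= flow <= points[-1][0]:
        return None  # outside the validated range of the platform
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= flow <= x1:
            return y0 + (y1 - y0) * (flow - x0) / (x1 - x0)

def platform_supports(flow, required_efficiency):
    """True if the platform's curve covers the customer specification."""
    limit = max_efficiency(flow)
    return limit is not None and required_efficiency <= limit
```

A request inside the curve's range and below the achievable limit can be answered from the platform; anything outside the validated range would instead trigger new development.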
C4: The company uses a gated process for PD with two main parts: plan the product (develop new technology or product) and carry out the plan. The phases are much like those of C2 and are well described in the company's organizational system. The company has three different platforms: a technology platform, a product platform and a manufacturing platform, all developed to a large extent. C4 views a platform as an explanation model that contains a set of rules and standardized methods used in the development process. The platform is most useful for the company when something done earlier is reused as a starting point in the next project. Unlike the platforms of the car industry (C2 and C3), it has no versions, is not made up of physical components, and is continuously evolving. The company uses a wiki that is editable by anyone in the company, which facilitates the spreading and storage of knowledge and experience.

1.3. The interface between TD and PD

C1: The company has no documented strategy for technology development except for keeping up to date with sub-suppliers, and therefore no interface is identified. However, when the company accepts projects involving new technology, it is integrated in the PD project directly, without differentiating the two activities. When a sub-supplier's technology is used in a product, the technology has been tested; the sub-suppliers are presumed to have fully validated their technology.

C2: One aim of a TD project is to clarify when a technology is ready to be implemented. The result is embodied in a prototype, process and tooling information, and cost estimations. The advantages and disadvantages of the new technology are handed over to PD as well. Other deliverables from TD can take the form of CAD models, trade-off curves, guidelines, "lessons learned", product structures and text documents. It is not always clear to the employees who is the technology developer and who is the product developer.
C3: A new technology shall be verified by design verification (DV) in order to reach a sufficient readiness level and to be implemented in customer projects without major risk. In addition, FMEA is used to declare the risks. The decisions are made in gate meetings. The criteria for DV can sometimes be hard to decide, since the technology, and sometimes the area of application, is new. The result of TD is described by drawings and test results and is embodied in a prototype. When TD is done, the result is handed over to PD together with an open-issue list on how best to succeed with the PD project.

C4: A new technology shall be verified by reaching the 6th level on the NASA-developed TRL scale [19]. The company has well-specified criteria that must be met for a technology to be approved. Examples of deliverables from TD are property models, trade-offs, guidelines, processes, lessons learned and instructions. A company-developed wiki and collaborations with universities are also used for spreading and preserving knowledge.

1.4. Managing fluctuating and conflicting requirements

C1: During PD, C1 emphasizes that it views the requirement specification as a living document that tends to change. The company works with sets of solutions in parallel and does not see vague requirement specifications as a problem, but rather as an advantage. As
in Set-Based Concurrent Engineering, the company likes to keep a large design space in order to learn about the product and then obtain a steady convergence of solutions. Sometimes the requirements can be conflicting, e.g. weight versus stiffness or number of units versus price. C1 uses a system in which the requirements are divided into three groups: desirable, must, and no requirement. Balancing the demands is a challenge. The project manager is in charge of the requirements and is responsible for questioning them at an early stage. Sometimes a requirement is planned to change in the future; e.g. a desirable requirement can be planned to become a must later in the project. Moreover, if there are no requirements, C1 will specify what to deliver. When the product is handed over to the machine manufacturer, within the company, the requirements are frozen; until then, the machine developer has had the possibility to affect the requirements at an early stage.

C2: The most common sources of requirement changes are, in falling order, the market, customers, law and the environment. C2 keeps track of the standards and how they will change in the near future, and has employees who are part of standards committees in order to influence them. Subjective requirements are turned into objective and quantifiable requirements during TD, prior to PD. To solve conflicting requirements, C2 uses group meetings to set priorities together with project managers, but most often the project team handles the situation right away; it is very often a matter of cost, quality and time. Sometimes the project is restarted, and sometimes a parallel TD project is started with resources isolated to solve the problem.

C3: The most common changes of requirements apply to performance, cost and environmental requirements. It is more common that requirements are added than removed; in such cases a negotiation about the cost takes place with the customer.
To avoid conflicting requirements, C3 uses a requirement verification sheet (RVS) at the beginning of its product development projects. The RVS is intended to identify conflicting requirements at an early stage and to judge the validity of the requirements. Changes of requirements occur most commonly at the beginning of the projects, and when test data becomes available later in the projects. Some requirements are subjective, involving feelings, and will be set late, sometimes not until there is a prototype.

C4: Early in a development project, C4 receives a list of requirements from the customer that are not ranked. C4 sees this as a problem and emphasizes that balanced requirements are important to avoid causing loops in the development process. The company is very aware that changes in requirements will occur, but as long as a robust design is created this is not a problem. C4 tries to foresee which requirements will change by looking at requirement history from earlier projects, and adapts the designs to handle changes well. One source of changing requirements is the customer's customer, often affecting temperatures, loads, adjacent systems and interfaces. It is also common that requirements are added.
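A minimal sketch of what an RVS-style early conflict check could look like is given below. The requirement names and the declared tensions are invented for illustration; the paper does not describe the actual content of C3's RVS.

```python
# Illustrative sketch of a requirement verification sheet (RVS) style
# early check; requirement names and conflict rules are invented.

# Stated requirements: name -> (direction, limit).
requirements = {"mass_kg": ("max", 12.0),
                "stiffness_Nmm": ("min", 900.0),
                "unit_cost_eur": ("max", 40.0)}

# Pairs known from earlier projects to pull in opposite directions and
# therefore needing an early trade-off decision with the customer.
known_tensions = {frozenset({"mass_kg", "stiffness_Nmm"}),
                  frozenset({"stiffness_Nmm", "unit_cost_eur"})}

def flag_conflicts(reqs):
    """Return the pairs of stated requirements that are in tension."""
    names = set(reqs)
    return sorted(tuple(sorted(pair)) for pair in known_tensions
                  if pair <= names)

conflicts = flag_conflicts(requirements)
# Each flagged pair is then judged for validity early in the project.
```

The point of such a check, as in the RVS, is to surface the conflicting pairs at the start of the project rather than when test data arrives.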
2. Discussion

This study has investigated how four companies engage in TD, PD and the interface between TD and PD, and how they handle fluctuating requirements. The results show that there are clear differences in the company strategies.
Firstly, the definition of technology differs between the companies. When a TD project is started in C2, there is already an end date for the PD project, i.e. the TD project has a fixed time frame. C3, on the other hand, has a vaguer time frame and no customer for the TD project; instead the company tries to foresee trends and customer needs in order to develop products for the future. C1 makes no distinction between TD and PD: if new technology is needed, it is integrated in the PD process or bought from a sub-supplier. C4 has the most elaborate view of technology among the four companies; a technology platform is the base of the TD and consists of methods for developing products. A similarity among the companies is that there is always a product in mind when starting a TD project.

It can be seen from the case study that the companies integrate TD into PD to different extents. C4, for example, completely separates TD from PD, resulting in low risk for the PD projects. This is an important strategy for C4, since the number of system interfaces and the product complexity are high; it might, however, increase the technology's time to market. In this respect C4 is the most aligned with the theories of [1, 16]. Opposite to C4 is C1, which integrates TD in PD. This increases the risk in PD projects, but decreases the time to market for the developed technology. An explanation for why C1 can integrate TD in PD is its lower number of interacting systems and interfaces.

The companies all have well-defined and similar processes for PD. A common strategy in the companies is to use a platform approach to some extent. To make use of a platform approach, the platform can be defined on different abstraction levels. In Figure 2 the companies are roughly placed according to the abstraction level of their product platforms. The types of variant specification, according to [8], are also coupled to the abstraction level of the platform. It should be noted that each company
Figure 2. Categorisation of platform abstraction level in the studied companies.
has characteristics that can be coupled to several of the variant specification types; in Figure 2, however, they are placed at the level that resembles them the most. Company factors that increase the need for a higher platform abstraction level are: small production volumes, high product customization and high product complexity. It follows that the higher the abstraction level of the platform, the more engineering needs to be done to deliver a product. Moreover, the higher the abstraction level of the platform, the more the companies tend to describe it as an explanation model rather than as something made up of physical components. According to [9, 10] there are trade-offs to be considered when engaging in platform development. Two of the companies, C2 and C3, have engaged in a component-based platform; these trade-offs were, however, not mentioned by the studied companies. Instead, platform development is considered a tacit strategy. C4 has a platform description on a more abstract level, as described in the literature by [11].
The interface between TD and PD differs between the companies, mostly due to the difference in integration between TD and PD. The difference in integration results in different deliverables from TD: e.g. the TD at C2 and C3 aims at realising the new technology in a physical prototype and documents, whereas C4 aims at describing new methods and instructions. C1 integrates TD in PD, and therefore no interface is identified.

All companies in the study face the challenge of changing requirements. However, the view of requirements differs between the companies: e.g. C4 uses the term requirement freeze and strives to fix the requirements early, while C1 has a more dynamic view of the requirements and sees a large design space as an advantage. That way of working, including developing sets of solutions, enables a steady convergence and builds knowledge of the design, according to the company. C4 is the only one of the four companies that mentions having a strategy of using a robust design in order to withstand changes in requirements.

For the project that this paper is part of, Figure 3 is proposed as a model for the interface between TD and PD. The figure describes how TD is separated from PD and how the deliverables from TD build up a product platform in which the technology is effectively described and can be adapted to fit the different PD projects. PD can then use the platform for creating customised variants. The platform also contains the maintained knowledge that is continuously developed in the company's projects. The efficiency of the proposed platform is coupled to its ability to adapt, how well it can handle fluctuating requirements, and how effectively variants can be created from it.
Figure 3. Proposed model of the interface between TD and PD.
3. Conclusions and future work

This paper has investigated how four companies define, use and adapt their product platforms in order to enable effective technology and product development while remaining flexible to fluctuating requirements. The results show that the definitions of the platforms differ between the companies and that the differences can be related to the type of products the companies develop. A need for strategies on how to remain flexible
when engaging in product and technology development is identified; meeting it would likely improve the companies' ability to manage fluctuating requirements. This paper is part of an externally financed project running over three years. Future work will consist of a continued and deeper investigation of the companies in the study. Cases will be identified and generalized methods will be developed to support TD and PD while remaining flexible to fluctuating requirements.
4. Acknowledgments
We would like to express our gratitude to the Swedish Agency for Innovation Systems (VINNOVA), which has partly financed this research. The authors would also like to thank the companies involved in the study for dedicating invaluable time and resources.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-434
Intelligent Engineering Design of Complex City: a Co-evolution Model
Bin HE a, Egon OSTROSI b,1, Fabien PFAENDER d,e, Alain-Jérôme FOUGÈRES b,c, Denis CHOULIER b, Bruno BACHIMONT d and MonZen TZEN e
a School of Mechatronic Engineering and Automation, Shanghai University, Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, 200072, P.R. China; e-mail: [email protected]
b Laboratoire IRTES-M3M, Université de Technologie de Belfort-Montbéliard, France; e-mail: {egon.ostrosi, alain-jerome.fougeres, denis.choulier}@utbm.fr
c ESTA, School of Business and Engineering, Belfort, France
d Université de Technologie de Compiègne, France; e-mail: [email protected]
e UTSEUS, ComplexCity Laboratory, Shanghai University, P.R. China; e-mail: [email protected]; [email protected]
Abstract. Engineering design and planning of the city is a trans-disciplinary complex problem. City can be considered an evolving living body in complex interaction with its citizens, its artificial physical environment, and its natural physical environment. City is a multi-physic, multi-agent, multi-stratified and multi-scale object. City is also an intersecting object: it shares some of the properties of two kinds of objects, empirical objects as well as theoretical objects. Based on these properties, this paper proposes a model of intelligent engineering design of a complex city. The problem space is called the Citizen Problem Space. The Citizen Problem Space is bridged to the Functional Problem Space, which is formulated in response to the citizen problem. The functional problem is also reformulated in response to intermediate solutions, and co-evolves with the design solutions. Design solutions belong to the Solution Space. The Process Space also interacts with the Solution Space. Thus the design solutions can only be dynamically consensual: satisfying both the functional problem and the process problem. This model depicts an evolutionary system composed of four evolutionary spaces. The evolution of each space is guided by the most recent population in the other space. It is a co-evolution model. It provides the basis for a multi-agent computational model of engineering design of the city, bridged to citizen big-data extraction. It produces a multi-scale city with a holonic structure. Keywords. City design, multi-scale design, holonic design, fuzzy agents.
Introduction
The continuing unidirectional transformation of society into a mass urban-industrial society shows that this process is irreversible [1]. The rapid urbanization of the world and the increasing complexity of urban systems urge the study of cities through transversal engineering-science approaches. Climate change, urban sprawl, densification of
1 Corresponding Author.
B. He et al. / Intelligent Engineering Design of Complex City: A Co-Evolution Model
present cities, as well as the inevitably conflicting demands for energy saving, better mobility, better information and environmental protection, have transformed the city into a socio-physical engineering problem of major concern [1, 2]. Today, the city remains a rather unknown and poorly predictable object. The city offers exceptional scope for original engineering research and applications, with critical impact on the well-being of its citizens. Developing better strategies for the engineering design of complex cities can be considered a global imperative [3]. The goal is to propose new models and tools either to lead to a better design and planning of cities or to give a better predictive approach for a better decision-making process. Coping with the complex city in its conceptualization and its modeling, and building up new theoretical and practical fundamentals, models and tools, also implies bringing to light the engineering modeling knowledge in the design of the complex city, generally hidden in the frameworks that can give birth to these fundamentals, methods and tools [4]. Designers can engage the current challenge of complex city design by also applying the "long-term ability of a system to reproduce" criterion [1]. City design solutions should be multi-scale and consensual to be accepted [5]. The lack of data flow-driven methods for the design and planning of cities, integrating and resolving multidimensional conflicts for finding creative engineering design solutions, can be identified as a research engineering problem. Multi-scale design can presumably stimulate greater environmental awareness. It is believed that citizens as well as policymakers of small-scale, self-sufficient [1], self-reliant [6], self-integrative [7] regions will be aware of the causes and effects of their environmental actions. This paper first analyzes the key properties of the city and then proposes an evolution model for the engineering design of a multi-scale and holonic city.
1. Key properties of the city
City is a complex object. Engineering design and planning of the city is also a complex problem. This complexity results from the conjugation of a huge amount of heterogeneous data interacting with each other. Three key properties of the city can be drawn:
Property 1: City can be considered an evolving living body in complex interaction with its citizens, its artificial physical environment, and its natural physical environment. Indeed, the city is a living complex geometrical and topological object, limited by its artificial physical environment and its natural physical environment. It is lived in by its citizens and therefore is constrained by sociological, societal, political and economical parameters.
Property 2: City is a multi-physic, multi-agent, multi-stratified and multi-scale object. Indeed, the city is a multi-physic object because it is characterized by multiple fluxes of energy, materials, information and human activity. The city is a multi-agent object because it is formed by the populations (of citizens) and the different actors of the urban scene. The city is a multi-stratified object from the historical, institutional and cultural context of its long evolution. The city is a multi-scale object because it is a whole that is part of a vaster whole, and which at the same time contains the elements of which it
is composed and which provide its structural and functional meaning, interconnected by networks and characterized by social, cultural, political and economic aspects.
Property 3: City is also an intersecting object. It shares some of the properties of two kinds of objects: empirical objects as well as theoretical objects. Indeed, we now face objects that share the properties of these two kinds of objects, empirical and theoretical. They are called intersecting objects. These objects are empirical entities: they are not the result of a conceptual construction. They are intersecting insofar as they are also a meeting point for several scientific disciplines, and so can be studied through the theoretical objects proposed by those disciplines. Intersecting objects have a double transcendence: an empirical transcendence and an epistemic one. According to the first transcendence, intersecting objects exceed any experience we may have of them. While they exist, it is not possible to circumscribe them through empirical or scientific experience. According to the epistemic transcendence, intersecting objects exceed any conceptual characterization we may propose of them, which cannot even be used as a reasonable approximation to study them. Intersecting objects are then objects of special interest since they require relying on many disciplines while they exceed the sum of them. The city satisfies these characteristics. Therefore, it is an intersecting engineering design object.
2. Intelligent designing of complex cities
Engineering design can be analyzed, synthesized and validated through dynamic engineering models. In the past, there were many attempts to draw up models to handle the complexity management of the design process in systematic steps [8-13]. The goal of design engineering is the conversion of a perceived need or a technical problem into information from which a product can be built with sufficient quality and at reasonable cost to meet the need or to overcome the problem [14-15]. The design process usually starts with the identification of a need, proceeds through a sequence of activities to seek a solution to the problem, and ends with a detailed description of the product or the technical system. Both functional modeling and structural modeling have been studied [16-23]. The development of theories of innovation [24] and their application to different design problems has been investigated [25]. Our claim is that engineering design theories and practices can be applied to the engineering design and planning of the complex city [3]. However, given the properties of the city, there is a need to create and develop approaches for the intelligent design of complex cities driven by dynamic distributed data and knowledge. The question of robust or consensual key data extraction is primordial. Simulation and evolution of the dynamic solutions are also important.
2.1. Citizen Problem Space and Citizen Models
The definition of the design problem in terms of what citizens like is an important part of the design process. The rapid densification and growth of the city, and the change and evolution of the different actors of the urban scene, mean that the problem definition is never final. Therefore, the movement of the problem in time depends on the movement of the populations (citizens) and the different actors of the urban scene. The problem space defined from the citizens is called the Citizen Problem Space.
The experimental model of the citizen domain is the first model developed in this application. Understanding "what citizens want", its progress and advancement, can be achieved by observing the dynamics of interactions between different citizens in real time. Within the engineering design of a complex city like Shanghai, large quantities of information and knowledge are widely distributed across citizens. The interaction between the citizens and different objects, considering the tasks of citizens and the roles of these objects, is also analysed (Figure 1). The problem then consists in analyzing the real interactions between citizens. During interactions, citizens communicate their thoughts verbally or in writing. Experience shows that the majority of real problems appear through verbalizations and writings. Therefore, verbal and written communication offers us a direct path to the citizen requirements. For that reason, we consider a message as being a form of representation of a problem. It can be characterized by a set of syntactic elements with semantics specific to a domain of knowledge. The category of these elements is called analysis entities [26].
Figure 1. Real interactions between citizens.
The computational model of the citizen domain and the mathematical model of the citizen domain are used to study both citizens and automated organization as computational entities. Interactions have been viewed as inherently computational. Every interaction can be filtered by means of analysis entities. Clustering the analysis entities can be considered a principle for citizen-problem discovery. Clustering makes it possible to identify families of analysis entities (Figure 1). Mathematically, the search for interaction families and analysis-entity families is a search for simultaneous partitions of the two sets, the set of filtered interactions and the set of analysis entities, in correspondence or quasi-correspondence, class of partition to class of partition. Hence, this correspondence makes it possible to characterize an interaction family by the corresponding
analysis-entity family, that is, by the corresponding citizen problem. If the families of state-problems are mutually exclusive, it is clear that the state-problems are completely independent. In practice, depending on the particular nature of the citizen problems, some or all of the state-problems turn out to be either mutually independent or not. This means that interactions create "state-problems within a state-problem". The conceptual model of the citizen domain is developed from the interpretation of the results of the computational model of the citizen domain. Computational analysis permits a better understanding of the interactions between citizens, the nature of problems, and the emergent patterns and structures of organization during interactions. The simulation of the flow of citizens from the main hospitals of Shanghai shows the zones that citizens can reach within 30 minutes travelling by subway or on foot (Figure 2). The design for the configuration of the city should consider the optimal distribution of the hospitals.
Figure 2. Distribution of hospitals in Shanghai.
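The correspondence between interaction families and analysis-entity families can be illustrated with a small sketch (the lexicon, messages and function names below are invented for illustration and are not the method of [26]): each interaction is filtered into its set of analysis entities, and interactions with the same entity profile are grouped into one family, so that each interaction family is characterized by its corresponding entity family.

```python
# Toy sketch of the interaction/entity correspondence (hypothetical data).

def filter_interaction(message, entity_lexicon):
    """Keep only the analysis entities that appear in a message."""
    words = set(message.lower().split())
    return frozenset(e for e in entity_lexicon if e in words)

def family_partition(interactions, entity_lexicon):
    """Partition interactions so that members of a family share the same
    entity profile; each family maps to its analysis-entity family."""
    families = {}
    for msg in interactions:
        profile = filter_interaction(msg, entity_lexicon)
        families.setdefault(profile, []).append(msg)
    return families

entity_lexicon = {"mobility", "noise", "hospital", "access"}
interactions = [
    "the hospital access by subway is slow",
    "hospital access should improve",
    "traffic noise affects mobility at night",
]
for entities, msgs in family_partition(interactions, entity_lexicon).items():
    print(sorted(entities), "->", len(msgs), "interaction(s)")
```

Here two families emerge: a {hospital, access} family with two interactions and a {mobility, noise} family with one, mirroring the class-of-partition to class-of-partition correspondence described above.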
2.2. Agent-based computational models
The Citizen Problem Space is bridged to the Functional Problem Space. The functional problem is formulated in response to the citizen problem. The functional problem is also reformulated in response to intermediate solutions, and co-evolves with the design solution. The design solution belongs to the Solution Space. The Process Space also interacts with the Solution Space. The solution problem is formulated in response to the process problem (for instance, the maintenance of the city). Thus the design solution can only be consensual: satisfying both the functional problem and the process problem. This model of design depicts an evolutionary system composed of four evolutionary spaces. The evolution of each space is guided by the most recent
population in the other space. It is a co-evolution. It provides the basis for a multi-agent computational model of engineering design of the city. In the field of Distributed Artificial Intelligence, agent-based systems are characterized by the distribution of the knowledge and information needed to solve a problem over a set of interacting agents able to cooperate and reach a global goal. An agent-based system is a society of autonomous agents cooperating to achieve a global objective through interaction, communication, or transaction. Fuzzy agents emerged as a tool to model problems with uncertain behavior in engineering design [27-29]. Fuzzy agents are also used in fuzzy reasoning situations, where agents interpret a situation, solve a problem, or decide with fuzzy knowledge.
Figure 3. Functional architecture of fuzzy agents (the Observe-Decide-Act cycle of a fuzzy agent, with its knowledge base of states and rules, a message/event manager, and an action manager).
A fuzzy agent-based system $\tilde{M}_{\alpha}$ is defined by (1):

$\tilde{M}_{\alpha} = \langle \tilde{A}, \tilde{I}, \tilde{R}, \tilde{O} \rangle$   (1)

where $\tilde{A}$ is the fuzzy set of fuzzy agents, $\tilde{I}$ is the fuzzy set of interactions defined in $\tilde{M}_{\alpha}$, $\tilde{R}$ is the fuzzy set of roles that the fuzzy agents of $\tilde{A}$ can play, and $\tilde{O}$ is the fuzzy set of organizations defined for the fuzzy agents of $\tilde{A}$. Many agent structures are inspired by the perception-decision-action cycle (Figure 3). Thus, a fuzzy agent $\tilde{\alpha}_{i}$ is described as follows (2):

$\tilde{\alpha}_{i} = \langle \Phi_{\tilde{P}}(\tilde{\alpha}_{i}), \Phi_{\tilde{\Delta}}(\tilde{\alpha}_{i}), \Phi_{\tilde{\Gamma}}(\tilde{\alpha}_{i}), \tilde{K}_{\tilde{\alpha}_{i}} \rangle$   (2)

where:
$\Phi_{\tilde{P}}(\tilde{\alpha}_{i}): \tilde{\Sigma} \times \tilde{\Sigma}_{\tilde{\alpha}_{i}} \to \tilde{P}_{\tilde{\alpha}_{i}}$ is the function of perceptions of $\tilde{\alpha}_{i}$: $\tilde{\Sigma}$ is the fuzzy set of states of $\tilde{M}_{\alpha}$; $\tilde{\Sigma}_{\tilde{\alpha}_{i}} \subseteq \tilde{\Sigma}$ is the fuzzy set of states of $\tilde{M}_{\alpha}$ that $\tilde{\alpha}_{i}$ knows; $\tilde{P}$ is the fuzzy set of perceptions in $\tilde{M}_{\alpha}$, and $\tilde{P}_{\tilde{\alpha}_{i}} \subseteq \tilde{P}$ is the fuzzy set of perceptions of $\tilde{\alpha}_{i}$;
$\Phi_{\tilde{\Delta}}(\tilde{\alpha}_{i}): \tilde{P}_{\tilde{\alpha}_{i}} \times \tilde{\Sigma}_{\tilde{\alpha}_{i}} \to \tilde{\Delta}_{\tilde{\alpha}_{i}}$ is the function of decisions of $\tilde{\alpha}_{i}$: $\tilde{\Delta}$ is the fuzzy set of fuzzy decisions defined in $\tilde{M}_{\alpha}$, and $\tilde{\Delta}_{\tilde{\alpha}_{i}} \subseteq \tilde{\Delta}$ is the fuzzy set of decisions of $\tilde{\alpha}_{i}$;
$\Phi_{\tilde{\Gamma}}(\tilde{\alpha}_{i}): \tilde{\Delta}_{\tilde{\alpha}_{i}} \times \tilde{\Sigma} \to \tilde{\Gamma}_{\tilde{\alpha}_{i}}$ is the function of actions of $\tilde{\alpha}_{i}$: $\tilde{\Gamma}$ is the fuzzy set of actions which can be performed in $\tilde{M}_{\alpha}$, and $\tilde{\Gamma}_{\tilde{\alpha}_{i}} \subseteq \tilde{\Gamma}$ is the fuzzy set of actions that $\tilde{\alpha}_{i}$ can process;
$\tilde{K}_{\tilde{\alpha}_{i}} \subseteq \tilde{K}$, with $\tilde{K}_{\tilde{\alpha}_{i}} = \tilde{\Delta}_{\tilde{\alpha}_{i}} \cup \tilde{\Sigma}_{\tilde{\alpha}_{i}}$, is the fuzzy set of fuzzy knowledge of $\tilde{\alpha}_{i}$: $\tilde{K}$ is the fuzzy set of fuzzy knowledge defined in $\tilde{M}_{\alpha}$. The knowledge of $\tilde{\alpha}_{i}$ is composed of decision rules, values on the domain, acquaintances, and dynamic knowledge such as observed events or internal states.
Figure 4. Agent-based architecture of the F-ACCID platform: requirement, function, solution and constraint agents bridging the citizen, functional, physical and process domains (constraints include, for instance, {mobility, ecology, sustainability, ...}).
The proposed platform to assist the configuration problem is called F-ACCID (Fuzzy Agents for Complex City Intelligent Design). This platform (Figure 4) is composed of three levels:
1) Communication and cooperation level. It implements the communication and cooperation services for the fuzzy agents of F-ACCID (interface agents and design agents).
2) Design fuzzy agents' level. It is divided into four fuzzy communities of agents: (1) the fuzzy community of citizen requirement agents that interact with the fuzzy community of function agents, in response to requests from the citizen requirement agents; (2) the fuzzy community of function agents that interact with each other and with
fuzzy communities of citizen requirement agents and solution agents; (3) the fuzzy community of solution agents that may interact with each other and with the fuzzy communities of function agents and city constraint agents; and (4) the fuzzy community of city constraint agents that interact with the fuzzy community of solution agents, in response to requests from the city domain agents.
3) Interface level. It supports the connection of the different human actors of configuration (experts and customers) by means of software micro-tools (μ-tools [63]); these μ-tools communicate the actors' orders to the associated city domain agents, which may transmit them to the fuzzy communities of citizen requirement agents and fuzzy city constraint agents.
2.3. Multi-scale and holonic city
From the second property, the city is a multi-levelled hierarchy of semi-autonomous sub-wholes, branching into sub-wholes of a lower order, and so on, forming a holon. Each sub-whole within the hierarchic tree has two properties: it is a whole relative to its own constituent parts, and at the same time a part of the larger whole above it in the hierarchy. We define a city cell as a holon entity. All city functions must be performed and completed in their entirety as independently as possible. One of the essential requirements of the city cell is the capacity for independent action. Each sub-cell of a city cell must itself be a city cell. This means creating "city cells within a city cell".
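The "city cell within a city cell" structure can be sketched as a recursive composite, in which each cell is a whole relative to its own parts and a part of the cell above it (the cell and function names below are hypothetical):

```python
# Recursive composite sketch of the holonic "city cell within a city cell":
# each cell performs its own functions and aggregates those of its sub-cells.

class CityCell:
    def __init__(self, name, functions=(), sub_cells=()):
        self.name = name
        self.functions = set(functions)   # functions completed inside the cell
        self.sub_cells = list(sub_cells)

    def all_functions(self):
        """A cell is a whole: it covers its own functions plus its parts'."""
        covered = set(self.functions)
        for cell in self.sub_cells:
            covered |= cell.all_functions()
        return covered

district = CityCell("district", {"mobility"}, [
    CityCell("neighbourhood", {"ecology"}, [CityCell("block", {"housing"})]),
])
print(sorted(district.all_functions()))  # ['ecology', 'housing', 'mobility']
```

Each level of the tree is itself a valid `CityCell`, so the same query works on the district, the neighbourhood or the block, which is exactly the two-sided whole/part property of a holon.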
Figure 5. City Cell-within-City Cell.
The city cells largely structure themselves and together serve the whole system of the city. Reference can be made to the principle of regulating city functions and/or city solutions that can control the behavior of independent city cells. Thus, the internal relationships within a city are closer and more intensive than the relations with the outside. City cells are also self-similar. Here, the city functions are grouped so that they are performed and completed in their entirety as independently as possible. The relationship between city solutions at different levels of the "city cells within a city cell" structure (Figure 5) allows finding regulating functions or solutions that can control the behavior of autonomous city cells. Then, the city cells can largely organize themselves into consensual configurations. Figure 6 shows the fuzzy solution agents of the F-ACCID platform during the search for consensual configurations. From the dynamic
overlapping of the local behaviour of fuzzy solution agents in different configurations, the consensual configurations emerge. The consensual configurations are shown in the diagonal blocks.
Figure 6. Fuzzy solution agents of the F-ACCID platform during the search for consensual configurations.
3. Conclusion
City is a complex object. City can be considered an evolving living body in complex interaction with its citizens, its artificial physical environment, and its natural physical environment. City is a multi-physic, multi-agent, multi-stratified and multi-scale object. City is also an intersecting object: it shares some of the properties of two kinds of objects, empirical objects as well as theoretical objects. Based on these properties, this paper proposes an approach for the intelligent design of complex cities driven by dynamic distributed data and knowledge. The definition of the design problem in terms of what citizens like is an important part of the design process. This model of design depicts an evolutionary system composed of four evolutionary bridged spaces: the Citizen Problem Space, the Functional Problem Space, the Solution Space and the Process Space. The rapid growth of the city and the change and evolution of the different actors of the urban scene mean that the city design problem is never final. The citizen requirements are defined from citizen interactions. The functional problem of the Functional Problem Space is formulated in response to the citizen problem defined in the Citizen Problem Space. The functional problem is also reformulated in response to intermediate solutions, and co-evolves with the design solution. The Process Space also interacts with the Solution Space. The solution problem is formulated in response to the process problem. Thus the design solution can only be consensual: satisfying both the functional problem and the process problem. The evolution of each space is guided by the most recent population in the other space. It is a co-evolution design. The city can also open new ways for trans-disciplinary research whose finality could be the elaboration of a new discipline with its own reality (the city) and its own fundamental concepts.
References
[1] S. Campbell, Green cities, growing cities, just cities? Urban planning and the contradictions of sustainable development, Journal of the American Planning Association, 62(3) (1996), 296-312.
[2] B. Hillier, The city as a socio-technical system: a spatial reformulation in the light of the levels problem and the parallel problem, in: S.M. Arisona, G. Aschwanden, J. Halatsch, P. Wonka (eds.), Digital Urban Modeling and Simulation, Springer, Berlin Heidelberg, 2012, 24-48.
[3] C. Derix, A. Gamlesæter, P. Miranda, L. Helme, K. Kropf, Simulation heuristics for urban design, in: S.M. Arisona, G. Aschwanden, J. Halatsch, P. Wonka (eds.), Digital Urban Modeling and Simulation, Springer, Berlin Heidelberg, 2012, 159-180.
[4] E. Ostrosi, F. Pfaender, D. Choulier, A.-J. Fougères and M.Z. Tzen, Describing the engineering modeling knowledge for complexity management in the design of complex city, Proceedings of the 19th International Conference on Engineering Design (ICED13), Seoul, Korea, 19-22 August 2013.
[5] E. Ostrosi, L. Haxhiaj and S. Fukuda, Fuzzy modelling of consensus during design conflict resolution, Research in Engineering Design, 23(1) (2012), 53-70.
[6] D.C. Korten, Sustainable development, World Policy Journal, 9(1) (1991), 157-190.
[7] J.M. Berry and K.E. Portney, Sustainability and interest group participation in city politics, Sustainability, 5(5) (2013), 2077-2097.
[8] M.J. French, Engineering Design: The Conceptual Stage, Heinemann Educational Books, London, 1971.
[9] G. Pahl and W. Beitz, Engineering Design: A Systematic Approach, The Design Council, London, 1984.
[10] V. Hubka and W.E. Eder, Theory of Technical Systems, Springer-Verlag, New York.
[11] N. Suh, Principles of Design, Oxford University Press, New York, 1988.
[12] K. Ulrich and S.D. Eppinger, Product Design and Development, McGraw-Hill, New York, 1995.
[13] K.N. Otto and K.L. Wood, Product Design: Techniques in Reverse Engineering and New Product Development, Prentice Hall, Upper Saddle River, NJ, 2001.
[14] D.G. Ullman, The Mechanical Design Process, McGraw-Hill, New York, 1992.
[15] C. Hales, Managing Engineering Design, Longman Scientific & Technical, Harlow, England, 1993.
[16] F.J. Erens and K. Verhulst, Architectures of product families, Computers in Industry, 33(2-3) (1997), 165-178.
[17] R.V. Welch and J.R. Dixon, Representing function, behavior and structure during conceptual design, in: D.L. Taylor and L.A. Stauffer (eds.), Design Theory and Methodology, American Society of Mechanical Engineers, 42 (1992), 11-18.
[18] J.S. Gero and U. Kannengiesser, The situated function-behaviour-structure framework, Design Studies, 25(4) (2002), 373-392.
[19] A. Chakrabarti and L. Blessing, Representing functionality in design, Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 10 (1996), 251-253.
[20] B. Chandrasekaran, Representing function: relating functional representation and functional modeling research streams, Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 19 (2005), 65-74.
[21] B. Chandrasekaran and J.R. Josephson, Function in device representation, Engineering with Computers, 16(3-4) (2000), 162-177.
[22] M.S. Erden, H. Komoto, T.J. VanBeek, V. D'Amelio, E. Echavarria and T. Tomiyama, A review of function modeling: approaches and applications, Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 22 (2008), 147-169.
[23] J. Hirtz, R.B. Stone, D.A. McAdams, S. Szykman and K.L. Wood, A functional basis for engineering design: reconciling and evolving previous efforts, Research in Engineering Design, 13 (2002), 65-82.
[24] G. Altshuller, And Suddenly the Inventor Appeared, Technical Innovation Center, Inc., Massachusetts, 2002.
[25] D. Choulier, Découvrir et appliquer les outils de TRIZ, Chantier – UTBM, Belfort, France, 2011.
[26] R. Movahed-Khah, E. Ostrosi and O. Garro, Analysis of interaction dynamics in collaborative and distributed design process, Computers in Industry, 61(2) (2010), 2-14.
[27] E. Ostrosi and A.-J. Fougères, Optimization of product configuration assisted by fuzzy agents, International Journal on Interactive Design and Manufacturing, 5(1) (2011), 29-44.
[28] E. Ostrosi, A.-J. Fougères, M. Ferney and D. Klein, A fuzzy configuration multi-agent approach for product family modelling in conceptual design, Journal of Intelligent Manufacturing, 23(6) (2012), 2565-2586.
[29] A.-J. Fougères and E. Ostrosi, Fuzzy agent-based approach for consensual design synthesis in product configuration, Integrated Computer-Aided Engineering, 20 (2013), 259-274.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-444
A Closed-loop Based Framework for Design Requirement Management
Zhinan ZHANG a,b,1, Xuemeng LI b, Zelin LIU c
a School of Mechanical Engineering, Shanghai Jiao Tong University, China
b Department of Management Engineering, Technical University of Denmark, Denmark
c Shanghai Aircraft Design and Research Institute, Commercial Aircraft Corporation of China Ltd., China
Abstract. Requirement management plays a crucial role in determining a successful engineering design project. The focus of current requirement research is on the development of requirement elicitation, analysis and formalization methods and tools. Moreover, existing requirement research often pays attention to the fuzzy front end of the product design process. In fact, there are further needs for requirement knowledge at each stage of a product lifecycle, and requirements also have their own lifecycle. However, research in the field of engineering design lacks a framework to support requirement management from the product lifecycle, requirement lifecycle and requirement management lifecycle views. This paper highlights the importance of requirement lifecycle management and aims at closing the requirement information loop in the product lifecycle. It then addresses requirement management in the engineering design field, focusing on the dynamic and incomplete nature of requirements. Finally, a closed-loop based framework is proposed for requirement management in engineering design. Keywords. Requirement management, requirement lifecycle, closed-loop, engineering design
Introduction

Requirement management (RM) plays a key role in determining the success of product development [1]. It is a wide research field involving marketing research, business studies, psychology, human factors, social factors, software engineering and artifact design [2]. An analysis of the literature shows that requirement research has received ample attention in software engineering and information systems [3, 4]. Although the importance of requirement management in engineering design has been widely acknowledged in the design community [5-9], engineering design requirements remain, as Darlington and Culley [10] pointed out, a relatively poorly researched area in design studies. A search for requirement research in prestigious design journals, namely Design Studies (6), Research in Engineering Design (3), Journal of Engineering Design (10), Artificial Intelligence for Engineering Design, Analysis and Manufacturing (3), Computer-Aided Design (5), Journal of Mechanical Design (0), Journal of Computing and Information 1
1 Corresponding Author: Zhinan Zhang, School of Mechanical Engineering, SJTU, 800 Dongchuan Road, Shanghai; E-mail: [email protected]
Z. Zhang et al. / A Closed-Loop Based Framework for Design Requirement Management
445
Science and Engineering (4), Concurrent Engineering: Research and Applications (13), and Advanced Engineering Informatics (4), shows that only 48 papers have been published since 2000 (the search was performed in March 2014 with ISI Web of Knowledge). Research on design requirements in these journals has produced approaches and tools for requirement elicitation, requirement analysis and requirement management, and has advanced the understanding of requirement characteristics. From the viewpoints of the requirement lifecycle and the requirement management lifecycle, however, closed-loop approaches and tools for requirement management in engineering design are, to our knowledge, still lacking. This paper therefore develops a closed-loop based framework for better design requirement management.
1. Literature review

Owing to its significance, considerable research on requirement management has been carried out in the engineering design community (e.g., [5, 7-9]). For reasons of space, only several typical related works are briefly reviewed here; more complete reviews of requirements in engineering design and product design can be found in the review papers by Darlington and Culley [10] and by Jiao and Chen [2]. Brace and Cheutet [11] defined a framework to support systematic requirements analysis and, on this basis, presented a model-driven approach for deriving requirements. Zenun and Geilson [12] proposed a framework for completeness in requirements engineering and applied it to an aircraft maintenance scenario. Robertson and Robertson [13] gave plenty of advice on techniques for eliciting requirements. Wang and Zeng [14] proposed a generic process for eliciting product requirements by asking questions based on linguistic analysis, together with a software prototype that supports the process. Cascini et al. [15] explored how to situate needs and requirements in Gero's FBS framework [16, 17]. Xu et al. [18] developed an analytical Kano model to quantitatively analyze and classify customer needs. Darlington and Culley [19] used an empirical study to investigate and model the factors influencing design requirements. Liu et al. [20] proposed a scenario-based approach for the management of design requirements. Baxter et al. [21] developed a framework for the integration of design knowledge reuse and requirements management, which enables requirements management to be applied as a dynamic process. Gershenson and Stauffer [22] developed a taxonomy for classifying corporate requirements, which come from internal sources such as marketing, finance, manufacturing and service, and reflect the corporation's internal needs in product development.
Rounds and Cooper [23] presented taxonomies of environmental issues and applied them to the development of product design requirements. Integrating the requirement classifications of Ullman [9] and Salonen et al. [24], requirements can be classified into: 1) functional performance requirements; 2) human factor requirements; 3) physical requirements; 4) reliability and feasibility related requirements; 5) lifecycle concern requirements; 6) resource concern requirements; 7) manufacturing and assembly requirements; 8) installation and use related requirements; 9) service related requirements; and 10) economical and technical related requirements.
In fact, the above ten classes of requirements can be regrouped into three categories from a product lifecycle view: 1) BOL (Beginning of Life, including planning, design and production) related requirements; 2) MOL (Middle of Life, including use, service and maintenance) related requirements; and 3) EOL (End of Life, including reuse, material reclamation and disposal) related requirements. By analogy with the lifecycle of a product or of a piece of knowledge, a piece of requirement also has its own lifecycle. A lifecycle-oriented framework is therefore needed for the understanding and management of design requirements.
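The regrouping above can be sketched as a simple lookup. Note that the text only states that the ten classes fall into the three lifecycle categories; the per-class assignment shown here is an illustrative assumption, as are the function and variable names.

```python
# Ten requirement classes (after Ullman [9] and Salonen et al. [24]) grouped
# into the three product-lifecycle categories named in the text. The exact
# class-to-category assignment is an illustrative assumption.
LIFECYCLE_CATEGORY = {
    "functional performance": "BOL",      # planning, design, production
    "physical": "BOL",
    "manufacturing and assembly": "BOL",
    "economical and technical": "BOL",
    "human factors": "MOL",               # use, service, maintenance
    "installation and use": "MOL",
    "service": "MOL",
    "reliability and feasibility": "MOL",
    "lifecycle concern": "EOL",           # reuse, reclamation, disposal
    "resource concern": "EOL",
}

def requirements_for_stage(stage):
    """Return the requirement classes relevant to one lifecycle category."""
    return sorted(c for c, s in LIFECYCLE_CATEGORY.items() if s == stage)
```

A query such as `requirements_for_stage("MOL")` then lists the requirement classes an actor in the middle-of-life phase would manage.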
2. Understanding design requirement

A better understanding of design requirements is a precondition for developing a feasible requirement management framework. From a research perspective, most current design requirement research focuses on design-object related requirements; work that considers both the design object and the design process is still lacking, and a requirement lifecycle oriented management framework rarely exists. To contribute to design requirement management research, it is of primary importance to explore what design is, what a design requirement is, and how design requirements connect with design and design knowledge themselves.

2.1. Understanding design

What is design? Many prestigious scholars in the design community have discussed its definition (e.g. [6, 7, 16]). As pioneering studies put it, "to design is to pull together something new or to arrange existing things in a new way to satisfy a recognized need of society" [7]. Hence, the word design can be either a noun or a verb. The verb form of design is designing (i.e., the design process), which means "to conceive or to form a plan for"; the purpose of designing is to transform design requirements into a solution for production, BOL and EOL. The noun form of design refers to the design object, i.e., "the form, parts, or details of something according to a plan". Both design and designing can be ontologically illustrated as in Figure 1, following Gero et al. [17] and Ullman [9].
Figure 1. Design and design process
As shown in Figure 1, the design object concerns what the requirement (R), solution or structure (S), and behavior (B) should be; the design process concerns how designers carry out the activities of synthesis, analysis and evaluation to transform requirements into a desired solution. The design process can be viewed as a series of decision nodes (see Figure 2). The decisions made at each node are based on existing design knowledge and newly gained design knowledge; design
knowledge is classified by Hubka and Eder [25] into design object knowledge and design process knowledge. Design requirements are also a kind of design knowledge. In this regard, design requirements should likewise consist of two parts, i.e., design-object related requirements and design-process related requirements.
Figure 2. Elements of a decision node
Today's engineering design, especially the design of complex products with long service lives (e.g., aircraft, continuous casting machines, ships), must take both the design stage and the after-design stage into account (see Figure 3). In this circumstance, design arranges existing things, or pulls together something new in a new way, to satisfy a recognized need of society across the whole product lifecycle, which requires more information flow and knowledge flow between different user groups and projects [26]. Today's design requirement management is therefore more complex than what existing works have explored.

Figure 3. Product lifecycle and closing the information loop
2.2. Understanding design requirements

In the engineering design field, the characteristics of design requirements are closely related to the nature of design and design knowledge themselves. Based on the above understanding of design, design requirements can be classified into (see Figure 4): 1) design-object related requirements, and 2) design-process related requirements. This classification parallels Hubka and Eder's classification of design knowledge [25]. Figure 5 gives an ontological framework representing both the design object and the design process, together with the design knowledge required for each design activity.

• Design object related requirements

It is widely recognized that customer value, product quality, cost and so on are all factors that effective requirement management can improve. These factors are all design-object related requirements. In the front end of product development, effort is needed to understand customer requirements better; this is the starting point of a commercially successful product, termed "do the right thing" (see the right part of Figure 5). Detailed descriptions of object related requirements can be found
in engineering design texts (e.g., [5, 7, 8]). As Dieter and Schmidt [7] note, in much new product design about 40 percent of the parts are existing parts reused without modification, about 40 percent are existing parts used with minor modification, and only 20 percent are new. Most information and knowledge are thus reused from previous designs; in variant design, for example, up to 70% of information is reused from a previous solution [27]. Therefore, to support the reuse of design knowledge efficiently and effectively, design-object related requirements should be represented as a component of design object knowledge. This is another guarantee of a successful product, improving the probability of "doing the thing right" (see the left part of Figure 5).
Figure 4. Design requirements
• Design process related requirements

As shown in Figure 2, the designer is the key element of a decision node. Designers carry out design activities to complete design tasks. A design activity can be characterized as a goal-oriented, constrained, decision-making, exploratory and learning activity that operates within a context depending on the designer's perception of that context [16]. As Figure 2 also shows, to complete a design activity a designer has process-related requirements for input information, know-how knowledge and context knowledge. Effective process requirement management can improve the efficiency and effectiveness of design work, so the management of process-related requirements deserves sufficient attention.
Figure 5. Design requirement (after Zhang et al. 2013)
Design requirements have many characteristics; this paper focuses on the following two.

• Incomplete nature of design requirements

Design knowledge is incomplete [7, 28]. By analogy with the nature of design knowledge, design requirements are also incomplete. The requirement development
process is also an evolution of requirement knowledge: the state of requirement knowledge changes from an initial high degree of incompleteness to a final, considerably complete state. Note that absolutely complete requirement knowledge does not exist; this is analogous to the satisficing solutions described by Herbert Simon. As shown in Figure 5, each concept in the figure (i.e. P, E, F and C) can be viewed as a requirement knowledge set for product planning. At the initial design stage, the set of requirement knowledge is incomplete, and new requirement knowledge must be acquired to improve its degree of completeness. For example, a complete requirement knowledge set about a customer need and about the environment can be represented as P = (PG, PA, PO) and E = (ES, EN, EL, EO), respectively. PG stands for the goal, PA describes the actions sequentially taken by a customer to achieve that goal, and PO describes the desired artifact as stated by the customer. ES represents constraints from the social aspect (e.g. laws, regulations and culture). EN describes constraints from nature (e.g. humidity and temperature). EL refers to constraints from product lifecycle operations (e.g. transportation and maintenance). EO describes the environmental entities that are indispensable for an artifact to work properly (e.g. gasoline is necessary for operating gasoline engines, charging infrastructure for e-cars). At the beginning of a design, designers may only have the requirement sets P' = (PG, ?, ?) and E' = (?, ?, ?, ?); to reach the complete requirement knowledge sets P and E, they have to acquire the missing requirement knowledge sets P* = (?, PA, PO) and E* = (ES, EN, EL, EO).
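The requirement knowledge sets P and E, with their "?" placeholders and their growing degree of completeness, can be modelled directly. This is a minimal sketch under stated assumptions: the class, method and slot-content strings are our own, not the paper's; only the P/E slot names and initial states follow the text.

```python
from dataclasses import dataclass

UNKNOWN = None  # the "?" placeholder used in the text


@dataclass
class RequirementSet:
    """A requirement knowledge set whose slots may still be unknown."""
    slots: dict  # slot name -> acquired requirement knowledge, or UNKNOWN

    def completeness(self) -> float:
        """Degree of completeness: the fraction of slots already filled."""
        filled = sum(1 for v in self.slots.values() if v is not UNKNOWN)
        return filled / len(self.slots)

    def acquire(self, name: str, knowledge: str) -> None:
        """Fill one slot with newly acquired requirement knowledge."""
        self.slots[name] = knowledge


# Initial state from the text: P' = (PG, ?, ?) and E' = (?, ?, ?, ?)
P = RequirementSet({"PG": "customer goal", "PA": UNKNOWN, "PO": UNKNOWN})
E = RequirementSet({"ES": UNKNOWN, "EN": UNKNOWN, "EL": UNKNOWN, "EO": UNKNOWN})

# Acquiring P* = (PA, PO) drives P toward a considerably complete state.
P.acquire("PA", "action sequence taken by the customer")
P.acquire("PO", "desired artifact described by the customer")
```

In this reading, the requirement development process is exactly a sequence of `acquire` steps that raises `completeness()` toward, but never guarantees, 1.0.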
• Dynamic nature of design requirements

Because design requirement knowledge is incomplete, its state is dynamic. The dynamics of requirement knowledge mean delivering the right requirement at the right time to the right participant, and comprise: 1) the evolution of design requirement knowledge from an incomplete state to a complete one; 2) changes in the form of design requirement knowledge (i.e. from informal to formal, from tacit to explicit); and 3) the transfer of design requirement knowledge from one decision node to another. The dynamic nature of design requirement knowledge describes the state of requirement knowledge within a specific scenario. As Dieter and Schmidt [7] have explored, a good design should consider 1) achievement of performance requirements, 2) life-cycle issues, and 3) social and regulatory issues; each of these considerations may be a scenario that drives the evolution of design requirement knowledge from an initial incomplete state to a desired state. The environment refers to the inner and outer factors that influence a design. It should be remembered that requirement knowledge is a dynamic, constantly changing resource. A novel requirement management framework is therefore necessary to guide designers in understanding changes in requirement knowledge and in reusing design knowledge in the design process.
3. Framework development

The proposed framework aims at managing design requirements (both design-object and design-process requirements) while taking the nature of design requirements into consideration. Because design has social, technical and cognitive characteristics, attention to social and cognitive issues is also of prominent importance to requirement management, but that is beyond the scope of this paper. The focus here is on the technical characteristics of design, i.e., the development of a technical framework for RM.

3.1. The closed-loop requirement management concept

According to the affordance-based relational design theory [29], customer, actor and product should provide affordable requirement information to each other. A closed-loop [30] requirement management therefore allows the actors who play roles during the lifecycle of a product (designers, managers, production, service, maintenance and recycling engineers, etc.) to elicit, analyze, transfer, manage and utilize requirement information at any stage of the lifecycle (design, production, MOL and EOL), without limitations of time and place. Figure 6 shows the closed-loop requirement management (RM) concept, which requires an RM system that closes the information loop both in the product lifecycle and in the actor network (customer, product, designer).
Figure 6. The closed-loop requirement management concept
As shown in Figure 6, the main elements of the closed-loop RM concept are:

• an RM system to support the capture, modeling, retrieval, reuse and update of requirement information;
• knowledge flows (including data and information) to support the decision making of each actor (including customers);
• scenarios that convey the understanding of requirements to the different actors.

According to the above concept of closed-loop RM, its main functions are:

• closing the information loop in the product lifecycle, aiming at better transfer, sharing, application and reuse of requirements;
• closing the requirement lifecycle, aiming at improving the degree of completeness of requirement knowledge and the performance of RM.
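A minimal sketch of the closed-loop idea: a shared store in which any actor can record requirement information at any lifecycle stage, and any other actor can retrieve it later, so the information loop does not end at design hand-off. The class, method names and example records are illustrative assumptions, not taken from the paper.

```python
from collections import defaultdict

LIFECYCLE_STAGES = ["design", "production", "MOL", "EOL"]


class ClosedLoopRM:
    """Sketch of a closed-loop RM store: requirement information recorded
    at any stage stays retrievable by every actor in the network."""

    def __init__(self):
        self._records = defaultdict(list)  # stage -> [(actor, info), ...]

    def record(self, stage, actor, info):
        """An actor records requirement information at a lifecycle stage."""
        assert stage in LIFECYCLE_STAGES
        self._records[stage].append((actor, info))

    def retrieve(self, stage):
        """Requirement information captured at a stage, available for
        reuse by any other actor, at any time and place."""
        return list(self._records[stage])


rm = ClosedLoopRM()
rm.record("MOL", "maintenance engineer", "bearing wears early in humid sites")
rm.record("design", "designer", "seal specification updated from MOL feedback")
```

The second `record` call illustrates the loop closing: knowledge gained in MOL flows back into a design-stage requirement.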
3.2. Closed-loop requirement management framework

Figure 7 illustrates the RM framework. Its basic units are requirement elicitation (RE), requirement analysis (RA), requirement transfer (RT), requirement application (RAP) and the requirement management system (RMS). The extended FBS framework (see Figures 1 and 5) can be employed to discuss these units.
Figure 7. The closed-loop RM framework
• Requirement elicitation

The RE process can be represented in a clearly defined structure as:

[Data Source] → [R Capture Methods] → [R Data]

The function of RE is to capture raw data from several data sources, e.g., the customer voice, social voice, technical voice, economic voice, designer voice, product data, etc. These data sources can be categorized into customer, society, corporate, product, and supporting-facility related requirement data. The methods and tools for capturing requirement data (e.g., interviews, observation, brainstorming, questionnaires, benchmarking) have received sufficient attention in the literature and are not discussed here. The focus of RE is on managing the output of the RE process and constructing scenarios for a shared understanding of the requirement data.

• Requirement analysis

The RA process can be represented as:

[R Data] → [R Methods] → [R Information]

The Kano model [18] and the QFD method [31] are widely used to translate requirement data into requirement information. The outputs of RA are functional requirements, constraint requirements and the actors' knowledge requirements.

• Requirement transfer

The RT process can be represented as:

[R Information] → [R Transfer Methods] → [Formal or Structured R]

The function of RT is to provide actors with an easier way to retrieve and understand the content of requirements. A scenario-based approach [20] can be employed to represent requirements formally and thus assist RT.
• Requirement application

The RAP process can be represented as:

[R Information] → [R Interpret Methods] → [R Knowledge]

The function of RAP is to provide actors with requirement knowledge that drives effective decision making. The SBF and 5W1H frameworks (i.e., who, at where and when, why and how, does what) can be employed to assist requirement management for application.

• Requirement management system

An RM system provides affordable functions to manage the elicitation, analysis, transfer and application processes, and the information and knowledge created in them. All requirement-related activities in a corporation should be recorded in the RM system.
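Read as a whole, the four process units form one data-refinement pipeline from raw data sources to requirement knowledge, orchestrated by the RMS. The sketch below mirrors only the staged structure named above; the function bodies are placeholders, and the tagged-tuple representation is our own assumption.

```python
# [Data Source] -RE-> [R Data] -RA-> [R Information]
#               -RT-> [Formal R] -RAP-> [R Knowledge]

def elicit(data_sources):
    """RE: capture raw requirement data (interviews, observation, ...)."""
    return [("raw", s) for s in data_sources]

def analyze(r_data):
    """RA: translate requirement data into requirement information
    (e.g. with the Kano model or QFD)."""
    return [("info", payload) for _, payload in r_data]

def transfer(r_information):
    """RT: formalize requirement information so that actors can retrieve
    and understand it (e.g. via a scenario-based representation)."""
    return [("formal", payload) for _, payload in r_information]

def interpret(formal_r):
    """RAP: interpret formal requirements into the knowledge that drives
    decision making (e.g. with a 5W1H framing)."""
    return [("knowledge", payload) for _, payload in formal_r]

def rm_pipeline(data_sources):
    """Chain RE -> RA -> RT -> RAP, as the RMS would orchestrate them."""
    return interpret(transfer(analyze(elicit(data_sources))))
```

For example, `rm_pipeline(["customer voice"])` carries one raw data source through all four refinement stages.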
4. Conclusions and future work

The objectives of this study were to highlight the importance of requirement lifecycle management and of closing the requirement information loop in the product lifecycle. We addressed requirement management in the engineering design field with a focus on the dynamic and incomplete nature of requirements. These two natures show the need for a lifecycle-oriented approach to requirement management, i.e., one that covers the requirement and requirement management lifecycles and embeds requirements into the product lifecycle. By analogy with design knowledge, two types of requirements are recognized: design-object related requirements and design-process related requirements. The concept of closed-loop requirement management is then proposed, emphasizing consumer, product, actor and context as key elements. Furthermore, a closed-loop based framework was proposed to provide affordable functions for actors to manage requirement lifecycle information. Further work is needed to better understand design requirements; the requirement information loops should also be identified in industry through in-depth case studies, and the benefits and weaknesses of the proposed framework should be assessed and addressed.
Acknowledgement The authors acknowledge the support for this research from the National Science Foundation of China (51205247), the Research Project of State Key Laboratory of Mechanical System and Vibration (MSVZD201401) and the Europe-China High Value Engineering Networks (EC-HVEN) project.
References
[1] A. McKay, A. de Pennington and J. Baxter, Requirements management: A representation scheme for product specifications. Computer-Aided Design, 33(7) (2001), 511-520.
[2] J. Jiao and C.H. Chen, Customer requirement management in product development: A review of research issues. Concurrent Engineering: Research and Applications, 14(3) (2006), 173-185.
[3] V. Sinha, B. Sengupta and S. Chandra, Enabling collaboration in distributed requirements management. IEEE Software, 23(5) (2006), 52-61.
[4] M. Lang and J. Duggan, A tool to support collaborative software requirements management. Requirements Engineering, 6 (2001), 161-172.
[5] G. Pahl and W. Beitz, Engineering design: A systematic approach (3rd ed.), K. Wallace and L. Blessing (trans. and eds.). Springer, Berlin, 2007.
[6] N.P. Suh, Axiomatic design: Advances and applications. Oxford University Press, New York, 2001.
[7] G.E. Dieter and L.C. Schmidt, Engineering design (5th ed.). McGraw-Hill, New York, 2012.
[8] K.T. Ulrich and S.D. Eppinger, Product design and development (5th ed.). McGraw-Hill, New York, 2011.
[9] D.G. Ullman, The mechanical design process (4th ed.). McGraw-Hill, New York, 2009.
[10] M.J. Darlington and S.J. Culley, Current research in the engineering design requirement. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 216 (2002), 375-388.
[11] W. Brace and V. Cheutet, A framework to support requirements analysis in engineering design. Journal of Engineering Design, 23(12) (2012), 873-901.
[12] M.M.N. Zenun and L. Geilson, A framework for completeness in requirements engineering: An application in aircraft maintenance scenario. In: C. Bil, J. Mo and J. Stjepandić (eds.), 20th ISPE International Conference on Concurrent Engineering: Proceedings. IOS Press, Amsterdam, 2013, 569-578.
[13] S. Robertson and J. Robertson, Mastering the requirements process: Getting requirements right (3rd ed.). Addison-Wesley Professional, 2012.
[14] M. Wang and Y. Zeng, Asking the right questions to elicit product requirements. International Journal of Computer Integrated Manufacturing, 22(4) (2009), 283-298.
[15] G. Cascini, G. Fantoni and F. Montagna, Situating needs and requirements in the FBS framework. Design Studies, 34(5) (2013), 636-662.
[16] J.S. Gero, Design prototypes: A knowledge representation schema for design. AI Magazine, 11(4) (1990), 26-36.
[17] J.S. Gero and U. Kannengiesser, A function-behaviour-structure ontology of processes. Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 21(4) (2007), 379-391.
[18] Q.L. Xu, J. Jiao, X. Yang and M. Helander, An analytical Kano model for customer need analysis. Design Studies, 30(1) (2009), 87-110.
[19] M.J. Darlington and S.J. Culley, A model of factors influencing the design requirement. Design Studies, 25 (2004), 329-350.
[20] Z.L. Liu, Z.N. Zhang and Y. Chen, A scenario-based approach for requirements management in engineering design. Concurrent Engineering: Research and Applications, 20(2) (2012), 99-109.
[21] D. Baxter, J. Gao, K. Case et al., A framework to integrate design knowledge reuse and requirements management in engineering design. Robotics and Computer-Integrated Manufacturing, 24 (2008), 585-593.
[22] J.K. Gershenson and L.A. Stauffer, A taxonomy for design requirements from corporate customers. Research in Engineering Design, 11 (1999), 103-115.
[23] K.S. Rounds and J.S. Cooper, Development of product design requirements using taxonomies of environmental issues. Research in Engineering Design, 13 (2002), 94-108.
[24] M. Salonen, C.T. Hansen and M. Perttula, Evolution of property predictability during conceptual design. International Conference on Engineering Design (ICED 05), Melbourne, August 15-18, 2005.
[25] V. Hubka and W.E. Eder, Design science: Introduction to needs, scope and organization of engineering design knowledge. Springer, Berlin, 1996.
[26] G. Vianello and S. Ahmed, Transfer of knowledge from the service phase: A case study from the oil industry. Research in Engineering Design, 23(2) (2012), 125-139.
[27] D.V. Khadilkar and L.A. Stauffer, An experimental evaluation of design information reuse during conceptual design. Journal of Engineering Design, 7(4) (1996), 331-339.
[28] Z.N. Zhang, Z.L. Liu, Y. Chen and Y.B. Xie, Knowledge flow in engineering design: An ontological framework. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 227(4) (2013), 222-232.
[29] J.R.A. Maier and G.M. Fadel, Affordance based design: A relational theory for design. Research in Engineering Design, 20(1) (2009), 13-27.
[30] D. Kiritsis, Closed-loop PLM for intelligent products in the era of the Internet of things. Computer-Aided Design, 43(5) (2011), 479-501.
[31] Y. Akao, Quality function deployment: Integrating customer requirements into product design (1st ed.). Productivity Press, Cambridge, MA, 2004.
Part VII Concurrent Engineering Education
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-457
457
Tools and Methods Stimulate Virtual Team Co-operation at Concurrent Engineering

Jože TAVČAR a,1 and Jožef DUHOVNIK a
a University of Ljubljana, Faculty of Mechanical Engineering, Ljubljana, Slovenia

Abstract. Tools and methods are an important part of the product development process, and their advantages grow with product novelty and complexity. Moderating a product development team meeting is a challenge even face to face, and it must be conducted with additional care in spatially distributed, virtual teams. Workshops and meetings built around different tools and methods are the core of concurrent engineering: the methods give content and structure to communication inside a product development team, stimulate co-operation, and in the end lead to better products. The optimal share of team work depends on product novelty and complexity; according to one study [1], about twenty percent of activities need to be conducted as some form of teamwork. There needs to be creative dialog, conflict of ideas and decision making, with team members stimulating each other's creativity. This paper presents which tools and methods are needed in the different phases of the product development process. A generalised model of a virtual team workshop with maturity assessment criteria is presented next. Using the example of FMEA (Failure Modes and Effects Analysis) and customer-complaint workshops, the specific requirements for effective execution of team workshops in a distributed environment are demonstrated. Keywords. Tools and methods, virtual team, maturity level, concurrent engineering, creative dialog, moderation
Introduction

New product development begins with an idea (Figure 1). In the first invention loop, the idea is transformed into a development goal, i.e., the first version of the product specification [2]. In the planning phase, which includes system engineering and research, the new product idea is transformed into a project definition. Product design is finally conducted inside the golden loop. The product's innovation level and complexity determine how dominant each design phase is [3]: for original and innovative design the research loop is very important, while the designing process inside the golden loop dominates at the variation and adaptive design levels. CE principles are applied through several iterations, or loops [2], [10]. When a conceptual design is created inside the golden loop, it is checked several times against all the criteria; if the assessment point decides that the product design does not yet satisfy market requests and the specification, the design iteration is repeated. At each product development process (PDP) phase, specific knowledge and recognised working methods need to be used; in Figure 1, the specific knowledge is presented on the left side and the working methods on the right. If necessary, additional
Corresponding Author.
J. Tavčar and J. Duhovnik / Tools and Methods Stimulate Virtual Team Co-Operation
team or individual meetings can be planned. It is vital to ensure coordination and co-operation between the development process, production preparation (technology, tool manufacturing), the production process and the company's management.
[Figure 1 spans this page. It maps each phase of the research and development process, from the invention loop through the golden loop of designing to market entry, onto the specific knowledge used in that phase (e.g., fluid dynamic calculations, structural dynamic analysis, noise reduction, electric design, assembly processes) and the corresponding working methods (e.g., QFD, benchmarking, morphological matrix, design of experiments, Design and Process FMEA, APQP, and workshops with customers, strategic suppliers and external research teams).]

Figure 1. Product development process and working methods in each phase [3].
1. Generalized model of the virtual team workshop

Different kinds of workshops are the meeting points of the interdisciplinary product development team and its sub-teams. Creative dialog happens during workshops; how the workshops are conducted is therefore very important, as it influences the creativity and productivity of the workshop. Preparation activities need to be completed before the beginning of a project: the goals should be set clearly, and adequately trained individuals should be selected with care. Each team member must be independent and must show initiative. Infrastructure for seamless communication has to be set up [9].
Figure 2. Generalized model of the virtual team workshop
The virtual team workshops have several common characteristics. The workshop structure was generalised into the workshop model presented in Figure 2. The team members are not limited to one location; they can join the workshop via video conference. Besides specific expertise, the team members need skills for teamwork and communication in a virtual environment. Virtual teams are formed to carry out a specific workshop. Moderation is particularly important for non-permanent and virtual teams. A skilled moderator has to lead the team to the predefined workshop goals and has to establish a trustful and creative atmosphere. It is important to follow the planned schedule and keep the focus. Moderation includes time planning, checking that all the needed data is ready before the workshop starts, and a focused introduction to the problem. Good organisation and a creative atmosphere stimulate the participants. Besides well-conducted moderation, it is important to have a predefined workshop framework. It helps the team to work in a systematic way and improves the workshop outputs. It is important to split the workshop into several sections, such as introduction, generation of ideas, synthesis of ideas, assessment and further planning. In the creative phase all "crazy ideas" are allowed without criticism. A clear decision-making procedure that includes all virtual team members is an important element of efficient teamwork. The predefined framework is helpful especially for more complex tasks and in heterogeneous teams. The infrastructure for communication includes a videoconferencing system and other means of communication such as e-mail, telephone and a common server. Creative dialog benefits from as many communication channels as possible, which is why face-to-face meetings have advantages. With proper technical equipment, skilled users can also work efficiently in a distributed environment.
The structure of the workshop record has to be clearly defined (for example, the FMEA form). Team members have access rights to update the records, or at least to add comments. If the results are integrated with other documents, later updates become more transparent and the tracking of output activities is easier. Information support plays an important role: all related documents and information needed in the workshop have to be accessible in a transparent and user-friendly way. The PLM database is a meeting point where team members can retrieve and upload product data and thereby work more efficiently. It is an advantage if the PLM database provides advanced search tools across documents and information in real time during the workshop. The level of information system support is also defined by the way the workshop outputs are integrated into other documents and databases.
2. Workshop concurrent engineering (CE) assessment criteria

Seven key criteria that define the level of CE in the product development process were identified [3]. The same criteria can be used for the assessment of virtual team workshops. CE models from the literature [5] were compared with the specific requirements of virtual teams and with known CE assessment models [6, 7]. The authors have tested and supplemented the CE criteria during several PLM application projects, process analyses and virtual workshop practice. The assessment criteria are first presented below in general form and later applied to workshop case studies. The recognised key criteria for CE [5] are:
1. Interaction with customers (sales, distribution)
2. Involvement of suppliers (supply chain)
3. Communication (human interaction)
4. Team formation (different skills, all skills involved)
5. Process definition (workflow)
6. Organisation (soft organisation)
7. Information system (interoperability, dynamic structures)
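The seven criteria lend themselves to a simple numeric self-assessment. The sketch below is only an illustration, not part of the authors' method: the criterion names come from the list above, while the 0-3 scoring scale and the averaging rule are hypothetical choices.

```python
# Illustrative sketch: score each of the seven CE criteria on a hypothetical
# 0-3 scale (0 = absent, 3 = fully practiced) and average into one maturity level.

CRITERIA = [
    "Interaction with customers",
    "Involvement of suppliers",
    "Communication",
    "Team formation",
    "Process definition",
    "Organisation",
    "Information system",
]

def maturity_level(scores: dict) -> float:
    """Average score across the seven CE criteria; fails if one is unscored."""
    missing = [c for c in CRITERIA if c not in scores]
    if missing:
        raise ValueError(f"Unscored criteria: {missing}")
    return sum(scores[c] for c in CRITERIA) / len(CRITERIA)

# Hypothetical self-assessment of one workshop.
example = {c: 2 for c in CRITERIA}
example["Information system"] = 1  # e.g. no real-time PLM search available yet
print(round(maturity_level(example), 2))  # → 1.86
```

Any weighting or scale could be substituted; the point is that the assessment becomes repeatable and comparable across workshops.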
3. FMEA workshop in a virtual team

Failure Mode and Effect Analysis (FMEA) is a methodology that helps identify the activities that pose potential risks in the introduction of a new product, process or service. The FMEA is one of the most basic requirements of QS-9000 [8]. FMEA is a key document that forces the development team to analyse the new product design or process in a structured way. In a distributed environment, FMEA has to be conducted with additional care. We believe that an FMEA workshop can stimulate an interdisciplinary team and guide the teamwork in the product improvement process. The FMEA form guides the team at the micro level [8]: for each component or operation, the failure modes, effects of failure, cause mechanisms and controls need to be defined. The assessment of fault severity, probability of occurrence and detectability is done in the next step. In more complex cases it is better to move the assessment of failure modes into an additional meeting. We have analysed the execution of the FMEA workshop against the seven key criteria that define the level of CE. The FMEA workshop guidelines have been tested at an automotive system supplier.
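In classic FMEA practice the three assessments are combined into a risk priority number (RPN = severity x occurrence x detection) used to rank failure modes. The sketch below is a minimal illustration of that standard calculation; the record fields and the example entries are made up for illustration.

```python
# Minimal FMEA record sketch: rank failure modes by risk priority number
# (RPN = severity x occurrence x detection, each classically rated 1-10).
from dataclasses import dataclass

@dataclass
class FailureMode:
    component: str
    mode: str
    severity: int    # 1 (minor effect) .. 10 (hazardous)
    occurrence: int  # 1 (remote)       .. 10 (very frequent)
    detection: int   # 1 (almost certain detection) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

# Hypothetical entries, not from the paper's case study.
modes = [
    FailureMode("motor", "winding short", severity=8, occurrence=3, detection=4),
    FailureMode("housing", "crack at rib", severity=5, occurrence=2, detection=6),
]
for fm in sorted(modes, key=lambda f: f.rpn, reverse=True):
    print(fm.component, fm.mode, fm.rpn)  # highest-risk items first
```

Sorting by RPN is what lets the workshop prioritise corrective actions on the highest-risk items.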
3.1. Interaction with customers

It is an advantage if a customer can be involved in the FMEA team, especially at the introduction meeting or in the so-called system FMEA. It is a must that the customers' requests are well defined and formally written down, and they have to be well understood by all FMEA team members. It is recommended that the team members participate actively in the collection of the customers' requests. The customers' requests have to be presented and discussed in the FMEA team at the introduction meeting.

1 Interaction with customers
1.1 Written specification of customers' requests (input for the workshop)
1.2 Workshop team members are in direct contact with customers
1.3 Customer is directly involved in the workshop
2 Involvement of suppliers
2.1 Suppliers are selected as long-term strategic partners
2.2 Established information connection for document exchange with suppliers
2.3 Active participation of suppliers in the virtual workshop
3 Communication
3.1 Skilled moderator is available
3.2 There are established communication rules and timetables
3.3 Infrastructure for communication with external team members is ready
4 Team formation
4.1 Multidisciplinary core team with specific product knowledge is available
4.2 Team members have workshop-specific skills and knowledge of Q-methods
4.3 External team members are well integrated and character compatibility is checked
5 Process definition
5.1 Workshop phases with inputs and outputs are defined
5.2 Workshop is well understood and practiced by the team members
5.3 Workshop is integrated into related processes (process development, Q-document management)
6 Organisation
6.1 Company organisation supports interdisciplinary team formation
6.2 Team has good conditions for workshop execution (no disturbance, technical support, enough time)
6.3 Organisation supports integration with external teams (formal agreements, technical support)
7 Information system
7.1 Definition of a formal workshop document that is transparent and easily accessible to team members
7.2 There exists a database (PLM) that enables searching for specific workshop input data in real time
7.3 Integration of workshop records with other documents (control plan, list of activities, customers' complaints)

Figure 3. Virtual team workshop maturity assessment criteria
3.2. Involvement of suppliers

Early involvement of suppliers is a key requirement of CE. An open question is whether the supplier should participate in the whole FMEA, i.e. whether all specific knowledge should be shared. The FMEA workshop can be split into several sections, and some of the FMEA workshops with the suppliers' participation can be focused on the supplied components and their integration into the whole product. Small improvements can have a significant influence on product or process robustness. The knowledge of suppliers has to be brought into the FMEA core team by selected engineers who work closely with the suppliers.

3.3. Communications

Open communication defines the creativity level in a team. The role of the FMEA workshop moderator is very important: to guide the workshop through the planned phases, to establish a creative climate and to enable each team member to express his ideas. The communication infrastructure has to enable smooth communication through all channels: high-resolution graphics, audio and videoconference at all team locations. It is recommended to separate the presentation (computer screen) from the video. This means the computer enables the presentation of 3D models in full resolution, while the video system has to enable a detailed view of the discussed objects. It is expected that the team members have the skills for using the communication tools and a common technical language [9].
3.4. Team formation

The FMEA team needs to have interdisciplinary knowledge of the end-user requests, design, manufacturing and assembly processes, tooling, service and disposal after use. Team members need complementary specific knowledge and also some general knowledge that enables co-operation [4], [14]. The moderator must ensure clarity of the product requests and the building of trust in the initial phases of the FMEA workshop, as well as encourage communication. Besides specific product knowledge, the team members need skills in the FMEA method and other quality tools, and an awareness of how important FMEA is [9]. Compatible characters of the team members are an advantage. It is recommended that the product development project manager is responsible for the FMEA. He can authorise a specialist to execute some activities (such as moderation or recording), but the responsibility has to stay with the project manager.

3.5. Process definition (workflow)

The FMEA workshop has to have a clear structure that is obvious to all team members. A product/process analysis consists of several workshops that take from 3 to 4 hours each (Figure 4A). The time schedule of the workshops needs to be consistent with the product development process [12], [13]. For more complex products or processes, the FMEA team can be split into several sub-teams; there has to be good communication across the sub-teams. The structure of a single FMEA workshop is presented in Figure 4B. The product/process requests have to be presented at the beginning. It is important to split the workshop into phases: searching for fault modes, fault mechanisms, solutions, and the assessment of solutions. Additional methods such as 5xWHY or Ishikawa diagrams can be helpful. There is a clear procedure on how to take decisions if there is disagreement inside the team, and it is clear where the product/process data is accessible and how to make records. The team's time has to be dedicated to creative dialog, not to watching one of the members making records.
It is good practice to make basic records in real time. Detailed records are made immediately after the meeting by the moderator or a selected person. All team members have to be asked to approve or supplement the FMEA records.

3.6. Organisation

The organisation has to support a consistent execution of the FMEA workshops. There is a clear procedure on how to convene the workshop and inform the team members. The project leader has to be able to assure the attendance of the needed external experts. One option is to designate days in the organisation that are intended for FMEA workshops. The organisation has to guarantee the execution of the corrective actions that were determined in the FMEA. The workshop has to be executed with the attendance of all team members in a concentrated way, which means it should not be disturbed by urgent mobile calls or e-mails. It is an advantage if the FMEA team can be isolated from other everyday activities.

3.7. Information system

FMEA is a structured record of product/process knowledge. On the one hand it has to be kept safe because of the importance of the specific knowledge; on the other hand it has to be easily accessible for re-use. It is an advantage if the FMEA is kept in a database that enables advanced search tools, for example an online search for a specific failure mode through all FMEA forms. Related documents such as the product 3D model, process layout and failure modes have to be accessible. The invitation to an FMEA workshop has to include links to all needed documents. The main output of an FMEA workshop is the set of corrective activities that have to be implemented. It is an advantage if the supporting software enables tracking of the activities of each team member.
[Figure 4 content: Panel A shows the overall FMEA process: FMEA planning (moderator); 1. FMEA workshop at the conceptual level (extended team); parallel workshops 2.1 and 2.2 on body design (teams A and B); 3.1/3.2 and 4.1/4.2 detailed design workshops I and II (teams A and B); and 5. FMEA workshop at the overview level (extended team). Panel B shows the structure of a single workshop: workshop planning and collecting of reference documents; introduction; brainstorming and creative dialog with real-time searching; solution forming (5xWHY, Ishikawa); solution assessment; real-time recording by the team at the workshop; FMEA recording by the moderator; and commenting on the FMEA records by team members individually.]
Figure 4. A – Overall FMEA process definition; B – Single workshop structure.
The maturity level of the FMEA workshop can be assessed by the criteria for virtual workshops presented in Figure 3. The FMEA-specific requirements presented in Sections 3.1 to 3.7 have to be considered. The maturity level criteria in Figure 3 are at the same time the workshop reference model; the target is to fulfil all the criteria.
4. Customer's complaint workshop

The customers' complaint workshop is not a typical workshop in the new product development process, but it is an indispensable part of the product life cycle and therefore has a special importance. The key requirements are a fast response, finding the root causes and avoiding the repetition of the failure mode. In the automotive industry the customers specify all the details about the response deadlines and the contents of the 8D reports. In this paper the focus is on the execution of a customer's complaint workshop. Customers' complaints arrive unplanned. The interdisciplinary team has to be set up in a short time, typically within 12 to 48 hours. It is an important advantage if the team members are familiar with the products; the best option is if they participated in the product development. It should be defined who receives the complaints and who defines the 8D team; a recommended practice is that this is the responsibility of the quality manager (Figure 5). The customer's complaint workshop is analysed with the seven key criteria that define the level of CE. The specific requirements have to be used together with the general maturity assessment for the virtual workshop (Figure 3).

4.1. Interaction with customers

It is important to have an open and trustful relation with the customer. The 8D team has to get all relevant information from the customer, and the expected deadlines and prompt feedback to the customer have to be assured. The first response, on short-term corrective actions, typically has to be defined within 24 hours. For each complaint the contact persons on both sides have to be defined. The root causes and corrective actions have to be reported to the customer, typically within two weeks. The corrective actions have to be convincing and implemented in time.
4.2. Involvement of suppliers

The sub-suppliers are often the root cause of a non-conformity. It is important to have a long-term relationship and an immediate response to a request for participation in the 8D team. The prerequisites are contact persons on both sides and an established way of reliable and fast document and data exchange.

4.3. Communications

Communication inside the team and the organisation has to be open and constructive. Discussions have to be focused on searching for the root cause mechanisms and a long-term solution, not on searching for the guilty person. A proposal for guided communication during the second workshop:
1 – Problem presentation (5 minutes)
2 – Presentation of the analyses already done (25 minutes)
3 – Brainstorming on root causes (30 minutes)
4 – Recording of fault causes (5 minutes)
5 – Decision making on the primary root cause (10 minutes)
Decision making can be critical: a dominant person can push forward his own root cause, which can generate personal conflicts. The solution can be a voting system in which each team member participates equally. The final decision is impersonal and therefore more acceptable for everyone. The second phase of the workshop is the search for solutions on the basis of the recognised root causes; the sequence of activities can be similar to the first phase. A face-to-face meeting has its advantages, but in temporary 8D teams, specialists who are already involved in new projects need to come together, so virtual teams are often the only option. Skilled team members and a videoconferencing infrastructure are prerequisites for efficient work.
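The impersonal voting step described above can be as simple as a tally in which every member's vote counts equally. A minimal sketch, assuming one vote per member; the member roles and candidate root causes are hypothetical:

```python
# Sketch of the equal-vote decision step: each team member names one candidate
# root cause; the cause with the most votes becomes the primary root cause.
from collections import Counter

votes = {  # hypothetical: team member -> chosen root cause
    "design": "bearing tolerance",
    "process": "press-fit force drift",
    "quality": "bearing tolerance",
    "supplier": "bearing tolerance",
    "assembly": "press-fit force drift",
}
tally = Counter(votes.values())
primary, count = tally.most_common(1)[0]
print(primary, count)  # → bearing tolerance 3
```

Because the result depends only on the tally, no single dominant voice can decide the outcome, which matches the intent of the text.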
It is an advantage if they had been involved in the product development process because it enables smooth transfer of knowledge through product life cycle. At smaller enterprises it is not possible to have permanent teams to work on customers’ complaints. The 8D team is setup temporally. Q-planners with responsibility for team moderation and overall customer complaint coordination are a good practice. 4.5. Process definition (workflow) The process is in general defined with 8D report steps. The first workshop is coming after team formation (Figure 5). Important is prompt response and definition of short term corrective actions. The team has to come together as soon as possible and check what is happening in the production, is there a need for checking parts in the storehouse or at the customer. The second workshop has to be planned and organised in a systematic way; typically in one week time. The goal is to find root cause mechanism in the first phase and corrective actions in the second. The structure of the second workshop can be similar to FMEA workshop presented in Figure 4B. Additional methods like 5xWHY or Ishikawa diagram can stimulate systematic searching. After testing of corrective actions the team meets third time. Decision on implementation in serial production has to be taken. In the next phase new knowledge is transferred as preventive actions to similar products or processes.
[Figure 5 content: 8D report procedure, coordinated by the quality manager: 1 – Non-conformity description; 2 – Team formation; first workshop: 3 – Short-term corrective actions; second workshop: 4 – Root cause mechanism, 5 – Corrective actions; third workshop: 6 – Efficiency of corrective actions, 7 – Preventive corrective actions, 8 – Dissemination and award to the team.]
4.6. Organisation

The enterprise has to keep a balance between the new product development projects and the support of the existing manufacturing processes. The organisation has to give priority to the customers' complaints. The product/process specialists and external team members have to be available to the temporary teams on request. If the experts are already involved in the new product development projects, there can be a risk to achieving the new projects' milestones. Bigger enterprises can split the staff into a group for new projects and a group for the support of existing manufacturing processes. In the latter case, the transfer of the new product into serial production has to be executed with additional care so that specific knowledge from the R&D process is not lost to manufacturing. The 8D team needs support in the implementation phase of the corrective actions: prototyping, testing and tooling. This support increases the team's efficiency.
Figure 5. 8D report procedure.
4.7. Information system

The customers' complaints procedure is a typical process that can be well supported by a workflow, and an appropriate software solution can accelerate the work. All team members need the related information in each phase of the workshop; they are invited with an e-mail containing links to the key documents. Integration of the customers' complaint process with other processes can significantly improve productivity and process robustness. A few examples: a new fault mode is of key importance for the product/process developers, so the FMEA document has to be updated with the new findings; the updated FMEA is then the source of knowledge for the next generation of the product. It is an advantage if the search tools enable context-specific searching through all FMEA documents and databases. An integrated database for activities, costs and material handling is an additional tool for workshop data tracking. A link to the updated quality plan is also important.

An example of a customers' complaint IT solution from Iskra Mehanizmi, a system supplier in the automotive industry: the 8D form is implemented in the document system Lotus Notes. Each file is stored only once on the server. It is possible to interconnect different related documents; from the 8D report, documents are accessed with a mouse click: the received customer's complaint, a product drawing, the 3D model, additional test reports and the material master data. The activities from all 8D reports can be summarised for each person in a special view. The costs related to the complaint are reported automatically through a connection to the ERP system. The PPM (parts per million) report is updated automatically by using the information on the number of non-conforming products. The presented application for customers' complaints also includes a workflow: the designated person receives an e-mail with links according to the activity status and the 8D report phase. Such an application is a must for efficient work in virtual teams. All team members are well informed even if they miss one of the meetings. There are several commercial software solutions with the described functionality; the added value of the application is its seamless integration with the related processes.
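The automatically updated PPM figure mentioned above is a simple ratio of non-conforming parts to delivered parts, scaled to a million. A minimal sketch; the example counts are made up:

```python
# Sketch of the PPM (parts per million) quality metric: non-conforming parts
# per million delivered parts, as derived from complaint and delivery records.
def ppm(non_conforming: int, delivered: int) -> float:
    if delivered <= 0:
        raise ValueError("delivered must be positive")
    return non_conforming * 1_000_000 / delivered

print(ppm(7, 250_000))  # → 28.0
```

Feeding this from the complaint database is what keeps the report current without manual effort.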
5. Conclusions

The virtual workshops have been recognised as the key meeting point of the interdisciplinary team and as a source of creativity during the product development process, and they have to be conducted in a proper way. A generalized model of the virtual team workshop has been set up, and a model with the seven CE criteria for the assessment of the virtual workshop maturity level was created. The assessment model was applied to the FMEA and to the customer's complaint workshop. The case studies help to recognise the key criteria in different kinds of workshops. The authors believe that the presented generalised model for the virtual team workshop can be applied to other kinds of workshops as well.
References
[1] A.H.B. Duffy (1998), The Design Productivity Debate, Springer-Verlag, London.
[2] Ž. Zadnik, M. Karakašić, M. Kljajin, J. Duhovnik (2009) Function and functionality in the conceptual design process. Strojniški vestnik, Vol. 55, No. 7/8, pp. 455-471.
[3] J. Duhovnik, J. Tavčar, Concurrent Engineering in Machinery, in: Concurrent Engineering Handbook, Ed. J. Stjepandić, Springer, 2014.
[4] L. Rihar, J. Kušar, J. Duhovnik, M. Starbek, Teamwork as a precondition for simultaneous product realization. Concurrent Engineering: Research and Applications, Dec. 2010, Vol. 18, No. 4, pp. 261-273, doi: 10.1177/1063293X10389789.
[5] B. Prasad (1996) Concurrent Engineering Fundamentals, Vol. I: Integrated Product and Process Organization, Technomic, Lancaster.
[6] M. Lawson, H.M. Karandikar (1994) A survey of concurrent engineering, Concurrent Engineering: Research and Applications, pp. 1-6.
[7] M. Ainscough, K. Neailey, C. Tennant (2003) A self-assessment tool for implementing concurrent engineering through change management, International Journal of Project Management, Vol. 21, No. 6, August 2003, pp. 425-431.
[8] QS 9000, Potential Failure Mode and Effects Analysis – FMEA, Guidebook, fourth edition, DaimlerChrysler Corporation, Ford Motor Company, General Motors Corporation.
[9] J. Tavčar, R. Žavbi, J. Verlinden, J. Duhovnik (2005) Skills for effective communication and work in global product development teams. Journal of Engineering Design, Vol. 16, No. 6, pp. 557-576.
[10] Ž. Zadnik, M. Karakašić, M. Kljajin, J. Duhovnik (2009) Function and functionality in the conceptual design process. Strojniški vestnik, Vol. 55, No. 7/8, pp. 455-471.
[11] J. Duhovnik, U. Žargi, J. Kušar, M. Starbek (2009) Project-driven concurrent product development. Concurrent Engineering: Research and Applications, Sep. 2009, Vol. 17, No. 3, pp. 225-236.
[12] J. Duhovnik, U. Žargi, J. Kušar, M. Starbek (2009) Project-driven concurrent product development. Concurrent Engineering: Research and Applications, Sep. 2009, Vol. 17, No. 3, pp. 225-236, doi: 10.1177/1063293X09343823.
[13] J. Tavčar, J. Duhovnik (2005) Engineering change management in individual and mass production. Robotics and Computer-Integrated Manufacturing, Vol. 21, No. 3, pp. 205-215.
[14] J. Kušar, J. Duhovnik, J. Grum, M. Starbek (2004) How to reduce new product development time. Robotics and Computer-Integrated Manufacturing, Vol. 20, No. 1, pp. 1-15.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-467
467
Educating for Transcultural Design

Derrick TATE
Xi'an Jiaotong-Liverpool University, Suzhou, China
Abstract. Design broadly defined deals with mapping from what individuals in society want or need to means for satisfying these needs. Thus, an appropriate basis for product design deals with user-centered product and systems design for society in response to social, cultural, and technical context. A new generation of industrial and product designers needs to be educated who are able to understand and participate in the world's ongoing social, cultural, and economic transformation through achieving a balance of creative and technical knowledge and competencies. Many different disciplines have a role to play in developing products, processes, systems, and technologies, and industrial designers should be product integrators and bridge builders between disciplines and cultures. Throughout the design process, cultural knowledge has a large role to play in the ultimate success of products and systems across many dimensions. This paper presents educational strategies for cultural awareness, understanding, and adaptation as applied to the creation, innovation, and development of products and technologies. This paper describes the context of recent approaches to integrating design within universities and focuses on the development of the new Bachelor of Engineering program in the Department of Industrial Design at Xi'an Jiaotong-Liverpool University.

Keywords. Transcultural Design, Design Thinking, Design-in-China, Liberal Arts
Introduction

The Department of Industrial Design at Xi'an Jiaotong-Liverpool University (XJTLU) focuses on user-centered product and systems design for society in response to social, cultural, and technical context. Growth in China will redefine, for future generations, patterns of consumption, production, and cultural appropriation domestically and globally. XJTLU's Department of Industrial Design aims to educate a new generation of industrial and product designers who are able to understand and participate in China's ongoing social, cultural, and economic transformation through achieving a balance of creative and technical knowledge and competencies.

This paper is structured as follows. Section 1 presents background material on the importance of design and innovation, the vision and creation of Xi'an Jiaotong-Liverpool University, recent efforts to expand the role of design in universities, and the place of design at XJTLU. Section 2 presents the Department of Industrial Design at XJTLU, its vision and mission, its distinctives, the Bachelor of Engineering (BEng) program, and educational strategies for cultural awareness, understanding, and adaptation within the BEng curriculum. Section 3 presents some preliminary observations and suggestions for future work.

1. Background

1.1. Importance of Design and Innovation

Technology development and the career paths of engineering and design graduates
D. Tate / Educating for Transcultural Design
have been changing due to globalization, yet engineering education has remained substantially unchanged since the 1950s. For example, the current structure of undergraduate engineering education in the US was codified in the Grinter report of 1955 in response to cold-war concerns about science [13; 19; 21], and the role of design within this curriculum structure has been minimal. In particular, within academia there was a shift towards "engineering science" subjects that "downgraded technology innovation, design, manufacturing, and other related fields" [33], even to the point that "the education system has treated engineering as synonymous with engineering science" [32]. The future of engineering education and the careers of engineering graduates need to move away from engineering science, the "left-brain, digitized analytical work associated with knowledge" that is being commoditized, and towards "creativity, imagination, and, above all, innovation" [29]. Innovation is the broad activity that refers to the entire process by which technological change is deployed in commercial products, and it includes not only the physical realization of a novel idea, that is, invention, but also its acceptance and its application in practice [6; 17]. To be a successful designer, a student must learn to draw upon ideas from many disciplines and cultures, combine them in new ways, and build new bodies of knowledge [36], analogously to the way that technology develops through combining existing technologies and the "harnessing of natural phenomena" that are "captured and put to use" [1].

1.2. Design and its Role in the University

Design as a fundamental human activity

Design is a fundamental human activity, cf. [4; 5; 49]. As Zeng states, "Intuitively, design is an activity that aims to change an existing environment to a desired one by creating a new artifact into the existing environment.
The artifact must adapt to the goals and requirements of humans while obeying laws and rules existing in the environment from which it can never be separated” [50]. Formally, design is the process of developing or selecting means to satisfy objectives, subject to constraints [35], or simply mapping from what to how [32]. Considering the way that design “contributes to society by satisfying its needs and aspirations” [32], the importance of the liberal arts to design has been articulated through statements such as that of Jobs: “It’s in Apple’s DNA that technology alone is not enough. That it’s technology married with liberal arts, married with the humanities that yields us the result that makes our hearts sing.” [39] Some researchers argue that design itself should be recognized as a liberal art that contributes to modern culture [27]. Buchanan notes the prevalence of design in modern society: “There is no area of contemporary life where design—the plan, the project, the working hypothesis which constitutes the ‘intention’ in intentional operations—is not a significant factor in shaping human experience” [5]. It is the humanities and cultural capital of a society that provide the tools for students to search for answers to the question of life’s purpose and value [23]. The humanities should “provid[e] instruction in the ends of human life” [2] and help students to find and articulate a “philosophy of life”, a “consistent set of ideas about what to value and strive for in life” [8]. As Bowles observed, “Historically, when there is a concern that workers are becoming overspecialized in their tasks…the liberal arts reemerge as the one best way to intellectually broaden and culturally enrich the citizenry” [3]. Design is a natural counterpart to this search because it provides a framework and models for articulating goals and a process for creatively developing means for satisfying them. To become leaders, students must be able to think for themselves, act on
their convictions, discover what they believe in the course of articulating it, and “confront [questions] directly, honestly, courageously” [11].

Design as an academic discipline

Design as an academic field has grown and expanded over the past thirty years [20]. Communities have formed within professional societies, and series of conferences have been organized and held on design topics [24]. Taking one example, the Society for Design and Process Science was founded in 1995 “to foster, to identify and to extend a core of science that deals with design and processes across a broad spectrum of human, technological, and economic endeavors” [22]. The Academy of Transdisciplinary Learning and Advanced Studies (TheATLAS) carries this further by hosting bi-annual meetings on transdisciplinary, transnational, and transcultural global problems. “The transcultural [notion] designates the opening of all cultures to that which cuts across them and transcends them” [28]. Ertas stated, “Design and process are central to the concept of transdisciplinary education. Social, political and cultural aspects of problems and issues must be recognized if workable and economically feasible solutions are developed” [15].

New divisions, schools, and universities

Recent efforts by leading universities have created new divisions, schools, and universities focused on design—and have even sought to teach design as a core subject for all undergraduate students. For example, Singapore University of Technology and Design (SUTD) presents a vision of “Big D” design that deals with “all technically grounded design” (including architectural design, product design, software design, and systems design) over “the full value chain” and includes an understanding of the liberal arts, humanities, and social sciences [26].
Although one goal for Big D design is to seek “deeper insights, improved generalizability and improved capacity for differentiating fundamental from contingent aspects of design” through considering design broadly, unfortunately in the curricula presented for SUTD, design is integrated into traditional engineering science courses rather than clearly taught as a discipline with its own body of knowledge; for example, the EPD Curriculum–Mechanical Devices is quite similar to a traditional mechanical engineering program [47]. Typical of this approach of inserting pieces of design into engineering science courses are “designiettes” (short for design vignettes or design charrettes) [26], which—along with the non-design activities of desktop experiments, hands-on demos, concept quizzes, and collaborative learning activities—appear as discrete activities at specific times in one or more courses [26]. The designiette approach does not have a significant cultural component, and it treats design in the old-fashioned way as an art that cannot be taught, for example, equating “art” with “practice”, i.e., designers are born, not made, cf. [43]. As recognized by its proponents, “Design as used in the Big-D context is broad but does not include all activities that are legitimately called design” [26]. Other recent efforts to broaden the teaching of design include the Engineering Systems Division at MIT, the Hasso Plattner Institute of Design (the “d.school”) at Stanford, and teaching approaches at Olin College of Engineering.

Design for the broad university community

The Korea Advanced Institute for Science and Technology (KAIST) has conducted a bold initiative for all freshman students to study design [40; 41; 43] as part of efforts towards achieving the university’s goals and creating a campus-wide culture of design
thinking; see [34]. The implementation of the freshman design initiative at KAIST is similar to freshman cornerstone courses taught in many engineering departments in terms of its timing in the curriculum and its use of project-based learning (cf. [13; 14]), but differs in scope and learning objectives. In particular, typical cornerstone design courses include many non-design-related soft skills and generic introductions to engineering. In contrast, at KAIST the goals are to effect a deep change in the students’ thinking, their view of their role in the world, and their mode of working [42]: the students are required to approach design from “a creative, but conscious, rational, and systematic perspective” in which “trial-and-error and intuitive design” are penalized [43].
1.3. Xi’an Jiaotong-Liverpool University (XJTLU)

Xi’an Jiaotong-Liverpool University is an independent institution created through the collaboration of two well-known international research universities. It is the first, and currently the only, university in China to offer dual UK (University of Liverpool) and Chinese Ministry of Education (XJTLU) accredited undergraduate degrees, and all programs are delivered exclusively in English in Years 2 to 4. The first cohort of students started in 2006, and the university has grown to a current size of 7,000 students on campus across 34 undergraduate programs, 13 Master’s programs, and 10 Ph.D. programs. XJTLU seeks to educate students who will become “global citizens.” The vision of Xi’an Jiaotong-Liverpool University is to create a unique international university that blends Western Best Practice and Eastern Best Practice, and XJTLU aims to “become a research-led international university in China and a Chinese university recognized internationally, with its unique features in teaching & learning, research, social service, education management” [48]. In this vision, culture and design have significant roles to play. Specific cultural initiatives at XJTLU include the creation of the XJTLU China Institute and the creation of a Contemporary China Studies department and program.
1.4. The Role of Design at XJTLU

Within XJTLU, design is seen as a strategic priority of the university. The Department of Industrial Design opened in the fall of 2013/14 and is currently home to a BEng program, described below, and a Ph.D. program. Another design-based initiative within XJTLU is the creation of the Design Research Institute (DRI). The vision of DRI is to “promote cross-disciplinary design research as a speculative and rigorous project-based form of enquiry that offers the sciences, the arts, the humanities, engineering and society at large valuable insights into processes that lead to desirable futures.” Within DRI, Design and Culture is identified as one of six primary fields of interest. In addition to the DRI, the Department of Industrial Design also works closely with other departments and institutes within XJTLU to further the design, innovation, and entrepreneurial aims of the university, e.g. the Innovation Hub with the International Business School Suzhou (IBSS) and the Research Institute for Urbanization (RIU).

2. Department of Industrial Design

2.1. Vision and Mission

The vision of the Department of Industrial Design is to build XJTLU’s capability to shape the future with creativity, technical competence, and a passion for design and innovation. The department’s educational emphases rest on the principles of new
product development, including aesthetic sensitivity, human-oriented responsibility, technological and engineering competence, and user research, as well as on social, cultural, and economic awareness, with a view to enabling people and communities to live desirable futures. As China-based companies shift to a designed-in-China strategy and international companies seek to design for the Chinese market, graduates of the Industrial Design program should have excellent career opportunities and be well positioned for leadership roles within Chinese and multinational enterprises. The department will help expand XJTLU’s leadership role of cultural influence in Suzhou Industrial Park (SIP) and Jiangsu, across China, and around the world.

2.2. Goals

The goals for the Department of Industrial Design at Xi’an Jiaotong-Liverpool University are as follows:

Internationalization: Create a unique international department that blends Western best practices and Eastern best practices.

Learning and teaching: Prepare students for a role as product integrators and bridge builders in China-based companies and international companies; mentor students to be the entrepreneurial and educational leaders of tomorrow.

Research and knowledge exchange: Create a science base and tools for design innovation for the next 20 years; work with enterprises and entrepreneurs to assess novel design concepts and to partner with them in research, design, development, and commercialization.

Service to society: Broaden participation in innovation activities by providing members of society with the design tools and services needed to assess and realize novel design concepts.

Building capacity: Build a world-class department.

2.3. Distinctives

The three distinctives of the BEng Industrial Design program at XJTLU are developing products to meet user and social needs through:

1. A transdisciplinary approach that prepares students for a role as product integrators by balancing, within each semester, technical (engineering) courses with arts, creativity, humanities, and social science courses;
2. A cross-cultural perspective in which students are educated to bridge cultural differences between China and the rest of the world;
3. A treatment of design as a coherent academic discipline that provides a foundation for students to conceptualize design thinking as a multi-layered mapping from clearly articulated user and social needs to well-justified solutions.

Culture is integral to these distinctives in the integration of humanities and social science (concepts, theories, and methods) with design and engineering and in the roles that the students will take up in their future careers.

2.4. Scope and Design Process Model

The scope of design covered by the BEng program is the product development process, starting with project planning, technology strategy, and customer/user research and extending directly and indirectly into testing, refinement, production ramp-up, and end of life [7; 45]. This scope is similar to that of the Conceive-Design-Implement-Operate (CDIO) Initiative [9].
The program draws upon both phase-based and activity-based models of the design process [16; 37; 38]. Both types of models are useful for contextualizing the students’ design activities within product development processes in industry, in particular design decision making and the choice among design theories, models, and methods, cf. [31].

2.5. Design Thinking

A specific set of skills that the BEng program is designed to instill in its graduates may be termed design thinking. There is a rich body of literature that attempts to lay out the desired skills of “design thinking”. This set of skills includes tolerating ambiguity, viewing from a systems perspective, dealing with uncertainty, using estimates, simulations, and experiments to make effective decisions, and using specific languages of design for communication [12; 13]. This distinct “designerly” form of thinking and knowing is fundamentally different from the approaches used by experts in other fields [10]. In our case, the emphasis on design thinking means that students must learn to connect their decisions to user and social needs. This means that a series of “why” questions about the design artifact ultimately trace their source back to customer and social needs or context. It also means that design exploration has its starting point and fulfillment in meeting needs better than existing designs [25; 30], meeting previously unmet needs, or even anticipating unknown or latent [46] needs. Consider a situation in which graduates are hired by a company or organization and need to learn about an unfamiliar, unspecified culture or subculture. What tools and models can they use to understand the values, practices and interaction rituals, and assumptions of that culture? Thinking about culture must allow the students to deal with bigger-picture issues than is typical in engineering subjects; these issues should be important and interesting, but perhaps challenging for the students.
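The idea that every design decision should answer a chain of “why” questions tracing back to a user or social need can be sketched as a toy traceability structure. This is an illustration only, not a tool from the program; the class, function, and example statements below are all hypothetical.

```python
# Toy illustration (not from the paper): each design decision records the
# need or parent decision that motivates it, so a chain of "why" questions
# can be traced back to a user or social need. All names are hypothetical.

class Decision:
    def __init__(self, statement, motivated_by=None):
        self.statement = statement
        # motivated_by is either a root need (a string) or another Decision
        self.motivated_by = motivated_by

    def why_chain(self):
        """Follow 'why' questions from this decision back to the root need."""
        chain, node = [self.statement], self.motivated_by
        while isinstance(node, Decision):
            chain.append(node.statement)
            node = node.motivated_by
        if node is not None:
            chain.append(node)  # the root user/social need
        return chain

need = "Elderly users need one-handed operation"
layout = Decision("Place all controls on the front face", motivated_by=need)
button = Decision("Use a single large push button", motivated_by=layout)
print(" <- ".join(button.why_chain()))
```

A decision whose chain does not terminate in a user or social need is, in the sense described above, not yet well justified.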
Students must learn to wrestle with important questions and come away with tools they can use in the intercultural contexts and roles they end up in.

2.6. Pedagogy

Design thinking and methods can be applied to all types of design, and teaching methods for design include a wide variety of activities: lectures, case studies, in-class exercises and discussions, project work, and formative and summative assessment and feedback. The BEng program develops the students’ skills through coursework and exams, studio projects, collaborative projects with industry, and individual design research or development projects. The students will be able to aim for careers that are more artistically or more technically focused, depending on their interests, or pursue advanced degrees in either area. In the design literature, a distinction is made between projects that are oriented on “design” and those oriented on “problem”. In the educational literature, the two pedagogical approaches for open-ended project tasks are contrasted as problem-based learning and project-based learning. For many of the projects the students will encounter in their design studios and other courses, the problem-based paradigm applies [44]: “It is intended to provide authentic experiences that foster active learning, support knowledge construction, and naturally integrate school learning and real life. Students are provided a carefully selected scenario and are tasked with identifying the root problem and the conditions needed for a good solution while acting as self-directed learners working with teachers as problem-solving colleagues.” However, the project-based learning approach will also be relevant, particularly for the students’ final year projects
(FYP) [18]: “In project-based learning the students undertake projects that consist of an extended inquiry into various aspects of a real-world topic.”

2.7. BEng Program

Students who graduate from the program will be equipped with skills and experiences in user research, artistic expression, design visualization, creative design, social and cultural aspirations and identities, material culture, engineering analysis, industrial production, business strategy, interdisciplinary collaboration, and social and environmental responsibility.

2.7.1. Themes of Years and Semesters

Pedagogically, the program progresses from a focus on concrete design objects in Years 1 and 2, to more abstraction in Year 3, to bigger-picture contextual issues and interconnections in Year 4. Specifically, the themes of the program semesters are organized as follows: Year 1 Theme: Introduction and Design Process; Year 2/S1 Theme: Understanding Design Artifacts; Year 2/S2 Theme: Design Modeling and Prototyping; Year 3/S1 Theme: Creativity and Conceptualization; Year 3/S2 Theme: Evaluation; Year 4 Theme: Questioning Assumptions and Design Implications.

2.7.2. Program Structure

Figure 1 shows the current draft of the BEng Industrial Design program structure from Year 1 to Year 4. The vertical axis shows progression from Semester 1 (Fall) to Semester 2 (Spring) and from Year 1 to Year 4. The horizontal axis shows the relationship of different disciplines to the program. Specifically, the courses are grouped into five disciplines: Core Math and Science, Engineering, Humanities, Social Science, and Design. Each semester is intended to balance the technical, engineering courses with the arts, creativity, humanities, and social science courses; the industrial design courses within each discipline follow a logical progression, and the courses within a semester are designed to relate to each other.
The knowledge and skills the students are learning in their disciplinary courses are applied to open-ended design projects each semester in a series of design studios.

2.7.3. Progression of Design Studios

Table 1 shows the progression of design studios through the BEng Industrial Design program. The columns show the specific skill sets used (technical, humanistic, social, and design) as well as the studio skills, format, and assessment deliverables.

3. Results

Because the BEng program started in August 2013, only one semester of Year 2 has been completed, and there is very little concrete data upon which to evaluate the students’ learning outcomes and affective responses. Nevertheless, some preliminary observations can be made based on personal observations, informal discussions with students, and the results of the limited student evaluations. In general, some of the same challenges noted by Thompson in teaching design in Korea have been seen [41]: a desire for model assignments to follow, a focus on grades,
Figure 1. Draft BEng Industrial Design Program Structure (as of May 2014; some details of Year 1 omitted)

Table 1. Progression of Design Studios in BEng Program
perceptions of a heavy workload, questions about the appropriateness of teamwork, and uncertainty about the meaning of design (versus creativity). On the other hand, the students appreciate the chance to realize their “dreams of design” which, as described by different students, have included inventing new products, the work environment of designers, the lifestyle represented by designers, and meeting people’s needs. In the studios taught thus far, the students have expressed positive responses to the hands-on activity of making prototypes and the face-to-face discussions with their project tutor.

4. Conclusion

Design, broadly defined, deals with mapping from what individuals in society want or need to means for satisfying these needs. Thus, an appropriate basis for product design deals with user-centered product and systems design for society in response to social, cultural, and technical context. The paper has described the importance of design and innovation, the vision and creation of Xi’an Jiaotong-Liverpool University, recent efforts to expand the role of design in universities, and the place of design at XJTLU. The main focus of the paper has been the new Bachelor of Engineering program in the Department of Industrial Design at Xi’an Jiaotong-Liverpool University, and the paper has presented its vision and mission, its distinctives, the BEng program structure, and educational strategies for cultural awareness, understanding, and adaptation.

References

[1] W.B. Arthur, The Nature of Technology, Free Press, New York, NY, 2009.
[2] P. Berkowitz, What is a University For?, in: Policy Review, 2007.
[3] M.D. Bowles, The Organization Man Goes to College: AT&T’s Experiment in Humanistic Education, 1953-1960, The Historian 61 (2007), 15-32.
[4] C.A. Brown, Elements of Axiomatic Design: a simple and practical approach to engineering design, 2006.
[5] R. Buchanan, Wicked Problems in Design Thinking, Design Issues 8 (1992), 5-21.
[6] H. Chesbrough, W. Vanhaverbeke, and J. West, eds., Open Innovation: Researching a New Paradigm, Oxford University Press, Oxford, UK, 2006.
[7] D. Clausing, Total Quality Development, ASME Press, New York, 1994.
[8] J.M. Cooper, Pursuits of Wisdom: Six Ways of Life in Ancient Philosophy from Socrates to Plotinus, Princeton University Press, Princeton, NJ, 2012.
[9] E.F. Crawley, The CDIO Syllabus: A Statement of Goals for Undergraduate Engineering Education, Massachusetts Institute of Technology, Cambridge, MA, 2001.
[10] N. Cross, Expertise in design: an overview, Design Studies 25 (2004), 427-441.
[11] W. Deresiewicz, Solitude and Leadership, in: American Scholar, 2010.
[12] K. Dorst, Creating Design Expertise (Keynote talk), in: ConnectED 2007 International Conference On Design Education, University of New South Wales, Sydney, Australia, 2007.
[13] C.L. Dym, A.M. Agogino, O. Eris, D.D. Frey, and L.J. Leifer, Engineering Design Thinking, Teaching, and Learning, Journal of Engineering Education 94 (2005), 103-120.
[14] C.L. Dym, M.M. Gilkeson, and J.R. Phillips, Engineering Design at Harvey Mudd College: Innovation Institutionalized, Lessons Learned, Journal of Mechanical Design 134 (2012).
[15] A. Ertas, Foreword, in: The ATLAS Transdisciplinary-Transnational-Transcultural Bi-Annual Meeting, TheATLAS Publications, Georgetown, TX, 2010, p. iv.
[16] N.F.O. Evbuomwan, S. Sivaloganathan, and A. Jebb, A Survey of Design Philosophies, Models, Methods and Systems, Proceedings IMechE Part B: Journal of Engineering Manufacture 210 (1996), 301-320.
[17] J. Fagerberg, D.C. Mowery, and R.R. Nelson, eds., The Oxford Handbook of Innovation, Oxford University Press, Oxford, UK, 2005.
[18] M. Frank, I. Lavy, and D. Elata, Implementing the Project-Based Learning Approach in an Academic Engineering Course, International Journal of Technology and Design Education 13 (2003), 273-288.
[19] L.E. Grinter (Chairman), Summary of the Report on Evaluation of Engineering Education, Journal of Engineering Education (1955), 25-60.
[20] A.O. Ilhan, The Growth of the Design Disciplines in the United States 1984–2010, Ph.D., Washington State University, 2013.
[21] R.R. Kline, The Paradox of ‘Engineering Science’: A Cold War Debate about Education in the U.S., IEEE Technology and Society Magazine (2000), 19-25.
[22] G. Kozmetsky, Technology Keynote Address, in: First World Conference on Integrated Design and Process, SDPS, Austin, TX, 1995.
[23] A.T. Kronman, Education's End: Why Our Colleges and Universities Have Given Up on the Meaning of Life, Yale University Press, New Haven, CT, 2007.
[24] W.D. Lawson and D. Tate, Redesign of a Civil Engineering Capstone Design Course using Constrained, Adaptive Project Assignments, in: First International Workshop on Design in Civil and Environmental Engineering (DCEE), KAIST, Daejeon, Korea, 2011.
[25] J.H. Lienhard, How Invention Begins, Oxford University Press, New York, NY, 2006.
[26] C.L. Magee, K.L. Wood, D.D. Frey, and D. Moreno, Advancing Design Research: A “Big-D” Design Perspective, in: ICORD 13: International Conference on Research into Design, Indian Institute of Technology, Madras, Chennai, 2013.
[27] I. Mariş, On design as liberal art: The art of advancements, University of Amsterdam, 2014.
[28] B. Nicolescu, Methodology of Transdisciplinarity – Levels of Reality, Logic of the Included Middle and Complexity, in: The ATLAS Transdisciplinary-Transnational-Transcultural Bi-Annual Meeting, TheATLAS Publications, Georgetown, TX, 2010, pp. 1-14.
[29] B. Nussbaum, R. Berner, and D. Brady, Get Creative! How to build innovative companies, in: Business Week, 2005.
[30] H. Petroski, The Evolution Of Useful Things, Vintage, New York, NY, 1992.
[31] S.K. Sim and A.H.B. Duffy, Towards an Ontology of Generic Engineering Design Activities, Research in Engineering Design 14 (2003), 200-223.
[32] N.P. Suh, The Principles of Design, Oxford University Press, New York, 1990.
[33] N.P. Suh, Complexity: Theory and Applications, Oxford University Press, New York, 2005.
[34] N.P. Suh, Inaugural Speech, Daejeon, S. Korea, 2006.
[35] D. Tate, A Roadmap for Decomposition: Activities, Theories, and Tools for System Design, Ph.D. Thesis, MIT, 1999.
[36] D. Tate, Designing Transdisciplinary Discovery and Innovation: Models and Tools for Dynamic Knowledge Integration, Transdisciplinary Journal of Engineering and Science 1 (2010), 105-124.
[37] D. Tate, J. Chandler, A.D. Fontenot, and S. Talkmitt, Matching Pedagogical Intent with Engineering Design Process Models for Pre-College Education, Artificial Intelligence for Engineering Design, Analysis and Manufacturing 24 (2010).
[38] D. Tate and M. Nordlund, A Design Process Roadmap as a General Tool for Structuring and Supporting Design Activities, SDPS Journal of Integrated Design and Process Science 2 (1998), 11-19.
[39] B. Thompson, Whither Liberal Arts?, in: Stratechery, 2013.
[40] M.K. Thompson, Green Design in Cornerstone Courses at KAIST: Theory and Practice, International Journal of Engineering Education 26, 359-365.
[41] M.K. Thompson, ED100: Shifting Paradigms in Design Education and Student Thinking at KAIST (invited paper), in: 19th CIRP Design Conference – Competitive Design, Cranfield University, 2009, p. 568.
[42] M.K. Thompson, Increasing the Rigor of Freshman Design Education, in: International Association of Societies of Design Research Conference (IASDR 2009), 2009.
[43] M.K. Thompson, Teaching Axiomatic Design in the Freshman Year: A Case Study at KAIST, in: Fifth International Conference on Axiomatic Design, Campus de Caparica, 2009.
[44] L. Torp and S. Sage, Problems as Possibilities: Problem-Based Learning for K-16 Education, Association for Supervision and Curriculum Development, Alexandria, VA, 2002.
[45] K.T. Ulrich and S.D. Eppinger, Product Design and Development, McGraw-Hill, New York, 2004.
[46] E. von Hippel, Sources of Innovation, Oxford University Press, New York, NY, 1998.
[47] K.L. Wood, R.E. Mohan, S. Kaijima, S. Dritsas, D.D. Frey, C.K. White, D.D. Jensen, and R.H. Crawford, A Symphony of Designiettes: Exploring the Boundaries of Design Thinking in Engineering Education, in: ASEE Annual Conference, San Antonio, TX, USA, 2012.
[48] Y. Xi, A Wonderful Journey: To Create a Unique International University, Xi’an Jiaotong-Liverpool University, Suzhou, China, 2013.
[49] Y. Zeng, Axiomatic Theory of Design Modeling, Journal of Integrated Design and Process Science 6 (2002), 1-28.
[50] Y. Zeng, Environment-Based Design (EBD), in: ASME 2011 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference IDETC/CIE 2011, Washington, DC, 2011.
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-477
477
Framework of Concurrent Design Facility for Aerospace Engineering Education Based on Cloud Computing

Dajun XU a,1, Cees BIL b and Guobiao CAI a

a School of Astronautics, Beihang University, Beijing, China
b School of Aerospace, Mechanical and Manufacturing Engineering, RMIT University, Melbourne, Australia
Abstract. A Concurrent Design Facility (CDF) is an effective and efficient means of implementing the Concurrent Engineering methodology. In aerospace engineering education, a CDF is invaluable to lecturers because it enables an entire student team to gain cross-discipline skills while staying at the cutting edge of technology. Establishing a CDF, however, typically requires substantial investment in hardware and software. This paper presents a low-cost CDF framework based on cloud computing that is suitable for classroom aerospace engineering education. An important aspect of a CDF is collaboration between multidisciplinary specialists, or virtual specialists in engineering education. Traditionally, collaboration in a CDF requires dedicated hardware or software to exchange files, manage knowledge, collaboratively write reports, and even communicate remotely with other work teams. The emergence and development of cloud computing have made these requirements easy to fulfill. Public cloud computing services, such as Google Docs, Google Drive, OneDrive, Dropbox, and Mendeley, can be used in a CDF for education to save investment in hardware and software related to data, file, and information exchange, and Google Talk and Skype can be used to communicate remotely with work teams at other locations. This CDF framework has many benefits, including low hardware, software, and staffing costs, reduced preparation time, and easy deployment in classroom education.

Keywords. Concurrent Design Facility, Aerospace Engineering Education
Introduction

A Concurrent Design Facility (CDF) is a workspace and information system that allows multidisciplinary experts to work in a focused environment and collaborate on design. The development of CDFs has a history of nearly 20 years since the first facility, the PDC, opened in 1994 [1]. Up to now, more than 20 CDFs [2]~[26] have been established around the world, and they have been used to design aircraft, spacecraft, and space missions. With the rapid development of new technologies, the aerospace industry faces the huge challenge of how to design aircraft or spacecraft missions in a fast and low-cost manner. Many design alternatives need to be evaluated and screened. A CDF 1
Corresponding Author, Mail:
[email protected]
478
D. Xu et al. / Framework of Concurrent Design Facility for Aerospace Engineering Education
based on concurrent engineering methodology is an effective and efficient approach to solve this problem. Applications of modern information systems enabled fundamental improvements to the system engineering process through the use of real time concurrent engineering. Many design teams have demonstrated dramatic savings in time and money compared with the traditional process for systems conceptual design. CDF is effective and efficient has been proven by design cases and experiences of research team which apply CDF in their work. Many industry and academic research institutes in the field of aerospace are implementing or are developing their own CDF. It is obvious that more aerospace vehicle designs and flight mission assessments will be conducted in CDF and aerospace engineering education in the CDF environment will also be a trend in many universities. This paper summarize some CDFs in universities for aerospace engineering education and based on analysis of essential requirements of a general CDF a low cost CDF framework is presented, which is suitable for aerospace engineering education in class room based on cloud computing. An important aspect of CDF is collaboration between multidisciplinary specialists or virtual specialists in the environment of engineering education. Collaboration in CDF requirement some dedicated hardware or software to exchange file, manage knowledge, collaborative work on writing report, and even remote communicate with other work teams in traditional means. Emergence and development of cloud computing have made above-mentioned requirements become very easy to be fulfilled. Some public cloud computing servers, such as Google Drive, OneDrive, Dropbox, Mendeley, can be used in CDF for education to save investment on hardware and software related to data, file, and information exchange. Google Talk and skype can be used to remotely communicate with work team at other location. 
This CDF framework has many benefits, including low cost in hardware, software, and staffing, reduced preparation time, and easy deployment in classroom education.
1. CDF for Aerospace Education

Universities, as academic research powers, always stand at the leading edge of new technology. Some universities paid attention to the CDF from the moment it emerged, and have established their own CDFs to study this new design methodology for aircraft and spacecraft. These CDFs are also applied in aerospace engineering education.

1.1. Design Environment for Integrated Concurrent Engineering (DE-ICE) at MIT

A teaching concurrent engineering environment can be found in the Design Environment for Integrated Concurrent Engineering (DE-ICE) at MIT. This center has 14 design stations and two projectors. PCs are not provided in the environment, as each student receives a campus laptop upon entering the college. The facility is designed around two modes: design mode and teaching mode [6].

1.2. Space System Concept Center (S2C2) at Technical University of Munich

The Technical University of Munich has also developed a concurrent engineering environment as a teaching tool. Using approximately 10 user stations, the environment
provides students with hands-on exposure to the tools and methodologies used in the aerospace industry. Excel-based models are used to integrate the design, and MuSSat is used to allow students to design as they find the time [6].

1.3. Laboratory for Spacecraft and Mission Design (LSMD) at California Institute of Technology

The Laboratory for Spacecraft and Mission Design (LSMD) at the California Institute of Technology was developed in 1999 and is modeled after JPL's PDC. It currently houses three Macintoshes and five PCs and is primarily used as a teaching tool. The LSMD uses self-developed tools to teach students about concurrent engineering design over the course of a semester. Since the design is drawn out over a long period of time, little has been required in the form of process automation [6].

1.4. Space Systems Analysis Laboratory (SSAL) Concurrent Engineering Facility at Utah State University

Utah State University has a growing interest in space system design and has established a concurrent engineering environment for two reasons. The first and foremost is to augment the existing space research teaching at the university; the second is to perform system-level designs of space systems. They chose the PDC and CDC as models for the development of an in-house center and intend to team with other centers to test distributed concurrent design in the near future [13].

1.5. The Collaborative Design Environment (CoDE) at Georgia Institute of Technology

CoDE belongs to the Aerospace Systems Design Laboratory (ASDL) of the Georgia Institute of Technology. The objective of CoDE is to rapidly execute collaborative design conceptualizations by fostering designers' creativity in multidisciplinary design teams.
The environment set out with two missions: "enhance the fidelity of simulation models for design space exploration and robust design methodologies," and "create a national asset for the development of next-generation conceptual design facilities and approaches" [14][16].

1.6. Concurrent Design Facility at the International Space University

The International Space University (ISU) received its own Concurrent Design Facility (CDF) under the continued support of the European Space Agency (ESA). This facility opens to ISU's students the possibility of getting to know the principles of concurrent engineering and its means of application. During the two years of operation of the ISU CDF, workshops and assignments for some of ISU's programs were devised and put into practice, in which technical and non-technical students are exposed to the process of space mission design applying concurrent engineering, in particular to the design of remote sensing and telecommunications spacecraft [20].
2. Essential Requirements of a General CDF

2.1. Team, Hardware, and Software

The paper [27] compared collaborative engineering environments reported in the literature with respect to three specific aspects: software, hardware, and peopleware configurations. A taxonomy was presented to fully describe each of the different environments. Using this taxonomy, an intersecting set of features from these environments may be used to develop future environments for customized purposes.

In modern engineering, design software has taken on an enormous role. These tools are now commonplace and are used to communicate business, financial, and technical information. Numerous software packages are required or desired to operate a successful concurrent engineering environment: software to facilitate collaboration, support analysis, support integration, perform modeling, and support visualization. These packages can be commercial off-the-shelf (COTS) items, modified COTS, or custom in-house tools, and different combinations of software are found in each CEE.

Another key consideration in establishing a concurrent engineering environment is the electronic/computational hardware. The hardware serves many different functions within the environment: supporting the individual engineer/designer, servers tying the individual hardware components together, visualization hardware, communication hardware, and individual domain-specific pieces of hardware. All of these hardware items work in concert to support the concurrent engineering activities within the environment. Hardware for the individual engineer may include permanent desktop systems, mobile preconfigured systems within the CEE, and support for external mobile systems. As with the software, multiple combinations of hardware solutions are deployed at concurrent engineering facilities around the world, and no single solution stands out as the best.
The final key aspect is how human beings interact with each other and with the design: peopleware. Although engineering design is meant to be a technical activity, it truly functions as a social activity. It was confirmed that team introductions, pooling of knowledge, and team maintenance account for 10–20% of design time. At the heart of concurrent engineering lie five distinct decision areas when establishing a concurrent engineering environment: the roles of the team members, the definition of the process, team formation strategies, who addresses conflict, and how concurrent the operation of the environment is.

2.2. Essential Requirements of a General CDF

A survey of concurrent engineering environments (CEEs) was presented in [27], which summarized their key similarities and differences. Peopleware is a key aspect of a CDF, but the first step in establishing one is to prepare the software and hardware, where the larger part of the investment will be made. The essential requirements of a general CDF are therefore tabulated in Table 1. Satisfying these requirements gives a CDF the basic capabilities and functions to analyze, simulate, integrate, exchange data, visualize design status, and communicate with remote design centers.
Table 1. Essential Requirements of a General CDF

  Hardware:
    PCs / Workstations; interface for laptops
    Server; information server
    Projectors; visualization; smart board
    Audio systems

  Software:
    Communication
    Collaboration    Commercial: [Novell]
    Analysis         Commercial: [… ; in-house tools]
    Visualization    Commercial: [Pro/E; CATIA; Solidworks]
    Integration      Commercial: [iSight; ModelCenter]
    Modeling         In-house tools: [Excel+VB]
3. A Collaborative Architecture Based on Cloud Computing

3.1. About Cloud Computing

The term 'cloud computing' emerged in publications in 2009. Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction [28].

3.2. Requirements of Collaboration in CDF

An important aspect of a CDF is collaboration between multidisciplinary specialists, or virtual specialists in engineering education. Collaboration in a CDF traditionally requires dedicated hardware or software to exchange files, manage knowledge, write reports collaboratively, and communicate remotely with other work teams. The requirements of collaboration in a CDF can be summarized in four items: document collaboration, file exchange, knowledge management, and remote communication. In a CDF, spreadsheets are usually used as a simple integrated model that collects data from each specialist and calculates the performance of the vehicle or system. Many files, such as CAD files, need to be sent to other specialists for flow field simulation or structural analysis. Literature related to the current project needs to be managed and classified. Sometimes remote communication is also necessary to connect with people at other locations.
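As an illustration of such a spreadsheet-style integrated model, the sketch below aggregates subsystem inputs into a system-level figure, much as a shared cloud spreadsheet would. All parameter names, values, and the margin are invented for illustration; they are not from the paper.

```python
# Minimal sketch of a spreadsheet-style integrated model: each specialist
# contributes one subsystem value, and a system-level "sheet" aggregates
# them. Names and numbers are hypothetical.

def system_mass_budget(subsystems, margin=0.2):
    """Sum the subsystem masses (kg) and apply a system-level margin."""
    dry_mass = sum(subsystems.values())
    return dry_mass * (1.0 + margin)

# Each entry plays the role of one specialist's input cell (kg).
inputs = {
    "structure": 120.0,
    "power": 45.0,
    "payload": 60.0,
    "avionics": 25.0,
}

total = system_mass_budget(inputs)
print(total)  # 250 kg dry mass plus a 20% margin: 300.0
```

In a real CDF the aggregation sheet would of course carry far more coupled parameters, but the pattern of per-specialist inputs feeding one shared system model is the same.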
3.3. Collaboration Based on Cloud Computing

A general CDF is usually equipped with dedicated hardware and software to meet the requirements mentioned above, such as an information server and communication software, and this equipment consumes considerable funds. This problem can now be solved by cloud computing at low cost. Table 2 shows a solution for collaboration in a CDF based on cloud technology.

Table 2. Collaboration based on Cloud Technology

  Requirement of Collaboration in CDF    Options based on Cloud Technology
  Document collaboration                 Google Docs, or OneDrive
  File exchange                          Dropbox, or OneDrive
  Knowledge management                   Mendeley
  Remote communication                   Google Talk, or Skype
Google Drive is a file storage and synchronization service provided by Google, released on April 24, 2012, which provides users with cloud storage, file sharing, and collaborative editing. Google Drive is now the home of Google Docs, a suite of productivity applications that offer collaborative editing of documents, spreadsheets, presentations, and more [29][30]. Dropbox is a file hosting service operated by Dropbox, Inc., that offers cloud storage, file synchronization, and client software. Dropbox allows users to create a special folder on each of their computers, which Dropbox then synchronizes so that it appears to be the same folder (with the same contents) regardless of which computer is used to view it. Files placed in this folder are also accessible through a website and mobile phone applications [31]. OneDrive is also a file hosting service with functions similar to Dropbox, but it can be integrated with Microsoft Office [32]. Mendeley is a desktop and web program for managing and sharing research papers, discovering research data, and collaborating online. It combines Mendeley Desktop, a PDF and reference management application (available for Windows, Mac, and Linux), with Mendeley Web, an online social network for researchers. Mendeley requires the user to store all basic citation data on its servers; storing copies of documents is at the user's discretion. Upon registration, Mendeley provides the user with 2 GB of free web storage space, which is upgradeable at low cost [33]. Google Talk is an instant messaging service that provides both text and voice communication [34]. Skype allows users to communicate with peers by voice using a microphone, by video using a webcam, and by instant messaging over the Internet. Phone calls may be placed to recipients on traditional telephone networks.
Calls to other users within the Skype service are free of charge, while calls to landline and mobile phones are charged via a debit-based user account system. Skype has also become popular for its additional features, including file transfer and videoconferencing [35].

3.4. Benefits

Applying cloud technology in the CDF environment brings several benefits. First, the costs of hardware, software, and staffing are reduced, as no dedicated equipment or software needs to be purchased and no one needs to be employed to maintain the computer system. Second, preparation time is saved, so the project of establishing the CDF can be completed sooner. Third, all these cloud technologies are familiar to almost everyone, so they are very easy to use, and collaboration in the CDF can be realized without
any special training. If a classroom is equipped with a projector and a large screen, and Wi-Fi is provided on campus, a CDF education environment can be built easily and quickly in the classroom using the cloud services mentioned above.
4. Conclusion

The effectiveness and efficiency of the CDF have been proven by the design cases and experience of many research teams over the past twenty years. Some universities have also established their own CDFs for academic research and aerospace engineering education. The essential requirements of a general CDF are analyzed by comparing collaborative engineering environments reported in the literature with respect to three specific aspects: software, hardware, and peopleware configurations. Several cloud computing technologies, including Google Drive, Dropbox, OneDrive, Mendeley, Google Talk, and Skype, are presented to realize collaboration in the CDF environment, with many benefits, such as reduced cost in hardware, software, and staffing, reduced preparation time, and ease of use. This simple, low-cost CDF framework is suitable for implementation in classroom education in aerospace engineering.
References

[1] Jeffrey L. Smith, Concurrent Engineering in the Jet Propulsion Laboratory Project Design Center, 98AMTC-83.
[2] Joseph A. Aguilar, Andrew B. Dawdy, and Glenn W. Law, The Aerospace Corporation's Concept Design Center, 1998.
[3] M. Bandecchi, B. Melton, B. Gardini, The ESA/ESTEC Concurrent Design Facility, Proceedings of EuSEC 2000, pp. 329–336.
[4] Julie C. Heim, Kevin K. Parsons, Sonya F. Sepahban, TRW Process Improvements for Rapid Concept Designs, 1999, pp. 325–333.
[5] Joseph A. Aguilar, Andrew Dawdy, Scope vs. Detail: The Teams of the Concept Design Center, IEEE Aerospace Conference Proceedings 2000, pp. 465–481.
[6] Robert Shishko, The Proliferation of PDC-Type Environments in Industry and Universities, 2000.
[7] F. Pena-Mora, K. Hussein, S. Vadhavkar, CAIRO: a concurrent engineering meeting environment for virtual design teams, Artificial Intelligence in Engineering, No. 14, 2000, pp. 203–219.
[8] Donald W. Monell, William M. Piland, Aerospace Systems Design in NASA's Collaborative Engineering Environment, IAF-99.U.1.01, 1999.
[9] Michael N. Abreu, Conceptual Design Tools for the NPS Spacecraft Design Center, Master Thesis, Naval Postgraduate School, 2001.
[10] Charles M. Reynerson, Developing an Efficient Space System Rapid Design Center, IEEE Aerospace Conference Proceedings 2001, pp. 3517–3522.
[11] G. Karpati, J. Martin, M. Steiner, The Integrated Mission Design Center (IMDC) at NASA Goddard Space Flight Center, IEEE Aerospace Conference Proceedings 2002, pp. 3657–3667.
[12] Linda F. Halle, Michael J. Kramer, M. Denisa Scott, Space Systems Acquisitions Today: Systems Modeling, Design and Development Improvements, Integrating the Concept Design Center (CDC) and the NRO Analysis Center (NAC), IEEE Aerospace Conference Proceedings 2003, pp. 3647–3656.
[13] Todd J. Mosher, Jeffrey Kwong, The Space Systems Analysis Laboratory: Utah State University's New Concurrent Engineering Facility, IEEE Aerospace Conference Proceedings 2004, pp. 3866–3872.
[14] Jan Osburg, Dimitri Mavris, A Collaborative Design Environment to Support Multidisciplinary Conceptual Systems Design, AIAA 2005-01-3435.
[15] Thomas Coffee, The Future of Integrated Concurrent Engineering in Spacecraft Design, Research Report of Massachusetts Institute of Technology, 2006.
[16] Hernando Jimenez, Dimitri N. Mavris, A Framework for Collaborative Design in Engineering Education, AIAA 2007-301.
[17] V. Schaus, P. Fischer, D. Ludtke, Concurrent Engineering Software Development at German Aerospace Center – Status and Outlook, Engineering for Space, No. 1, 2010.
[18] Daniel Schubert, Oliver Romberg, Sebastian Kurowski, A New Knowledge Management System for Concurrent Engineering Facilities, 4th International Workshop on System & Concurrent Engineering for Space Applications, SECESA 2010.
[19] Philipp M. Fischer, Volker Schaus, Andreas Gerndt, Design Model Data Exchange Between Concurrent Engineering Facilities by Means of Model Transformation, 13th NASA-ESA Workshop on Product Data Exchange, 2011.
[20] Paulo Esteves, Emmanouil Detsis, Concurrent Engineering at the International Space University, 2011.
[21] M. Marcozzi, G. Campolo, L. Mazzini, TAS-I Integrated System Design Center Activities for Remote Sensing Satellites, SECESA 2010.
[22] First Studies of ASI Concurrent Engineering Facility (CEF), 4th International Workshop on System & Concurrent Engineering for Space Applications, SECESA 2010.
[23] A. Ivanov, M. Noca, M. Borgeaud, Concurrent Design Facility at the Space Center EPFL, 4th International Workshop on System & Concurrent Engineering for Space Applications, SECESA 2010.
[24] CO2DE: A Design Support System for Collaborative Design, Journal of Engineering Design, Vol. 21, No. 1 (2010), pp. 31–48.
[25] Massimiliano Vasile, Concurrent Design Lab in Glasgow, 2006.
[26] Kazuhiko Yotsumoto, Atsushi Noda, Masashi Okada, Introduction of Mission Design Center in JAXA, 2005.
[27] Jonathan Osborn, Joshua D. Summers, Gregory M. Mocko, Review of Collaborative Engineering Environments: Software, Hardware, Peopleware, International Conference on Engineering Design, ICED11, 2011.
[28] Moises Dutra, Minh Tri Nguyen, Parisa Ghodous, An Approach to Adapt Collaborative Architectures to Cloud Computing, Advanced Concurrent Engineering, DOI: 10.1007/978-0-85729-799-0_19, Springer-Verlag London Limited, 2011.
[29] http://en.wikipedia.org/wiki/Google_Docs
[30] http://en.wikipedia.org/wiki/Google_Drive
[31] http://en.wikipedia.org/wiki/Dropbox_(service)
[32] http://en.wikipedia.org/wiki/Windows_Live_SkyDrive
[33] http://en.wikipedia.org/wiki/Mendeley
[34] http://en.wikipedia.org/wiki/Google_Talk
[35] http://en.wikipedia.org/wiki/Skype
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-485
Experience with Master Theses Run as Projects

ir. Jean Pierre Tollenboom 1
Independent Consultant
Abstract. An experiment with master students in engineering has been performed for the second year. The school involved is KU Leuven Campus De Nayer, a renowned school for industrial engineers in Belgium. The first experiment took place during 2012-2013 and involved about 35 projects and 45 students. This past academic year, 2013-2014, involved 41 projects and about 60 students. The objective of the experiment is to expose the students to modern techniques for project planning and monitoring. Every master thesis is conceived as a project for which a professional planning has to be set up. These projects are then monitored using the same tools and techniques as used in the professional environment. In this process the students are exposed to typical topics such as:

• setting up a clean project structure, the project tree
• defining the logics, the task dependencies
• estimating task durations, using rational techniques
• optimising the project flow by organising concurrent tasks
• when working in a team, optimising the work load by organising concurrency in the tasks
• comprehending the project processes and discovering their dynamics
• experiencing the project reporting, learning to analyse the information and discovering how conclusions can be drawn from it
• discovering on what aspects one can act when some recovery of delays is needed.

The whole project is intensively reported, and all stakeholders, including professors, assistants, promoters and co-students, are fully informed. In this paper we give a short introduction to Dynamic Project Control (DPC), the method and tools that were used to give substance to the project.

Keywords. Concurrent Engineering, Education, Master Students Engineering, Project Controls, Dynamic Project Control
Introduction

This project has been initiated in response to the author's concern about the near-complete absence of training in matters of project management and, more particularly, in the quantitative methods associated with project management. The author considers this a major shortcoming in the education of industrial engineers.
1 Corresponding Author, President Jury De Nayer School for Industrial Engineers.
J.P. Tollenboom / Experience with Master Theses Ran as Projects
This is even more true considering the increasing importance of concurrent engineering, which, as we know, requires refined and top-level project management skills. Building on a newly developed method and tools for project planning and monitoring (DPC, Dynamic Project Control) we defined a method for master students in engineering that would expose them to all aspects of modern projects. This method has now been used for the second time, and on a somewhat larger scale. The results are very promising, and we are now convinced that our approach opens the path to a simple though effective method for teaching modern project scheduling and monitoring techniques to engineering students.
1. A brief introduction to the DPC method

1.1. Origin

Over the past 10 years, the DPC method (Dynamic Project Control) has been developed from within and for the professional project world. Most of those projects are in the sphere of industry and infrastructure, and most of them use CE as the driving mechanism. The method is primarily known for its tools, basic and advanced, for monitoring the physical progress of a project in both its static and its dynamic behaviour. There is ample documentation available on DPC; the most comprehensive documents can be found at www.jptollenboom.com.

1.2. Monitoring processes

We start from the assumption that any project can be considered as being built from a collection of processes. Every process consists of a collection of activities. The physical progress of every process can be monitored by observing the physical progress of each of its activities and then applying a suitable aggregation algorithm. Ideally, a process should consist of homogeneous activities, i.e. activities of the same kind. The progress of every process has two aspects:

• A static value: the current progress value
• A dynamic value: the current progress rate, or progress speed.
Both values can be used to assess the present status as compared to the originally scheduled status and to produce reliable predictions of the final outcome. In this sense, the values produced are feedback values that can be used according to the paradigm known from process control.

1.3. The DPC tools

1.3.1. S-curves

Figure 1 is an S-curve: the graphical display of information on the process being monitored. Table 1 details the contents of an S-curve.
Figure 1. S-curve

Table 1. S-curve details

  1  Horizontal axis    days elapsed
  2  Vertical axis      % complete value (0-100)
  3  Blue line          the progress line as scheduled                computed from the scheduled baseline
  4  Gray area          the activity profile as scheduled             computed from the scheduled baseline
  5  Black dotted line  the observed progress line                    registered progress line
  6  Fat black dot      the latest status point: where we stand now   registered last status point
  7  Green dot          a marker at 50% progress and 40% duration     computed
  8  Red dot            a marker at 50% progress and 60% duration     computed
  9  Black dot          midpoint: 50% progress, 50% duration          computed
1.3.1.1. Reading the static values

The static value of the registered progress is displayed by the last status point (item 5 in Figure 1). The coloured bands correspond to different degrees of safety towards timely completion of the process:

• Blue area: ahead of schedule
• Green: very safe position
• Yellow: safe position, but should not degrade
• Orange: unsafe position, should improve
• Red area: very unsafe position.
The bandwidths are computed according to strict rules.
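The band reading can be expressed as a small classification function. Note that the paper states only that the bandwidths follow strict rules, without giving them, so the thresholds below are purely illustrative placeholders, not DPC's actual rules.

```python
def static_status(scheduled_pct, observed_pct,
                  bands=((0.0, "blue: ahead of schedule"),
                         (-5.0, "green: very safe"),
                         (-10.0, "yellow: safe, should not degrade"),
                         (-20.0, "orange: unsafe, should improve"))):
    """Classify the last status point by its deviation from schedule.

    `bands` maps a minimum deviation (observed - scheduled, in % complete)
    to a label; the thresholds here are illustrative, not DPC's rules.
    """
    deviation = observed_pct - scheduled_pct
    for threshold, label in bands:
        if deviation >= threshold:
            return label
    return "red: very unsafe"

print(static_status(50.0, 42.0))  # yellow: safe, should not degrade
```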
Figure 2. The areas
1.3.1.2. Reading the dynamic information: check the trend line

The trend line is the average progress line of the observed track (the black dotted line). Its slope is the actual average progress speed (in % complete per day). Compare this slope to the average slope of the scheduled progress line (the blue line) for the same period.
Figure 3. Dynamics
Rules:

• V1 is the progress rate of the scheduled S-curve
• V2 is the average progress rate of the track
• If V2 < V1, the delay will increase
• If V2 = V1, the delay will remain constant
• If V2 > V1, the delay will decrease
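The trend-line rule can be sketched as a short function. This assumes progress is sampled as (day, % complete) pairs; the first-to-last averaging and the tolerance are illustrative choices, not part of the DPC definition.

```python
def average_rate(points):
    """Average progress rate (% complete per day) over a list of
    (day, percent_complete) observations, taken first to last."""
    (d0, p0), (d1, p1) = points[0], points[-1]
    return (p1 - p0) / (d1 - d0)

def delay_trend(v_schedule, v_track, tol=1e-9):
    """Apply the V1/V2 rule: compare the scheduled rate V1 with the
    observed average rate V2 over the same period."""
    if v_track < v_schedule - tol:
        return "delay will increase"
    if v_track > v_schedule + tol:
        return "delay will decrease"
    return "delay will remain constant"

# Scheduled rate: 2 %/day. Observed track over days 10..20: 15% -> 40%.
v1 = 2.0
v2 = average_rate([(10, 15.0), (20, 40.0)])  # 2.5 %/day
print(delay_trend(v1, v2))  # delay will decrease
```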
1.4. Forecasting

Two forecasting methods have been developed:

• Linear extrapolation: the end date of a process is estimated from the current linearised progress line
• Affine transformation: the originally scheduled progress line is transformed to match the observed progress line, using affine transforms.

From here an estimate of the end date, under steady conditions, can be obtained.
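Both methods can be sketched in a few lines. The linear extrapolation follows directly from the definition; for the affine transformation, only one simple choice is shown (a pure time scaling that forces the scheduled curve through the observed status point), which may differ from the transform DPC actually uses.

```python
def forecast_end_day(day_now, pct_now, rate):
    """Linear extrapolation: estimate the day at which 100% complete is
    reached, assuming the current average progress rate (%/day) holds."""
    if rate <= 0:
        raise ValueError("non-positive progress rate: cannot extrapolate")
    return day_now + (100.0 - pct_now) / rate

def forecast_affine(scheduled_day_at, day_now, pct_now, scheduled_end):
    """One simple affine choice: stretch the scheduled progress line in
    time so that it passes through the observed status point."""
    factor = day_now / scheduled_day_at(pct_now)
    return scheduled_end * factor

# 60% complete on day 120, progressing at 0.5 %/day on average:
print(forecast_end_day(120, 60.0, 0.5))                 # 200.0
# Linear schedule of 1 %/day (100-day plan), same observation:
print(forecast_affine(lambda p: p, 120, 60.0, 100.0))   # 200.0
```

For this deliberately simple example the two estimates agree; on a non-linear schedule they generally will not.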
2. Educational project with a school for industrial engineers

2.1. Introduction

As said before, there is, in our opinion, a lack of exposure of future engineers to the concepts and techniques related to project management. Sometimes there is some exposure, but it often limits itself to trivialities and possibly some basic training in the use of scheduling software. As we all know, learning to use scheduling software has nothing to do with mastering CE-driven projects. So the idea was to set up an environment in which the master students would be exposed to all the ingredients of CE-driven projects. We chose to have them run their master thesis as a full-fledged project. In doing so they would have to go through all the steps of real-world projects:

• Understanding the processes
• Understanding the interactions
• Coordinating the activities
• Deterministic planning
• Sequentially deterministic planning
• Process monitoring
This would all happen on a small scale. But more importantly, the complete picture would be revealed to them in this process.

2.2. Description

The master students in industrial engineering of KU Leuven Campus De Nayer must produce a master thesis by the end of their academic year (June 2014). They have to present their work before a jury, which then decides on the grade. It happens that the presented work is found insufficient and that complementary work is requested. It also happens (on rare occasions) that the work is rejected and the student leaves the school without a diploma. For the second time, the final-year master students ran their master thesis work as a project. They used Smartsheet and DPC to schedule and track the progress of their work. The first experiment was run in 2012-2013 and involved about 45 students and 35 projects. This year, the academic year 2013-2014, about 60 students were involved
in 41 projects. Some master projects were run by a single student, some by a pair of students. A master work has a standardised structure:

• literature study
• theoretical part
• designing an experiment or prototype
• building the test or demo device
• running experiments
• reporting the results
• writing up the final book
• presenting the work to the jury.
Such master thesis work is a textbook example of a project. All aspects of project controls can be applied, be it on a miniature scale. The duration of the master work is almost a full academic year, so this environment is ideal for students to learn by discovering.

2.3. Organisation

The college signed up for a Team 3 plan with Smartsheet, which grants them 3 sheet creators and 150 sheets. One assistant has been appointed as key user. The key user creates all Gantt sheets 2 and shares them with:

• the student(s): one sheet per student or pair of students
• the head of department
• the promoter(s) of the master thesis
• the DPC admin
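To make this administration concrete, the sketch below shows one hypothetical shape it could take: a central list with one row per project, pointing at its shared Gantt sheet. All column names and ids are invented for illustration; the actual format used with Smartsheet and DPC is not given in the paper.

```python
import csv
import io

# Hypothetical "job list": one row per master-thesis project, with the id
# of its Smartsheet Gantt sheet and some admin data. Columns are invented.
job_list_csv = """project,sheet_id,students,promoter
Thesis-01,100001,1,Prof. A
Thesis-02,100002,2,Prof. B
"""

jobs = list(csv.DictReader(io.StringIO(job_list_csv)))

# An engine-style pass over the list: visit every project's sheet id.
sheet_ids = [row["sheet_id"] for row in jobs]
print(sheet_ids)  # ['100001', '100002']
```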
At the start, the Gantt sheets are empty copies of a template. The students must construct their schedule in these sheet instances. There is one "job list", created by the DPC admin, which lists all projects, their Gantt sheet ids, and some other admin data. This list is shared with the key user; it drives the DPC engine. The students attended two introductory lessons of 1.5 hours each. The first lesson was on the project environment and general context; the second was on the detailed technical aspects of project scheduling, the use of Smartsheet, and how to read the reports. One 1.5-hour introduction was also given to the promoters involved in monitoring the students.

2.4. Educational value

Unfortunately, it often happens that masters in industrial engineering, not to speak of MScs in engineering, finish their studies with little if any insight into project scheduling and control techniques. Yet, in many cases, their first and last job during their career may well be running part or the whole of a project.
2 Gantt sheet: the graphical representation of a schedule with tasks as time bars.
And so we witness, time and again, young engineers starting from scratch and repeating the same mistakes all over, eventually getting stuck in the standard set of bad habits. So we think it is a necessity to offer engineering students a thorough exposure to modern project scheduling and control techniques. We think the Smartsheet-DPC couple offers an ideal environment for students to become aware of the concepts related to project scheduling and controls, and to acquire experience in these matters. These are the major reasons:

• the system is very accessible
• no time is lost on learning to master a zillion buttons or keystrokes
• from day one, focus can be put on the project scheduling and control techniques
• the scope of the project is, by its nature, limited in volume, yet displays the same structure as the most complicated projects
• the techniques used may be a reduced set of what is currently used in the professional world, yet all the basics are covered and combined into a coherent system.

2.5. Learning by discovering

The exercise in se is one of self-control. The student only schedules and controls his own work. He must track, analyse and report on his own progress. When corrective actions have to be taken in order to recover incurred delays, he will talk to himself. This experience will make him discover the difficulties of being objective, the common pitfalls when reporting progress, and many things of the same order. Later, when he manages people in a project context, he will know:

• what people can be asked to do in terms of scheduling and control
• what can be expected in terms of reactions and problems
• what pitfalls are to be avoided, and how
If project-supported teaching has to be introduced, this is the ideal path.

2.6. Value for academic personnel

There are two levels on which valuable content is created:

• The short term: how the individual project teams are performing
• The long term: how the project-related behaviour of students is evolving.
In the short term, the monitoring of the students is done by inspecting their progress reports and matching these against observed progress, as is normally done by the thesis promoters. This leads to evaluations and classifications based on verified facts. In the long term, as experience and project histories pile up, a wealth of statistical information can be extracted. Patterns can be detected, e.g. which types of subject lead to better results than others.
492
2.7. Lessons learned from the first experiment
The first experiment involved some 45 students and 35 projects. The exercise was completely free: no grades, no points, no credits were to be earned. Only the personal satisfaction of learning something extra was put forward. At the end of the road we saw this:
- 15% didn't bother to start with scheduling;
- 85% did set up a schedule and started tracking progress;
- 20% did so till the very end of their project; the balance stopped somewhere en route.
We noticed that students who started to show substantial delays stopped tracking progress. However strange it may sound, this is a widespread attitude, also found in the professional world. Of the 20% that went all the way, all produced high-quality end work. A few displayed quasi-professional project-related behaviour. One of the best performances is displayed in Figure 4.
Figure 4. Example Student
These are the specific lessons drawn from this example:
- Area 1: tracking was started a bit late, so there was a bad surprise when a substantial delay appeared. → Start tracking at an early stage to avoid surprises.
- Area 2: after a recovery effort was made with good results, progress stagnated. This can be traced back to the concurrent exam period. → Do not overload your resources, or progress targets will not be met.
- Area 3: a period of good progress rate, with a little delay. → When the progress track runs parallel to the scheduled track, the progress rate is OK.
- Area 4: a recovery effort was initiated well in time (about a month before the end date). → Do not wait until the last moment to start extra efforts: it pays off in the quality of the end product.
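The delay signals described in Areas 1-4 amount to comparing the actual progress curve against the scheduled one at each reporting date. A minimal sketch of that comparison, using hypothetical progress figures; the function name, tolerance band and dates are illustrative and not part of the Smartsheet-DPC system:

```python
from datetime import date

def delay_report(planned, actual, tolerance=5.0):
    """Compare actual cumulative progress (%) against the plan.

    `planned` and `actual` map reporting dates to cumulative percent
    complete.  A missing actual entry mirrors the "stopped tracking"
    behaviour observed in the experiment.  Illustrative sketch only.
    """
    report = []
    for day, plan_pct in sorted(planned.items()):
        act_pct = actual.get(day)
        if act_pct is None:
            report.append(f"{day}: no progress reported (tracking stopped?)")
        elif act_pct < plan_pct - tolerance:
            report.append(f"{day}: {plan_pct - act_pct:.0f} pts behind plan")
        else:
            report.append(f"{day}: on track")
    return report

# hypothetical student schedule and tracked progress
planned = {date(2014, 3, 1): 25.0, date(2014, 4, 1): 50.0, date(2014, 5, 1): 75.0}
actual = {date(2014, 3, 1): 24.0, date(2014, 4, 1): 38.0}
for line in delay_report(planned, actual):
    print(line)
```

Run early and often, such a check surfaces the Area 1 surprise before it grows.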
493
2.8. Lessons learned from the second experiment
This time the experiment was run under different conditions:
- participation was mandatory;
- the quality of the project control contributed to the final grade.
Table 2 gives an overview as per the end of the work (end of June 2014).

Table 2. Statistics

  #     %     Comment
  41    100   Master projects
  7     17    Excellent scheduling and tracking
  21    51    Good scheduling and tracking
  13    32    Had problems
From Table 2 we can see that about 68% of the students produced a good to excellent project schedule and track record. This is astonishing, and substantially better than the industry average. These results show that the chosen method, tools and techniques are absorbed quickly and to a satisfactory degree. We also noticed a pattern, one that was in fact expected:
- students with a good tracking result produced good end work: high quality, well-polished text;
- students with a poor tracking result produced, on average, end work of lesser quality.
We were also able to predict ahead of time who was likely to produce high-quality end work and who was not. Figure 5 displays the S-curve of the project that won the “Golden S-curve 2014”.
Figure 5. Best performance
Figure 6 shows one of the poorest performances of the group.
494
Figure 6. Poor performance
2.9. Overall conclusions
We think we may conclude that we have found a good technique to expose engineering students to most, if not all, aspects of modern CE-driven projects. We show that this can be realised with a minimum of “ex cathedra” teaching and a lot of learning by discovering. We also show that the effect of the exercise on the students (the degree of comprehension, the degree of mastery of the proposed techniques) can itself be monitored by the educators. This comes with an extra bonus: when it becomes clear that a student is running into problems, this can be discussed on a factual basis. We are now convinced that our approach opens the path to a simple though effective method to teach modern project scheduling and monitoring techniques to engineering students.
Part VIII Simulation of Complex Systems
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-497
497
Simulation on the Combustion System Work Process for Internal Combustion Engine by Using KIVA-3V

SHI Yan a,1, LIU Yongfeng a,1, JIA Xiaoshe a,1, PEI Pucheng b, LU Yong b and YI Li b

a School of Mechanical-Electronic and Automobile Engineering, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
b State Key Laboratory of Automotive Safety and Energy, Tsinghua University, Beijing 100084, China
b New Technologies and Materials Department, BAIC MOTOR Corporation, Ltd., Beijing Automotive Technology Center, Beijing 101300, China
Abstract: In order to accurately simulate the combustion system work process of an internal combustion engine, the KIVA-3V software is used to generate the combustion system meshes. During mesh processing, the K3PREP generator uses block-structured technology to generate 60° computational meshes of the combustion system based on the modified 4JB1 engine. Mesh generation comprises three steps. First, the calculation area is divided into several blocks. Second, these blocks are numbered and defined by their structure parameters separately. Finally, the computational meshes are generated by patching the blocks together in a certain order. Variables such as cylinder pressure, cylinder temperature, NOX and soot emissions are predicted and analyzed using a single-injection strategy. The results show that the K3PREP generator provides a good foundation for subsequent internal combustion engine simulation. It was also found that NOX production begins in the moderate burning period, quickly reaches a peak and then remains constant, while soot production occurs mainly from the late fast burning period to the moderate burning period, after which most of the soot is oxidized.

Keywords: Internal combustion engine, Combustion system, Simulation, KIVA-3V
Introduction

The internal combustion engine is widely used in the automotive, marine, construction machinery and other industries thanks to its high efficiency and strong safety performance. With this wide utilization, however, come issues such as excessive emissions, which pollute the environment and endanger people's health [1]. The research and development of internal combustion engines with high efficiency, low fuel consumption and minimal pollution has therefore become the researchers' target. Combustion in the cylinder of an internal combustion engine is the main factor influencing its economy, power performance and emissions, so the study of the in-cylinder combustion process has always been a focus of the internal combustion engine industry [2]. At present, the study of the
1 Corresponding author: LIU Yongfeng; e-mail: [email protected]
498
Y. Shi et al. / Simulation on the Combustion System Work Process
combustion process in the cylinder of an internal combustion engine mainly relies on the test method and the numerical simulation method. The test method conducts the study using the relevant test technology on a test machine. However, it requires specialized equipment and large human and financial resources, and the test results are only surface data, from which it is difficult to reveal the deeper physical phenomena. The test method has therefore struggled to meet the needs of the technology. With the development of computer technology and computational fluid dynamics, the numerical simulation method [2] has become feasible in the field of the internal combustion engine. In numerical simulation, gridding the integration space is the basis of the numerical solution [3]; that is to say, a good-quality grid is needed to ensure the accuracy of the numerical results. Among the numerous simulation codes, the KIVA program [4,5], developed by the Los Alamos National Laboratory, is the most representative [6]. It uses the Arbitrary Lagrangian-Eulerian method to carry out the finite difference calculation and uses arbitrary hexahedral grid cells to discretize the control variables in space, which gives the program very strong capabilities in handling the geometry of the object under calculation. Nowadays the KIVA family includes KIVA-II, KIVA-3, KIVA-3V and various improved versions [4], representing the state of the art of internal combustion engine simulation today [7]. In terms of mesh modeling, the pre-processor K3PREP is mainly suited to simple or simplified engine geometries, and is powerless for more complex shapes.
This gives K3PREP a narrow application scope and long computing times, shortcomings which make it hard for K3PREP to generate grids that meet the needs of engineering practice. Professional mesh-generation software such as ANSYS ICEM CFD, by contrast, has powerful functions: it provides more than 100 solver data interfaces, its hexahedral modeling function and grid-topology principles allow friendly interaction with users, and computational grids of arbitrarily complex shape can be constructed in a visual environment, with the whole engine decomposed into several simple areas and tag sets provided for boundary conditions, geometry and geometric surfaces. Compared with finite element analysis software such as ANSYS and NASTRAN, KIVA is aimed at simulating the heat-flow phenomena of the internal combustion engine; they have different application fields.
1. Mesh Generation Technique

Mesh generation is an important part of numerical heat transfer and computational fluid dynamics. In the present working cycle of CFD & NHT, the time needed for mesh generation accounts for roughly 60% of the total time. The quality of the mesh directly affects the accuracy of the numerical results, and can even decide the success or failure of the numerical calculation.

1.1 The classification of grid cells

The cell is the basic unit of the grid. In structured grids, the quadrilateral element and the hexahedral element are commonly used; in unstructured grids, we use
499
the hexahedral, tetrahedral and pentahedral elements. Figure 1 shows the commonly used 2D and 3D grid cells.
(a) triangle  (b) rectangle  (c) tetrahedron  (d) hexahedron  (e) pentahedron

Figure 1. 2D grid cells and 3D grid cells.
1.2 Mesh types

Meshes can be divided into two categories: structured and unstructured. A structured mesh is one in which every internal node has the same number of adjacent cells. In two-dimensional space, each internal node is shared by four quadrilateral cells; in three-dimensional space, each internal node is shared by eight hexahedral cells. A structured mesh has good quality and a simple data structure. The unstructured mesh is the counterpart of the structured mesh: the nodes in the computational area do not all have the same number of adjacent cells. An unstructured mesh is more intuitive and can be more accurate, but its meshing algorithms are complex, so efficiency is not high.

1.3 The process of mesh generation

Whether for a structured or an unstructured mesh, the grid is generated by the following process:
1) Establish the geometric model. The geometric model is the carrier of the grid and its boundaries.
2) Divide the mesh. Divide the geometric model using the specified mesh type, mesh cells and mesh density.
3) Specify the boundary regions. Give each region of the model a name and a type.
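The three-step process can be sketched for the simplest possible case, a structured quadrilateral mesh on a rectangle. The function and region names below are illustrative assumptions, not part of KIVA-3V:

```python
import numpy as np

def generate_rect_mesh(lx, ly, nx, ny):
    """Three-step mesh generation on a rectangle (illustrative sketch).

    Step 1, geometric model: the rectangle [0, lx] x [0, ly].
    Step 2, divide mesh: nx*ny quadrilateral cells on structured nodes.
    Step 3, specify boundary regions: tag each boundary node with a name.
    """
    # step 2: structured node coordinates, logical index (i, j)
    x, y = np.meshgrid(np.linspace(0.0, lx, nx + 1),
                       np.linspace(0.0, ly, ny + 1), indexing="ij")
    # step 3: name the four boundary regions; interior nodes stay untagged
    tags = {}
    for i in range(nx + 1):
        for j in range(ny + 1):
            if j == 0:
                tags[(i, j)] = "bottom"
            elif j == ny:
                tags[(i, j)] = "top"
            elif i == 0:
                tags[(i, j)] = "left"
            elif i == nx:
                tags[(i, j)] = "right"
    return x, y, tags
```

The same pattern, geometry first, cells second, named boundaries last, carries over to 3D hexahedral blocks.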
500
2. Mesh Pre-treatment

The KIVA-3V package includes a basic grid generator, K3PREP. K3PREP uses block-structured grid technology [8], dividing a complex geometry into many simple regions; each region generates its own grid separately, and the grids are then patched together into a whole. Mesh generation comprises three steps: first, the calculation area is divided into several blocks; second, these blocks are numbered and defined by their structure parameters separately; finally, the computational meshes are generated by patching the blocks together in a certain order. Before being patched together, every piece of the grid has its own independent logical coordinates (i, j, k) and physical coordinates (x, y, z), which correspond to each other. Logical coordinates represent the location of a grid node or grid cell, as well as the logical relationships between them; physical coordinates represent the spatial position of each cell. The 8 nodes of each grid cell are denoted I1, I2, I3, I4, I5, I6, I7, I8. For the node I4 of each grid cell, the arrays I1TAB(I4), I3TAB(I4), I8TAB(I4), IMTAB(I4), JMTAB(I4) and KMTAB(I4) describe the neighboring relationship with the six spatially adjacent grid nodes [9], as shown in Figure 2.
Figure 2. Nodes neighboring relationship.
In the computational area, each grid cell shares one cell face with each adjacent cell, so the three arrays BCL(I4), BCF(I4) and BCB(I4) represent the left, front and bottom cell faces of node I4 respectively. Figure 3 shows the three feature faces. Each grid cell node I4 can thus be determined from the neighboring relationship of the six spatially adjacent grid nodes and the three area vectors.
501
Figure 3. Three feature faces.
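The logical-to-physical indexing and neighbor tables described in this section can be mimicked for a single block. This is an illustrative sketch of the idea only; the array names `ip`, `jp` and `kp` are hypothetical stand-ins and do not reproduce KIVA-3V's actual I1TAB/I3TAB/I8TAB layout:

```python
import numpy as np

def build_neighbor_tables(ni, nj, nk):
    """Neighbor tables for one structured block of ni*nj*nk nodes.

    Nodes carry logical coordinates (i, j, k) flattened to a linear
    index n = i + ni*j + ni*nj*k.  For each node we store the linear
    index of its +i, +j and +k neighbors, analogous in spirit to the
    per-node neighbor arrays KIVA-3V keeps; -1 marks a block boundary
    where patching with another block would supply the neighbor.
    """
    idx = lambda i, j, k: i + ni * j + ni * nj * k
    n = ni * nj * nk
    ip = np.full(n, -1, dtype=int)  # neighbor in +i direction
    jp = np.full(n, -1, dtype=int)  # neighbor in +j direction
    kp = np.full(n, -1, dtype=int)  # neighbor in +k direction
    for k in range(nk):
        for j in range(nj):
            for i in range(ni):
                m = idx(i, j, k)
                if i + 1 < ni:
                    ip[m] = idx(i + 1, j, k)
                if j + 1 < nj:
                    jp[m] = idx(i, j + 1, k)
                if k + 1 < nk:
                    kp[m] = idx(i, j, k + 1)
    return ip, jp, kp
```

Patching blocks together then amounts to overwriting the -1 boundary entries with node indices from the adjacent block.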
3. The composition and structure of KIVA-3V

The standard KIVA-3V package contains three basic parts: the pre-processor, the hydrodynamic code and the post-processor; each part is a relatively independent calculation program. Figure 4 shows the structure of KIVA-3V.
Figure 4. Structure of KIVA-3V.
4. The Mesh Based on the Modified 4JB1 Diesel Engine

This paper chooses the modified 4JB1 diesel engine as the simulation object; the main parameters are shown in Table 1.
502
Table 1. The main parameters of the modified 4JB1 diesel engine

  Number of cylinders      4
  Bore (mm)                93
  Stroke (mm)              102
  Rod length (mm)          168
  Displacement (L)         2.771
  Speed (r/min)            3600
  Torque (N·m)             224
  Compression ratio        18.2:1
  Squish clearance (mm)    0.15
  Swirl ratio              2.4
4.1 Generating the computational mesh

The corresponding data are entered into a file named IPREP. This file contains the basic technical parameters of the engine, the boundary conditions, the information for each logic block, the block-patching commands, etc. Put K3VPREP.exe and the IPREP input file in the same folder and run K3VPREP.exe; a DOS window will display the operation information. If the quality of the grid is fine, meaning that no negative or inverted grid cells exist, it can be used for the subsequent numerical simulation. The 3D computational mesh generated by the K3PREP module spans a central angle of 60 degrees; it contains 45875 grid cells and 46822 nodes. Figure 5 shows the 3D computational mesh over 60 degrees and its planform. Figure 6 shows that the computational grid has three blocks: block 1 is the recess part of the combustion chamber, while blocks 2 and 3 are the space above the top of the piston.
(a) The 3D computational mesh in 60°  (b) The planform of the computational mesh in 60°

Figure 5. The computational mesh based on the modified 4JB1 engine.
503
Figure 6. Block structure.
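The grid-quality criterion of Section 4.1, that no negative or inverted cells exist, can be sketched as a signed-volume test on each hexahedral cell. The five-tetrahedron decomposition and node ordering below are assumptions for illustration, not K3PREP's actual check:

```python
import numpy as np

def hex_signed_volume(p):
    """Signed volume of one hexahedral cell (illustrative sketch).

    p: (8, 3) array of corner coordinates, p[0..3] the bottom face
    counter-clockwise and p[4..7] the top face directly above.  The hex
    is split into five tetrahedra whose signed volumes are summed; a
    negative result flags an inverted ("negative") cell of the kind a
    mesh generator must reject.
    """
    tets = [(0, 1, 3, 4), (1, 2, 3, 6), (1, 4, 5, 6), (3, 4, 6, 7), (1, 3, 4, 6)]
    vol = 0.0
    for a, b, c, d in tets:
        # signed tetrahedron volume: det of the three edge vectors / 6
        vol += np.linalg.det(np.array([p[b] - p[a], p[c] - p[a], p[d] - p[a]])) / 6.0
    return vol

# unit cube: volume close to 1.0; flipping it in z makes the volume negative
cube = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                 [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
print(hex_signed_volume(cube))
```

Scanning all 45875 cells with such a test is one way to confirm the "no negative grid" condition before handing the mesh to the solver.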
5. The analysis of macroscopic parameters in the cylinder

The paper uses the single-injection strategy: fuel injection begins at 10 °CA BTDC and ends at 4 °CA ATDC, lasting 14 °CA. Macroscopic variables such as cylinder pressure, cylinder temperature, NOX and soot emissions are predicted and analyzed, as shown in Figure 7. Figure 7(a) shows that as the piston approaches top dead center there is an inflection point on the average-pressure curve, followed by a rapid increase in cylinder pressure: the charge has passed through the premixed period and combustion has started. The average pressure in the cylinder reaches its maximum of 10.3 MPa at 10 °CA ATDC; then, as the piston moves downward and the gas in the cylinder expands, the cylinder pressure drops. Figure 7(b) shows that the average-temperature curve also has an inflection point near top dead center, followed by a rapid increase in cylinder temperature due to the burning process after the premixed period. The average temperature in the cylinder reaches its maximum of 2190 K at 10 °CA ATDC, and falls as the piston moves downward and the gas expands. Figure 7(c) is the NOX emissions curve: after 5 °CA ATDC the NOX emissions increase rapidly, reaching 97 percent of their final peak by 20 °CA ATDC. After 39 °CA ATDC the piston has descended a considerable distance and the in-cylinder temperature has decreased further, so the NOX emissions remain constant. Figure 7(d) is the soot emissions curve: the soot emissions reach their peak at about 8 °CA ATDC, after which most of the soot is oxidized.
504
(a) The average pressure in the cylinder  (b) The average temperature in the cylinder
(c) NOX emissions curve  (d) Soot emissions curve

Figure 7. Macroscopic parameters in the cylinder.
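Reading the peak value and the rapid-rise point off a curve such as Figure 7(a) can be sketched as follows. The synthetic trace below is illustrative only and not the 4JB1 data:

```python
import numpy as np

def pressure_landmarks(crank_deg, p_mpa):
    """Locate landmarks of a cylinder-pressure trace vs. crank angle.

    Returns (angle_of_peak, peak_pressure, angle_of_steepest_rise).
    The maximum of dp/dtheta marks the rapid pressure rise after the
    premixed ignition delay described in the text; the argmax gives
    the peak pressure and its crank angle.
    """
    crank_deg = np.asarray(crank_deg, dtype=float)
    p_mpa = np.asarray(p_mpa, dtype=float)
    i_peak = int(np.argmax(p_mpa))
    dpdtheta = np.gradient(p_mpa, crank_deg)   # finite-difference slope
    i_rise = int(np.argmax(dpdtheta))
    return crank_deg[i_peak], p_mpa[i_peak], crank_deg[i_rise]

# synthetic, purely illustrative trace: baseline plus a bump near TDC
crank = np.linspace(-30.0, 60.0, 91)                      # deg ATDC
p = 4.0 + 6.0 * np.exp(-((crank - 10.0) / 12.0) ** 2)     # MPa
print(pressure_landmarks(crank, p))
```

Applied to the simulated trace, this kind of post-processing yields the quoted landmarks (peak pressure at 10 °CA ATDC) directly from the sampled data.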
6. Conclusions

In order to accurately simulate the combustion system work process of the 4JB1 diesel engine, the KIVA-3V software is used to simulate the combustion system. This simulation method is not restricted by the test environment and test conditions, and is quicker than traditional test methods in terms of data acquisition. For example, for studying how changes in inlet swirl or in the shape of the combustion chamber affect combustion and performance, simulation calculation has an incomparable advantage. During mesh processing, the K3PREP generator uses block-structured technology to generate the mesh of the 4JB1 combustion chamber. The pre-processor K3PREP is mainly suited to simple or simplified engine geometries, and is powerless for more complex shapes. Through the study of mesh generation technology, we found that:
505
(1) Structured grids are easy to generate; however, because all the grids are generated by the same rules, they are difficult to adapt to complex computational domains. Unstructured grids require specialized generation techniques, and such methods increase the calculation time.
(2) Block-structured grids are divided into several blocks, each following its own rules. Compared with unstructured grids, block-structured grids greatly enhance computational efficiency.
After the analysis of the macroscopic parameters in the cylinder, we found that:
(1) The production of NOX begins in the moderate burning period, quickly reaches a peak and then remains constant; NOX emission is significantly affected by temperature.
(2) The production of soot occurs mainly from the late fast burning period to the moderate burning period. In the post-burning period, as the piston moves downward and the oxygen concentration increases, most of the soot is oxidized.
ACKNOWLEDGMENTS The study was sponsored by the National Science Foundation (51176082), China and The Importation and Development of High-Caliber Talents Project of Beijing Municipal Institutions (CIT&TCD20140311).
References
[1] Qiong JIA, Study on Grid and State Equation in Diesel Engine Working Process Simulation [D], Dalian University of Technology, Dalian, 2012.
[2] Hui CHEN, A Numerical Simulation of Medium-Speed Marine Diesel Engine In-cylinder Using KIVA-3V [D], Jiangsu University of Science and Technology, Zhenjiang, 2011.
[3] Sun SUN, An Investigation to Application of Grid Generation Method in KIVA-II Program, Journal of Wuhan University of Technology, 2002, 26(6).
[4] A.A. Amsden, KIVA-3: A KIVA Program with Block-Structured Mesh for Complex Geometries, Los Alamos National Laboratory report LA-12503-MS, 1993.
[5] A.A. Amsden, KIVA-3V: A Block-Structured KIVA Program for Engines with Vertical or Canted Valves, Los Alamos National Laboratory report LA-13313-MS, 1997.
[6] Hua CHANG, The Research on Algorithm for 3D Hexahedral Mesh in KIVA Code [D], Dalian University of Technology, Dalian, 2012.
[7] Wei ZHANG, A Numerical Simulation Study of Diesel Engine In-cylinder Flow and Combustion Using KIVA-3V [D], Hefei University of Technology, Hefei, 2014.
[8] Rijing DONG, The Numerical Simulation of Soot Formation and Oxidation in the Direct Diesel Engine Based on the KIVA3 [D], Dalian University of Technology, Dalian, 2008.
[9] Jianguo LIU, Investigation on Numerical Simulation of Combustion Process of Diesel Engine Based on KIVA-3V [D], Tianjin University, Tianjin, 2006.
[10] Congfa JIANG, Ronghua HUANG, Xinyong MAO, An Investigation to Improvement of Grid Generation Method in KIVA Program, Journal of Huazhong University of Science and Technology, 2000, 28(2).
[11] Yongfeng LIU, Pucheng PEI, Asymptotic Analysis on Autoignition and Explosion Limits of Hydrogen-Oxygen Mixtures in Homogeneous Systems, International Journal of Hydrogen Energy 31 (2006), 639-647.
[12] Y. LIU, Y. ZHANG, Optimization Research for a High Pressure Common Rail Diesel Engine Based on Simulation, International Journal of Automotive Technology 11(5) (2010), 625-636.
506
Moving Integrated Product Development to Service Clouds in the Global Economy J. Cha et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-440-4-506
A Concurrent Simulation Framework of Power Plant for Online Fuel Analysis

J. Song et al.

[The title page, abstract and keywords of this article are unrecoverable due to character-encoding damage in the source extraction.]
J. Song et al. / A Concurrent Simulation Framework of Power Plant for Online Fuel Analysis
507
[The body of this article (pp. 507-509), including its model equations, fuel-composition table, figures and reference list, is unrecoverable due to character-encoding damage in the source extraction.]